Definitions of AI are various and contested.

Though this module is eager to focus on tangible, practical applications of these technologies, it is worth first outlining some of those definitions, and the dialogue surrounding them, to give you a framework for understanding the rest of the module.

  • Some define AI as the imitation of human learning, comprehension, problem solving, decision making, creativity, and autonomy by machines and computers (Stryker & Kavlakoglu, 2024). But as Sheikh et al. (2023) note, this is limited, as it rules out most contemporary applications, which are focused on narrower, simpler tasks and therefore do not imitate the full breadth of human intelligence.

  • Others define AI by the tasks it can perform in a given environment. For example, the High-Level Expert Group on Artificial Intelligence (AI HLEG, 2018) of the European Commission (EC) considers that AI “refers to systems that display intelligent behaviour by analysing their environment and taking actions”. Some argue that this definition omits internal processes, such as consciousness, reasoning, or emotion, which certain types and applications of AI may need to account for.

  • The UK Government White Paper on AI (2023) defines AI in reference to two functional capabilities which require a regulatory response. The first is ‘adaptivity’: some AI models can perform new forms of inference. The second is ‘autonomy’: some models can make decisions without the explicit intervention of a human. This focus on adaptivity and autonomy, however, may emphasise traits that do not apply to every AI system; a rule-based system, for example, may be neither adaptive nor autonomous.

Central to understanding an AI model as something capable of solving problems, or imitating human problem-solving ability, is a basic grasp of algorithms and machine learning.

In simple terms, an algorithm is a set of rules that defines how to transform an input into an output, much like a recipe. It follows a step-by-step procedure to solve a specific problem: given input X, the algorithm processes it according to its programmed instructions and generates output Y (Prabhu, 2023). For example, in computer programming, the input might be a user clicking a tab in their internet browser. The system follows a series of procedures coded into the algorithm, which then produces the correct output—opening a specific webpage.
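To make the recipe analogy concrete, here is a minimal Python sketch of an algorithm in this sense: a fixed sequence of steps that turns an input (a clicked tab name) into an output (a webpage address). The tab names and URLs are invented purely for illustration.

```python
# A minimal sketch of an algorithm as "input -> fixed steps -> output".
# The tab names and URLs below are purely illustrative.

def open_tab(tab_name: str) -> str:
    """Given a clicked tab (input X), follow fixed rules to return a webpage URL (output Y)."""
    routes = {
        "news": "https://example.com/news",
        "sport": "https://example.com/sport",
        "weather": "https://example.com/weather",
    }
    # Step 1: normalise the input.
    key = tab_name.strip().lower()
    # Step 2: look up the matching page.
    # Step 3: return the output (or a default page if the tab is unknown).
    return routes.get(key, "https://example.com/home")

print(open_tab("News"))   # -> https://example.com/news
print(open_tab("music"))  # -> https://example.com/home
```

However simple, this captures the essential shape of an algorithm: the same defined steps are applied to every input to produce an output.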

This is a simple, working definition, but it can be nuanced. Algorithms are formalised, step-by-step computational procedures (like recipes), and they may be deterministic or probabilistic (randomised) depending on the use case. Deterministic algorithms, such as those used in traditional arithmetic operations or sorting functions, follow a fixed sequence of steps and always produce the same output for a given input. Probabilistic or randomised algorithms, such as many of those used in machine learning and artificial intelligence, introduce elements of randomness, either in how they explore possible solutions or in how they approximate complex patterns, so their output may vary even when given the same input. This distinction is particularly relevant in AI applications, where algorithms must often balance efficiency, accuracy, and adaptability to function effectively in uncertain or dynamic environments.
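The sketch below illustrates this distinction with toy data and only the Python standard library: a deterministic sorting step that always returns the same result, alongside a randomised Monte Carlo estimate of pi whose result varies slightly from run to run.

```python
import random

# Deterministic: the same input always yields the same output.
def sort_scores(scores: list[float]) -> list[float]:
    return sorted(scores)  # fixed sequence of steps, consistent result

# Probabilistic/randomised: the output can vary between runs on the same input.
def estimate_pi(samples: int = 10_000) -> float:
    """Monte Carlo estimate of pi: sample random points in the unit square
    and count how many fall inside the quarter circle."""
    inside = 0
    for _ in range(samples):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4 * inside / samples

print(sort_scores([3.2, 1.5, 2.8]))  # always [1.5, 2.8, 3.2]
print(estimate_pi())                  # e.g. 3.1388 -- varies each run
```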

Much of what people are talking about when they discuss AI is machine learning. Machine learning is a subset of AI in which data and algorithms are used to build systems that imitate the human learning process. Just as we identify patterns in the information presented to us and make predictions based on those patterns, machine learning systems identify patterns in data and use them to make predictions.
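As a rough, hand-rolled illustration of "identify a pattern, then predict", the sketch below fits a straight line to a handful of invented example points (hours studied versus test score) and uses the learned pattern to predict a new value. Real machine learning systems use far more data and dedicated libraries rather than this toy fit, but the underlying idea is the same.

```python
# A minimal sketch of "learn a pattern from data, then predict".
# The data points are made up for illustration only.

def fit_line(xs: list[float], ys: list[float]) -> tuple[float, float]:
    """Learn the pattern y ~ slope * x + intercept from example pairs (least squares)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
            sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return slope, intercept

# "Training data": hours studied vs. test score (illustrative values).
hours = [1.0, 2.0, 3.0, 4.0, 5.0]
scores = [52.0, 58.0, 65.0, 71.0, 78.0]

slope, intercept = fit_line(hours, scores)
predicted = slope * 6.0 + intercept  # predict the score for 6 hours of study
print(f"Predicted score for 6 hours: {predicted:.1f}")
```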

Continue to the next part of the module to discover more about different types of machine learning.