Leaders are expected to make decisions about AI, even when the term is used inconsistently. One reason is simple: “There is no generally accepted definition” of AI, and the variety of definitions can create confusion (Sheikh, Prins and Schrijvers, 2023). If AI is defined too broadly as “algorithms”, it can include everything from calculators to cookbooks, which is not useful (Sheikh, Prins and Schrijvers, 2023). If it is defined too strictly as full imitation of human intelligence, it can imply AI does not exist at present (Sheikh, Prins and Schrijvers, 2023).
This hot topic clarifies AI using explicit definitions from the sources, including Abbass’s “automation of cognition” definition (Abbass, 2021) and the European Commission AI High-Level Expert Group (HLEG) definition adopted by Sheikh, Prins and Schrijvers (2023).
Why Defining AI Is Harder Than It Sounds
Sheikh, Prins and Schrijvers (2023) are clear: “Defining AI is not easy” and there is “no generally accepted definition”. This is not just an academic problem. It affects real decisions. When people say, “we are using artificial intelligence”, they may be referring to very different things.
Definitions can fail at both extremes.
- Too broad. If Artificial Intelligence is defined simply as “algorithms”, the term becomes almost meaningless. Algorithms existed long before Artificial Intelligence and are used in many everyday tools. Sheikh and colleagues note that such a definition would include a pocket calculator or even a cookbook. That does not help leaders make informed decisions.
- Too strict. At the other end, Artificial Intelligence can be defined as computers fully imitating human intelligence. However, Sheikh et al. explain that this risks “defining the phenomenon out of existence”, because many current systems would not meet that standard.
For professionals, this matters because definitions shape scope. Are you discussing a specific tool designed for a defined task, or a broad claim about human-like intelligence? Clarity begins with choosing and stating your definition.
Two Useful Ways to Define AI (And Why Context Matters)
Abbass (2021) proposes guiding definitions to help clarify what should and should not be considered Artificial Intelligence. He offers the following:
- Definition 1: “Artificial Intelligence is the automation of cognition.” (Abbass, 2021).
- Definition 2: AI is “social and cognitive phenomena” enabling machines to integrate socially, perform competitive tasks requiring cognitive processes, and communicate by exchanging high-information messages and shorter representations (Abbass, 2021).
Abbass (2021) also warns that no definition of AI will be error-free, sufficiently universal, or concisely unambiguous. For leaders, the lesson is practical: you can adopt a definition that suits your purpose, but you should acknowledge its limits and avoid pretending it settles everything.
A Practical Definition Used in Policy
After reviewing multiple definitions, Sheikh, Prins and Schrijvers (2023) adopt what they describe as an “open definition”, based on guidance from the European Commission’s expert group on Artificial Intelligence: “Systems that display intelligent behaviour by analysing their environment and taking actions, with some degree of autonomy, to achieve specific goals.”
This definition is helpful because it is neither too broad nor too strict. It distinguishes Artificial Intelligence from general digital technology, while remaining flexible enough to include future developments.
However, the authors also explain that even this definition has limits. Phrases such as “some degree of autonomy” can be vague. They also show that some task-based definitions could technically apply to a thermostat, even though most people would not consider a thermostat to be Artificial Intelligence.
The message is not that definitions fail, but that they require careful use.
What People Often Mean When They Say “Artificial Intelligence”
Duuren and Pous (2020) observe that much of the recent progress in Artificial Intelligence relates to systems that learn patterns from data rather than simply following fixed rules; these systems are often described as “self-learning algorithms” that can recognise patterns in data. They also note that many people who talk about Artificial Intelligence today are referring specifically to these pattern-learning systems.
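The distinction between fixed rules and pattern learning can be made concrete with a toy sketch. This example is illustrative only and is not drawn from the cited sources: the scenario (flagging large transactions), the function names, and all numbers are invented, and the “learning” step is deliberately simplistic (a threshold taken as the midpoint of two averages in historical data).

```python
# Fixed rule: the threshold is chosen by a human and never changes.
def rule_based_flag(transaction_amount: float) -> bool:
    return transaction_amount > 1000  # hand-written rule


# Pattern learning (toy version): the threshold is estimated from
# labelled historical examples instead of being hand-coded. Here it is
# simply the midpoint between the average normal amount and the average
# previously-flagged amount.
def learn_threshold(history: list[tuple[float, bool]]) -> float:
    normal = [amt for amt, flagged in history if not flagged]
    suspect = [amt for amt, flagged in history if flagged]
    return (sum(normal) / len(normal) + sum(suspect) / len(suspect)) / 2


# Invented historical data: (amount, was it flagged?)
history = [(200.0, False), (400.0, False), (3000.0, True), (5000.0, True)]
threshold = learn_threshold(history)  # derived from data, not hand-coded


def learned_flag(transaction_amount: float) -> bool:
    return transaction_amount > threshold
```

The point for leaders is not the arithmetic but the difference in governance: the fixed rule can be read and audited directly, whereas the learned threshold depends on the quality and representativeness of the historical data it was estimated from.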
Jaboob, Durrah and Chakir (2024) describe Artificial Intelligence as an interdisciplinary field that combines computer science, mathematics, and cognitive psychology. They highlight areas such as systems that learn from data, systems that analyse and work with human language, systems that process images, and robotics. They also emphasise that Artificial Intelligence raises challenges, including bias and ethical concerns.
For professionals, this provides two grounding questions:
- What specific type of system are we using?
- What limitations or risks must we consider?
Narrow Artificial Intelligence and the Idea of General Intelligence
Sheikh, Prins and Schrijvers (2023) make an important distinction. Most current applications fall under what is often called “narrow” or “weak” Artificial Intelligence. These systems focus on specific capabilities, such as recognising images or processing speech.
This is contrasted with “artificial general intelligence”, which would involve understanding and simulating the full range of human intellectual skills. They note that most experts believe this is at least several decades away, if it is achieved at all.
For professionals, this distinction reduces noise. Most decisions today relate to focused systems designed for specific tasks, not machines that replicate human intelligence.
In your next Artificial Intelligence discussion, begin with: “When we say Artificial Intelligence, we mean…”. Agree on a clear working definition and use it consistently. A practical option is the definition used by Sheikh, Prins and Schrijvers (2023): systems that analyse their environment and take actions, with some degree of autonomy, to achieve specific goals. Ask teams to describe the specific type of system involved in plain language, rather than relying on the term Artificial Intelligence alone.
AI Clarity Framework: From Understanding to Informed Action
Artificial Intelligence is hard to define, and different definitions can create confusion (Sheikh, Prins and Schrijvers, 2023). Some definitions are too broad; others too strict. This task helps you turn “AI” from a buzzword into something concrete. In ten minutes, you will define what you mean, describe what the system actually does, and identify risks and assumptions before making decisions.
Use this task as a structured learning conversation whenever Artificial Intelligence is introduced into a discussion. Work through each step collaboratively and treat areas of uncertainty as opportunities to deepen understanding rather than obstacles to progress. If a question cannot be answered clearly, identify what needs to be explored further. The purpose is to build collective capability and informed judgement so that decisions about AI are thoughtful, responsible and aligned with organisational goals.
| Step | Focus | What You Ask | What Good Looks Like |
| --- | --- | --- | --- |
| Establish Understanding | Definition | What do we mean by Artificial Intelligence in this discussion? | A single, agreed working definition is stated clearly and used consistently. |
| Define the Purpose | Task clarity | What specific problem or goal is this system designed to address? | The system is described in plain language without relying on the term “AI” as a shortcut. |
| Set Realistic Expectations | Scope | Is this focused on a specific capability, or is it attempting broader human-like intelligence? | The group recognises that most systems focus on defined tasks and adjusts expectations accordingly. |
| Clarify How It Works | Mechanism | Does it follow fixed rules, or does it learn patterns from data? | The basic operating logic is explained clearly enough for non-technical stakeholders to understand. |
| Inform the Decision | Risk and responsibility | What limitations, risks or ethical considerations should shape our decision? | Bias, constraints and governance implications are acknowledged before approval or implementation. |