AI is often presented as a fast route to better decisions, smarter work and efficiency. The evidence is more cautious. Organisations may invest heavily but still report limited business gains, partly because implementation needs more than technology alone (Reim et al., 2020). AI can support knowledge management by speeding up information collection and interpretation, but it struggles with tacit knowledge and can amplify problems in decision-making rather than reduce them (Trunk et al., 2020). This means responsibility does not disappear when AI is introduced. It shifts. Leaders and teams need transparency about how outputs are produced, literacy to choose appropriate applications, and training to interpret results responsibly. Cultural alignment also matters, because AI changes work practices and can trigger resistance and ethical concerns.
What AI Is Used for in Organisations
AI is often discussed as a way to automate decision-making through human-like reasoning, and as a catalyst for business model innovation (Reim et al., 2020). In organisational decision contexts, research describes AI as particularly useful in knowledge management tasks such as the collection, interpretation, evaluation, and sharing of information, improving the speed, volume, diversity, and availability of data (Trunk et al., 2020). In practice, this positions AI as a support tool for the input and processing stages of organisational decision-making, particularly the collection and interpretation of information where human processing capacity is limited (Trunk et al., 2020).
AI can also be used as decision support by producing insights from large and complex data sets that are compressed into a manageable scale (Reim et al., 2020). This can help teams handle complexity, but it does not remove the need for human judgement, particularly in strategic decisions made under uncertainty (Trunk et al., 2020).
Where Human Judgement Still Matters
For strategic organisational decisions, Trunk et al. (2020) define decision-making as group decision-making under uncertainty and emphasise that while AI may support the process, the human decision group must take the final decision. One reason is that human decision-makers draw on both explicit and tacit knowledge, whereas AI is limited by its lack of access to tacit knowledge and its reliance on historical data (Trunk et al., 2020). This matters because many real decisions rely on experience, context, and stakeholder negotiation, not only on structured data.
The evidence also warns against assuming AI automatically improves decision quality. Trunk et al. (2020) argue that AI can amplify problems inherent in the decision-making process rather than reduce them. AI outputs can be hard to cross-examine, and it may be unclear how the system arrives at a given output (Trunk et al., 2020). This creates a practical requirement for people to translate and interpret results, not simply accept them.
Transparency and Trust Are Not Optional
A central implementation issue is transparency, sometimes discussed as the “black-box” problem, where traceability is impaired because AI can combine multiple technologies at different levels of abstraction (Reim et al., 2020). Reim et al. (2020) link this directly to trust, arguing that people are less likely to trust an AI application if they do not understand how it operates. Trunk et al. (2020) similarly highlight the importance of transparency and of understanding the data flow behind decisions, and argue that training and continuous experience increase trust and effectiveness.
A practical implication follows. If people cannot understand, question, or explain how an AI-supported conclusion was produced, they will struggle to use it responsibly, especially when decisions affect stakeholders.
Data Readiness and Digital Processes Shape Outcomes
Reim, Åström and Eriksson (2020) state that digital processes can be a prerequisite for implementing AI because data acquisition mechanisms are crucial and AI algorithms require large amounts of high-quality data. They also describe the “garbage in–garbage out” phenomenon, where insufficient data sets negatively impact output (Reim et al., 2020). This aligns with Trunk, Birkel and Hartmann’s (2020) emphasis that AI’s usefulness depends on the goal it is used for and the data available to it, and that decision quality depends on the application, resources, input, and the human ability to interpret results.
AI Changes Roles, Skills, and Culture
Based on the evidence, AI changes the human role rather than removing it. Trunk et al. (2020) describe humans becoming supervisors and, more specifically, translators and interpreters of AI results, with increased responsibility and different skill needs. They also argue that education is necessary because the capabilities needed to use AI differ from those needed for traditional machines (Trunk et al., 2020).
Murire (2024) frames AI as reshaping organisational work practices and influencing culture, including changes linked to automation, decision-making, and employee roles, alongside challenges such as resistance to change and ethical concerns. Murire (2024) also identifies leadership, transparent communication, and investments in skills development as pivotal strategies for overcoming obstacles to successful AI implementation.
Ethics and Accountability: Progress in Principle, Challenges in Practice
Research on AI governance finds that most frameworks agree on the same core principles: fairness, transparency, accountability, explainability and sustainability. However, it also shows that these principles are put into practice very differently across countries and sectors. For the UK context, this matters because the EU AI Act is described as a more binding, risk-based approach with central oversight while other approaches rely more on high-level ethical guidance without statutory implementation. Overall, the message is clear: we know what responsible AI governance looks like, but its consistent application and enforcement remain uneven, and common risks such as algorithmic bias and fragmented regulation continue to surface (Ismail & Ahmad, 2025).
Make AI a thinking partner, not the final decision-maker. Before acting on any AI output, pause and run a five-minute ‘Decision Support Check’ as a team. Agree on the decision you are trying to improve, identify what data the system relied on and what it may have missed, test whether you can clearly explain and justify the recommendation, and define where professional judgement, experience and stakeholder insight must shape the final call. The outcome should be clear: a documented decision that combines AI insight with accountable human reasoning.
AI Decision Readiness and Responsibility Test
AI can support information processing and decision-making, but it can also amplify bias, reduce transparency, and increase human responsibility. This activity helps you decide whether AI should be used in a specific workplace decision and whether your organisation is ready to apply it responsibly, transparently, and effectively.
Select one real workplace decision or process where AI could be introduced. Complete the table honestly and in writing. At the end, you must produce a clear conclusion: proceed, pilot cautiously, or pause and strengthen conditions first. The aim is not reflection alone. It is a defensible leadership judgement grounded in research.
| Key Attribute | Know It (What You Understand) | Check It (What You Verify) | Use It (What You Will Do Next) |
| --- | --- | --- | --- |
| Decision Clarity | What decision are we making? | Is this a high-risk or complex decision where people must stay accountable? | If yes, use AI for insight only. Final call stays with the team. |
| Data Quality | What information is the AI using? | Is the data accurate, complete and up to date? | If weak, improve data before relying on AI output. |
| Human Insight | Where does experience or context matter? | What might the system not understand about people, culture or nuance? | Add structured discussion before finalising the decision. |
| Explainability | Can we clearly explain the AI’s recommendation? | Would we feel confident defending this decision publicly? | Do not act on outputs you cannot explain. |
| Human Accountability | Who owns the final decision? | Is that responsibility clearly agreed upon? | Record who decided and why. |
| Understanding the Tool | Do we understand what this tool can and cannot do? | Has the team had enough training? | Provide targeted training before scaling use. |
| Team Readiness | How might this affect roles or confidence? | Are people anxious or resistant? | Communicate purpose and invite feedback before implementation. |
| Fairness and Risk | Could this decision disadvantage certain groups? | Could bias in the data affect the outcome? | Pause and review risks before proceeding. |
| Process Fit | Does this process actually need redesign? | Are we trying to fix a weak process with a new tool? | Improve the process before adding AI. |
| Final Judgement | Have we reviewed all the above? | Does this decision still feel sound after discussion? | Decide: Proceed, Pilot carefully, or Pause. Record your reasoning. |
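For teams that want to log the outcome of this activity, the table can be operationalised as a simple recording aid. The sketch below is one illustrative way to do this, assuming a three-band verdict rule (no unresolved attributes → proceed; one or two → pilot; more → pause) that is our own simplification, not a threshold drawn from the cited research.

```python
# Minimal sketch of the AI Decision Readiness and Responsibility Test.
# Attribute names mirror the table above; the verdict thresholds are
# illustrative assumptions, not drawn from the cited sources.

ATTRIBUTES = [
    "Decision Clarity", "Data Quality", "Human Insight", "Explainability",
    "Human Accountability", "Understanding the Tool", "Team Readiness",
    "Fairness and Risk", "Process Fit",
]

def readiness_verdict(checks: dict) -> str:
    """Map per-attribute check results (True = condition satisfied)
    to one of the three outcomes named in the activity."""
    concerns = [name for name in ATTRIBUTES if not checks.get(name, False)]
    if not concerns:
        return "Proceed"
    if len(concerns) <= 2:
        return "Pilot carefully (address: " + ", ".join(concerns) + ")"
    return "Pause and strengthen conditions (address: " + ", ".join(concerns) + ")"

# Example: strong data and accountability, but explainability and
# fairness still unresolved, so the team pilots rather than proceeds.
example = {name: True for name in ATTRIBUTES}
example["Explainability"] = False
example["Fairness and Risk"] = False
print(readiness_verdict(example))
# → Pilot carefully (address: Explainability, Fairness and Risk)
```

The point of the sketch is the documented trail: each run forces the team to state which attributes are unresolved and records the reasoning behind the final call, which is the accountability the activity asks for.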