Artificial intelligence is increasingly shaping organisational work practices and culture. Research describes AI as a transformative force that is reshaping traditional work practices, automating routine tasks, enhancing decision-making, and driving innovation (Murire, 2024). These developments can improve efficiency, productivity, and the ability to focus on more value-added activities (Murire, 2024).
However, embedding AI into ways of working is not just a technical task. Evidence suggests that AI adoption is also a behavioural and change-management challenge, and many initiatives focus too heavily on the systems themselves while assuming that people will simply adapt (Schweitzer et al., 2026). This can create resistance, uncertainty, and cultural misalignment if organisations do not manage the human side of change carefully (Murire, 2024).
The research therefore points to a broader view of AI adoption: one that combines technology with leadership, communication, skills development, trust, and human involvement throughout implementation (Murire, 2024; Schweitzer et al., 2026; Xin et al., 2021).
Embedding AI into organisational ways of working means more than introducing new tools. It involves changing how work is carried out, how decisions are supported, how employees see their roles, and how organisational culture responds. Research shows that AI is influencing work practices and triggering cultural shifts across organisations (Murire, 2024). It is being used to automate routine tasks, enhance decision-making processes, and drive innovation, while also changing workflows and employee expectations (Murire, 2024).
One of the clearest opportunities is greater efficiency. AI-driven automation can streamline processes, reduce manual errors, and allow employees to focus on more value-added activities. This means that embedding AI successfully is closely linked to how organisations redesign work. If AI is treated only as a technical add-on, its effect on everyday work may remain limited. If it is built into workflows, resource allocation, and decision processes, it can shape how work actually happens far more directly.
At the same time, the evidence makes clear that AI adoption is not simply an engineering exercise. Schweitzer et al. (2026) argue that leading AI adoption is a behavioural exercise that requires change-management principles. Their work warns that many business initiatives focus predominantly on the AI systems themselves, assuming humans will fall in line. This matters because AI affects people at every stage, from design through adoption to ongoing management (Schweitzer et al., 2026). In practice, organisations need to think carefully about how employees will understand, accept, and work with AI rather than assuming the technology alone will deliver change.
Murire (2024) reinforces this point by showing that AI integration can lead to resistance, particularly when employees fear job displacement, insecurity, or changing responsibilities. Ethical concerns can also complicate adoption, especially concerning privacy, transparency, and algorithmic bias (Murire, 2024). These issues are not separate from ways of working. They shape whether employees trust AI, whether cultural norms support its use, and whether organisations can align AI initiatives with broader values and goals.
Leadership, therefore, becomes central. Murire (2024) identifies effective leadership, transparent communication, and investments in skills development as pivotal strategies for successful implementation. Leaders are expected to articulate why AI is being introduced, how it fits organisational goals, and how employees will be supported through change (Sarioguz and Miser, 2024). Without this, AI may be seen as a threat rather than an opportunity. Transparent communication helps address uncertainty, while skills development helps close the gaps that often prevent adoption.
Skills and talent matter because AI changes the competencies organisations need. Murire (2024) highlights the importance of data science, machine learning, and AI-related expertise, but also points to the need for wider upskilling and reskilling. This suggests that embedding AI into ways of working is also a learning challenge. Organisations need not only technical specialists, but also employees who can work confidently alongside AI-enabled systems and adapt to new ways of operating.
The research by Xin et al. (2021) adds an important practical insight about automation. Although their study focuses on AutoML, its findings are highly relevant to embedding AI into work more broadly. They found that practitioners do not use automated tools as “push-button, one-shot solutions” and that humans remain valuable contributors, mentors, and supervisors who improve efficiency, effectiveness, and safety (Xin et al., 2021). They argue that the goal should not be to completely remove the user from the process, but to build human-compatible tools that create trust, understanding, and a sense of agency.
This is especially useful when thinking about everyday organisational work. It suggests that embedded AI works best when people stay meaningfully involved. Trust does not come from automation alone. Xin et al. (2021) note that transparency by itself does not produce trust and understanding, and that humans need a sense of agency before they trust the outcomes these tools produce. Organisations should therefore not assume that more automation automatically produces better adoption. Human control and automation need to be balanced in ways that match real work practices.
End-to-end integration also matters. Xin et al. (2021) argue that a solution that handles all stages of the workflow in a single environment is the optimal design choice, while also showing that many current tools automate only one stage and leave users to do significant manual work elsewhere. This points to a broader lesson: embedding AI is stronger when it connects with the full flow of work rather than solving one isolated task.
Overall, the evidence suggests that embedding AI into organisational ways of working requires a combined focus on workflow, culture, behaviour, leadership, and human involvement. AI can support efficiency, productivity, and innovation, but lasting adoption depends on whether organisations align it with how people actually work, learn, and adapt (Murire, 2024; Schweitzer et al., 2026; Xin et al., 2021).
Choose one important workflow and ask three questions: what is being automated, how are people expected to work differently, and what support is in place to help them adapt? Then, review whether communication, leadership, and skills development are strong enough to support the change. AI is more likely to embed successfully when it is aligned with culture, supported by people-focused implementation, and designed to preserve trust, understanding, and human agency (Murire, 2024; Schweitzer et al., 2026; Xin et al., 2021).
AI Governance in Practice: Risk, Responsibility and Leadership Judgement
AI governance is not defined by policies alone, but by how risk, responsibility, and judgement are applied in everyday decisions. Research highlights that effective AI governance requires clear accountability, transparency, ethical oversight, and alignment with organisational processes (De Almeida et al., 2021; Manda et al., 2025; Mahmood, 2026). This checklist focuses on what is practically in place, helping to assess whether governance is active, visible, and influencing how AI is used in real organisational contexts.
Use this checklist to review current AI use across your organisation or team. For each area, identify whether clear evidence exists. Where evidence is limited or inconsistent, this signals a governance gap rather than a technical issue. Prioritise strengthening clarity, accountability, and oversight before expanding AI use. Effective governance is demonstrated through consistent decision-making, not just documented intentions.
| Focus Area | Key Questions | What Evidence Do You Have? | What To Do Next |
|---|---|---|---|
| Clarity of Use | What is AI being used for, and is its purpose clearly defined? | Documented use cases? Clear intended outcomes? | Clarify and narrow use before expanding |
| Risk Awareness | What risks could arise from this AI use (e.g. bias, errors, unintended outcomes)? | Risk assessments? Identified scenarios? | Identify and document key risks early |
| Decision Ownership | Who is accountable for decisions supported or influenced by AI? | Named decision owners? Clear accountability structures? | Define ownership and escalation routes |
| Human Oversight | Where do people review, challenge, or override AI outputs? | Defined checkpoints? Evidence of human intervention? | Build in clear oversight and review stages |
| Transparency | Can AI-supported decisions be explained to those affected? | Clear explanations? Supporting documentation? | Improve clarity and communication |
| Workflow Integration | Is AI embedded into processes, or used separately from them? | AI reflected in workflows? Process documentation? | Integrate AI into standard ways of working |
| Consistency of Use | Is AI applied consistently across teams and activities? | Shared guidance? Variations in practice? | Standardise approaches where needed |
| Capability and Skills | Do people understand how to use AI appropriately and responsibly? | Training provided? Evidence of confidence or gaps? | Strengthen capability and practical understanding |
| Monitoring and Review | How is AI performance reviewed over time? | Regular reviews? Identified issues or improvements? | Introduce ongoing monitoring and feedback loops |
| Governance in Practice | Are governance principles actively shaping decisions? | Evidence of challenge, adaptation, or intervention? | Move from policy to consistent application |