Reassessing Current Approaches to AI
A growing body of evidence shows that using AI is not the same as using it responsibly. Without the right checks in place, AI systems can reproduce patterns of inequality, particularly when trained on historical datasets that encode existing biases (Cowgill, 2019). Acker’s (2006) theory of inequality regimes illustrates how organisations routinely reproduce hierarchies through everyday norms and routines; AI, when deployed uncritically, may reinforce these dynamics by replicating patterns hidden within data.
Furthermore, many organisations assume AI is naturally objective. However, research in explainable AI highlights that transparency and intelligibility are essential for meaningful interpretation and challenge (Miller, 2019). When employees do not understand how algorithmic decisions are made or cannot question them, perceptions of fairness and trust decline (Kellogg et al., 2020). These risks raise a practical question for organisations: who is accountable for how AI decisions affect fairness, trust, and everyday experience at work?
HR as the AI Governance Nerve-Centre
Researchers increasingly see HR as central to responsible AI use. HR teams shape hiring, performance, learning and reward systems - all areas in which AI use is rapidly expanding (Gal, Jensen & Stein, 2020). However, studies show many HR professionals have limited understanding of how AI systems work in practice, making it difficult for them to evaluate data quality, assess potential bias or challenge design decisions made by technical teams (Tambe, Cappelli & Yakubovich, 2019).
To address this, research recommends HR take a leading role in shaping how AI is governed across the organisation. This begins with frameworks that clarify the purpose of AI systems, defining decision rights and acceptable use in line with international guidance for trustworthy workplace AI (Del Pero and Verhagen, 2023). HR should also help establish cross-functional governance boards that bring together HR, legal, data science and operational leaders to oversee implementation. Equally important is HR's responsibility to protect employee voice. As AI becomes embedded in workplace decisions, employees need mechanisms that allow them to question or contest AI-driven outcomes without fear of negative consequences (Kellogg et al., 2020). Finally, sustained capability building is required, particularly in areas such as fairness, explainability, model drift and risk assessment, so that leaders can properly assess AI use rather than relying on surface-level assurance. Taken together, these responsibilities mark a shift in HR's role: from passive user of AI tools to active architect of ethical workforce design.
Leadership Responsibilities in Human–AI Teams
As organisations begin using AI to support, not just automate, decision-making, the nature of leadership responsibility shifts. This shift is not only technical but behavioural and ethical, reflecting how AI systems interact with people in practice. Research on machine behaviour shows that AI systems interact with human cognition and social context in complex ways, redistributing moral and interpretive work (Rahwan et al., 2019). For leaders, this has practical consequences. It means being able to make sense of what AI is suggesting, communicate those suggestions clearly to others and take responsibility for decisions that cannot be left to systems or data alone.
Three areas are especially critical:
Fairness. Algorithms learn from historical data and can reproduce existing inequities. Leaders must interrogate underlying assumptions by asking who benefits, who is disadvantaged, and which patterns are being reinforced when outputs are used in practice (Cowgill, 2019). Organisational justice research shows that perceptions of fairness strongly influence trust and legitimacy (Colquitt et al., 2013).
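To make the fairness question concrete, the sketch below runs a simple disparate-impact check on entirely hypothetical screening outcomes. The group labels, the data and the 0.8 threshold (the common "four-fifths rule") are illustrative assumptions, not drawn from the studies cited above.

```python
# Minimal disparate-impact check on hypothetical AI screening outcomes.
# Groups, data and the four-fifths threshold are illustrative only.
from collections import defaultdict

outcomes = [  # (group, selected) pairs from a hypothetical screen
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

totals, selected = defaultdict(int), defaultdict(int)
for group, picked in outcomes:
    totals[group] += 1
    selected[group] += picked  # True counts as 1

# Selection rate per group, then the ratio of worst to best rate.
rates = {g: selected[g] / totals[g] for g in totals}
ratio = min(rates.values()) / max(rates.values())
print(rates)
print(round(ratio, 2))  # below ~0.8 would warrant closer review
```

Even a check this simple gives leaders a concrete starting point for asking "who benefits and who is disadvantaged" with numbers rather than impressions.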
Explainability. Employees require intelligible explanations of how an AI system reached a conclusion, especially where decisions affect opportunities, workload or performance. Explainability enables challenge and informed consent (Miller, 2019). In practice, this requires leaders to ensure that AI-informed decisions can be explained in plain language, that the main factors influencing an outcome are documented, and that individuals are told what influenced the decision and how it can be questioned.
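One way to operationalise this is a plain-language decision record. The sketch below assumes an organisation wants every AI-informed decision logged with its main factors and a route to question it; the field names and example content are hypothetical, not a standard.

```python
# Sketch of a plain-language decision record. Field names and the
# example decision are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class DecisionRecord:
    decision: str
    main_factors: list      # factors that most influenced the outcome
    contest_route: str      # how the individual can question the decision

    def plain_language(self) -> str:
        factors = "; ".join(self.main_factors)
        return (f"Outcome: {self.decision}. "
                f"Main factors: {factors}. "
                f"To question this decision: {self.contest_route}.")

record = DecisionRecord(
    decision="Shortlisted for interview",
    main_factors=["relevant project experience", "skills-test score"],
    contest_route="contact the HR review board within 10 working days",
)
print(record.plain_language())
```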
Contestability. Without formal rights to question and override AI recommendations, decisions risk becoming unchallengeable. Contestability mechanisms, including human-in-the-loop checkpoints and documented escalation routes, are essential for protecting autonomy and ensuring accountability (Jarrahi et al., 2021).
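A human-in-the-loop checkpoint of the kind described above can be sketched very simply: recommendations the system is confident about proceed, the rest are routed to human review, and everything is logged so there is a documented escalation trail. The threshold and example recommendations are assumptions for illustration.

```python
# Illustrative human-in-the-loop checkpoint with a documented
# escalation route. Threshold and examples are assumptions.
def route(recommendation: str, confidence: float,
          review_queue: list, audit_log: list,
          threshold: float = 0.9) -> str:
    if confidence >= threshold:
        decision = f"auto: {recommendation}"
    else:
        review_queue.append(recommendation)  # escalate to a human
        decision = "pending human review"
    audit_log.append((recommendation, confidence, decision))  # audit trail
    return decision

queue, log = [], []
print(route("approve flexible-working request", 0.95, queue, log))
print(route("flag performance concern", 0.60, queue, log))
```

The design point is that contestability is structural: the review queue and audit log exist regardless of how confident the system is, so a decision can always be traced and challenged.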
These responsibilities emphasise that leadership authority does not diminish with AI adoption. Instead, it expands into new areas that require ethical fluency and systems thinking.
Building Future Governance Capacity
The evolution of AI in workplace settings requires organisations to develop broader governance capabilities. Research highlights the need for algorithmic literacy, ethical decision-making, cross-functional coordination and continuous monitoring to detect drift (when models change over time and become less accurate or fair) and unintended consequences (impacts the system produces that were not planned or expected) (Cummings, 2010). Further research similarly stresses embedding reflective, ongoing capability development within organisational routines (Bondarouk, Parry & Furtmueller, 2017). Taken together, these insights suggest that responsible AI is integral to organisational resilience. Sustainable adoption demands attention not only to strategy and performance, but to fairness, voice, meaning and structural design.
Final Thought
AI is reshaping work in profound ways, but its long-term value depends on how thoughtfully it is governed. Productivity gains are only part of the story. Organisations that prioritise transparency, employee voice, ethical reasoning and deliberate system design will be best positioned to lead responsibly, ensuring AI strengthens both performance and the employee experience.
Action Point
Select one AI-enabled process in your organisation and review it through a governance lens. Identify where transparency is unclear, whose voice is absent and where bias or unintended consequences may arise. Engage employees in testing assumptions and introduce one improvement - such as a clearer explanation, a challenge-rights mechanism or a human review step. Responsible AI begins with small, intentional shifts in how decisions are designed and overseen.