The pattern is familiar enough by now. An organisation invests in an AI use case. The pilot works. Results are promising. Leadership signals confidence. Then the scale-up stalls.
Timelines stretch. Adoption is patchy. The productivity gains that looked so clear on paper fail to materialise at the expected rate. A technology problem is assumed, and the search for a better tool begins.
Seasoned executive Athalie Williams has watched this play out across organisations of considerable scale and sophistication. Her diagnosis is consistent.
“At that point it is rarely a technology problem,” she says. “It is a human one.”
Williams is an international executive and former Chief HR Officer of BT Group (British Telecommunications) and Chief People Officer of BHP, with more than three decades of experience leading large-scale change across some of the world’s most complex organisations.
Her perspective on AI transformation is grounded not in the technology itself, but in what she has observed about how organisations succeed or fail in translating investment into lasting performance.
The Groundwork That Gets Skipped
The gap she identifies is not about capability or budget. It is about sequence. Organisations that struggle to scale AI typically have not done the foundational work with their people before the technology arrives.
“Many have not done the groundwork with their workforce to bring people along, build understanding and create the buy-in needed to evolve the design of work, processes and behaviours,” she says. “Without that, the conditions for success simply are not there.”
This is not a soft observation. It has direct consequences for return on investment, delivery timelines and the credibility of transformation programmes at the board level.
When people do not understand what is changing, why it is changing or what it means for their role, they do not resist in dramatic ways. They do not adopt.
Workarounds persist. Old habits hold. The technology sits atop unchanged behaviours and yields underwhelming results.
The investment case presented to the board assumed a different outcome. That gap between expectation and reality is rarely examined honestly.
A Three-Legged Stool
She uses a simple framework to describe what balanced AI transformation actually requires: a three-legged stool made up of productivity, quality and humanness.
Each leg is necessary. Remove any one of them, and the whole thing becomes unstable.
“In practice, you can feel when one leg is missing,” she says. “Productivity, quality and humanness all need to be in balance for transformation to stick.”
Productivity and quality are the legs most organisations focus on. They are measurable, reportable and relatively straightforward to define.
Humanness is harder to quantify, which is perhaps why it tends to receive less disciplined attention. But she is clear that it is not a secondary concern.
Humanness in this context is not about sentiment or wellbeing programmes. It is about the practical conditions that allow people to engage with change: understanding what is being asked of them, trust in the direction being set, and genuine involvement in shaping how their work evolves.
When those conditions are present, adoption accelerates. When they are absent, even capable technology delivers less than it should.
What Leaders and Boards Should Be Asking
The implication for executive teams is practical. Before scaling any material AI investment, the question is not only whether the technology is ready. It is whether the organisation is.
“Reskilling and upskilling are essential, but they need to be grounded in clarity about the capabilities the organisation truly needs,” she says.
“People need transparency about what is changing and why. They also need clear pathways and support to move into new roles.”
That transparency extends to honest conversations about the scale of change ahead.
At BT Group, she was part of an executive team that chose to be direct with its workforce and union partners about the workforce shifts the organisation’s transformation would require over the coming years. It was not a comfortable conversation.
However, it created the conditions for a more honest and productive dialogue than a more cautious approach would have allowed.
For boards, the question is whether workforce strategy, AI readiness and culture are being treated with the same rigour as technology investment.
People risk is business risk. Organisations that scale AI successfully tend to be the ones whose boards are asking for both the technology roadmap and the human architecture plan alongside it.
When It Works
She points to a clear signal of what good looks like. It is not the absence of disruption or resistance. It is whether the organisation has been deliberate about bringing people into the change, rather than announcing it at them.
“Where AI genuinely helps is when it removes friction,” she says.
“You can feel the difference in teams when the noise drops and people can focus on the work that actually matters. When people understand the change and feel part of it, well-being improves.”
That experience, a team that has been brought along and can feel the difference the change is making, is the outcome that justifies the investment. It does not happen by accident. It is the result of leaders who understood from the outset that the technology was only one leg of the stool.
“AI will not create balance on its own. Leadership choices will,” Williams concludes.
For organisations still wondering why their AI transformation has not delivered what the pilot promised, that may be the most useful place to start.
