While 90 percent believe employees are using artificial intelligence in their organisation, only 22 percent say AI return on investment (ROI) has met or exceeded their expectations, according to ISACA’s 2026 AI Pulse Poll.
The ability to respond to AI-related incidents remains uneven, with more than half (56 percent) unsure how long it would take to halt an AI system due to a security incident, while 39 percent do not know whether their organisation has a documented process for shutting down or overriding AI systems if things go wrong.
Half of Oceania respondents say boards and executive leadership are ultimately accountable if AI systems cause harm or serious error in their organisation.
Examining trends in AI use, policies and standards, workforce impact, incident response readiness and security across the digital trust profession, the global study surveyed more than 3,400 professionals across IT audit, governance, cybersecurity, privacy and emerging technology roles.
While AI policies are becoming more commonplace, only 38 percent of organisations have a formal, comprehensive AI policy – up from 28 percent in 2025. Thirty percent say they have a limited policy in place, and a quarter (25 percent) have no active policy.
There also appears to be some uncertainty around the ROI for AI:
- 23 percent believe it is too early to assess ROI
- 22 percent say they do not know the ROI
- 20 percent cite limited ROI so far
- Only 22 percent indicate AI ROI has met or exceeded their expectations.
Jamie Norton, Vice Chair, ISACA Board, said the findings show AI is no longer sitting solely with IT teams, but has become a governance and leadership issue.
“What we’re seeing now is a shift from experimentation to accountability,” said Mr Norton.
“Organisations are moving quickly to embed AI into operations, but many are still developing the policies, governance structures and skills needed to ensure those systems deliver long-term value safely and responsibly.
“The research also shows AI-related risk is now an immediate priority for organisations across Oceania, even as many are still seeing only limited ROI from AI initiatives.
“It highlights the growing pressure on leaders to balance innovation with governance, oversight and measurable business outcomes,” he said.
Increased use, demand for AI skills
The poll found that AI use has become expected and is embedded across the enterprise. Respondents indicate they are leveraging it most for:
- Increasing productivity (62 percent)
- Creating written content (62 percent)
- Automating repetitive tasks (50 percent)
- Analysing large amounts of data (49 percent)
Most respondents note that AI literacy is vital, with 78 percent saying AI skills are very or extremely important to their profession, up from 72 percent last year. This high demand for AI skills on the job is also reflected by 33 percent saying that their organisations train all employees on AI, up from 22 percent in 2025.
And while 36 percent say their organisation will increase AI-related jobs in the next 12 months, up from 31 percent in 2025, workloads do not appear to be decreasing due to AI. Nearly seven in 10 say job responsibilities have increased or have not changed in the last year.
Areas for improvement
Respondents are also concerned about AI risk, with 45 percent noting that AI risks are an immediate priority, and 38 percent saying they are confident in their board’s understanding of and action against AI risks. Respondents’ most-cited AI risks include:
- Misinformation and disinformation (82 percent)
- Privacy violations (74 percent)
- Social engineering (60 percent)
- Loss of intellectual property (58 percent)
- Job displacement (42 percent)
Additionally, detection capability is improving, but trust remains fragile. Forty-one percent say they are confident in their own ability to detect AI-powered misinformation, up from 30 percent in 2025. Meanwhile, only 36 percent are confident in their organisation’s ability to detect AI-powered misinformation.
Beyond practical, workplace implications, there are also bigger-picture, societal questions to consider. Seventy-seven percent say they consider the environmental concerns associated with using AI within their organisation.
And only 11 percent strongly agree that organisations are giving sufficient attention to ethical standards related to AI implementation.
AI resources and training
To meet the needs of digital trust professionals seeking the training, knowledge and best practices to keep pace in the age of AI, ISACA offers a range of AI courses and resources, as well as three new credentials: Advanced in AI Audit (AAIA), Advanced in AI Security Management (AAISM) and Advanced in AI Risk (AAIR).
