The federal government could “explore” using artificial intelligence programs to write sensitive cabinet submissions or business cases, as part of a major initiative to embed AI across the public service, despite concerns about the technology increasing risks of security and data breaches.
The government will also require departments to appoint a chief AI officer, streamline procurement of artificial intelligence services and move to a whole-of-government cloud policy to allow wider adoption of the technology.
Simon Kriss, one of Australia’s leading AI strategists and the CEO and co-founder of Sovereign Australia, welcomed the move toward wider AI adoption in government but cautioned that public trust depends on keeping all data and infrastructure sovereign.
“AI will form a central part of government operations in the near future, and the initiative taken by the government to ensure it has reusable technology that is model-agnostic is laudable. The public will be cautious, and it has a right to be,” said Kriss.
“Today, we are constantly being asked to place our trust in a model that is built offshore and where we have zero transparency into what data was used, how the data was curated, how the model was biased and so much more.”
“That is why it is essential that the Federal Government’s data never leaves our shores. We need each part of the AI chain (inference, storage etc.) to be Australian owned and controlled,” he said.
The finance minister, Katy Gallagher, recently announced the public service would build its own AI program for government workers, spruiking the productivity benefits of rolling out generative programs such as ChatGPT, Copilot and Gemini to departments.
“The government is adopting AI, and that’s a good thing. Trust is the biggest factor when it comes to AI. It is essential that the Federal Government’s data never leaves our shores,” said Kriss.
“We need each part of the chain to be Australian owned and controlled, or we risk foreign governments being able to access Australian data, such as with the United States CLOUD Act. It’s the only way we make sure our data stays safe,” he said.
Many public servants said AI trials had boosted productivity, but others warned of errors in AI-generated work, risks to entry-level jobs, and public mistrust of automation after the robodebt scandal.
Most public servants who took part in the AI trials responded positively, with managers and executives in particular reporting productivity boosts.
Others said AI tools helped them summarise information, draft documents and find answers more quickly, saving up to an hour a day. According to the survey, 69% said AI helped them work faster and 61% said it improved their work quality.
But there were also drawbacks. Sixty percent said they had to make “moderate to significant” edits due to inaccuracies in AI output, and some raised concerns about Copilot’s unpredictability and limited understanding of context.
The government’s AI plan aims to ensure every public servant receives training and access to generative AI tools. A new platform, GovAI Chat, is set for rollout in early 2026, alongside guidance on using public AI systems such as ChatGPT, Claude and Gemini for officially classified information.
