Australian government agencies are failing to meet transparency expectations around their use of automated decision-making technologies, according to a new report from the nation’s information watchdog.
Released on Wednesday by the Office of the Australian Information Commissioner, the findings stem from an October review examining how openly federal agencies disclose their use of automated decision-making (ADM) systems.
ADM refers to decisions made or assisted by computer programs, including systems that incorporate artificial intelligence.
The review assessed whether 23 de-identified Commonwealth agencies were meeting their disclosure obligations under the Information Publication Scheme (IPS), which is designed to ensure agencies proactively publish information about how they operate.
While all agencies reviewed are authorised to use ADM under various legislative frameworks, only 13 explicitly referenced the technology in their IPS materials.
Just four agencies — the Australian Taxation Office, Services Australia, the Department of Health, Disability and Ageing, and the Department of Veterans’ Affairs — clearly disclosed the use of ADM in decisions that directly affect members of the public.
The remaining nine agencies either implied or indirectly referenced ADM, often in the context of AI, without clearly explaining how these systems are used or how they influence outcomes.
The OAIC warned that this lack of clarity undermines public trust and falls short of the transparency standards expected when automated systems are involved in government decision-making.
The regulator found nine agencies frequently implied the use of ADM — for example, through references in corporate plans or AI strategies — but said it could not “ascertain if this was indeed the case”.

“However, they did not specifically say whether they used ADM in any of their decision-making or recommendation processes,” the OAIC wrote.
Even among the four agencies that explicitly acknowledged using ADM, transparency gaps remained. The OAIC found those agencies were “not clear about how they used it”, while 74% of the agencies reviewed could not be definitively identified as using ADM at all.
The report lands amid the federal government’s National AI Plan, which promises greater legal consistency for automated decision-making as AI adoption accelerates, alongside the appointment of chief AI officers across agencies to drive implementation.
Are agencies hiding their hand?
Beyond Information Publication Scheme disclosures, the OAIC also reviewed agency websites and AI transparency statements to determine how accessible this information was to the public.
“Our threshold was whether a member of the public, who wanted to know if an agency was using ADM, could reasonably do so by performing relatively simple searches on the agency’s website,” the report said.
Applying that test, the commissioner concluded it was “likely” that ADM was in use at two agencies despite no clear public disclosure.
In one anonymised case study, an agency stated in a data strategy report that it was “embracing automation and artificial intelligence” to make faster, data-driven decisions — yet failed to explain on its website whether ADM was used, or how it influenced outcomes.
“It does not elaborate on how these decisions are made, and whether any decisions made by the agency are based solely on automated processes,” the OAIC said.
We don’t need another Robodebt
The report arrives against the backdrop of the federal government’s Robodebt scandal, in which an automated debt recovery system wrongfully accused welfare recipients of owing money to the Commonwealth.
Following years of fallout, the government has committed to $587 million in compensation for victims.

Against that history, the Office of the Australian Information Commissioner said “public examples of failures of oversight of ADM” — including those exposed by the Robodebt Royal Commission — had “highlighted the need for transparency about the use of ADM by government”.
“The benefits of utilising ADM technology in government will only be realised if risks are appropriately mitigated and trust is built with the Australian community,” the OAIC wrote.
The commissioner noted that Robodebt “relied heavily” on ADM for its income-averaging process. Separately, Information Age reported last year that staff at Centrelink and Medicare operator Services Australia had tested AI systems designed to predict fraudulent welfare claims.
Services Australia has since outlined a three-year strategy aimed at ensuring its use of AI and automation is “human-centric, safe, responsible, transparent, fair, ethical, and legal”.
Commissioner calls for transparency
The OAIC ultimately recommended that all agencies authorised to use ADM clearly disclose that use in their Information Publication Scheme statements, and specify whether they “utilise ADM to provide information and services to the public”.
Further recommendations included requiring agencies to clearly state the types of ADM technologies they use — ranging from “simple calculators to machine learning” — and to publish lists of decisions supported by ADM, supported by plain-language examples.
As a result of the findings, the OAIC said it will update Freedom of Information guidelines to explicitly include ADM as a form of ‘operational information’, which agencies are required to publish.
“Information about decision-making and the exercise of agencies’ functions is important information for the Australian community,” said information commissioner Elizabeth Tydd.
