The Australian Federal Police’s trial of the controversial Clearview AI facial recognition database was effectively “hidden IT”: officially unapproved and conducted without a formal privacy assessment.
An investigation by the Office of the Australian Information Commissioner [pdf] found 10 “members” of the Australian Centre to Counter Child Exploitation (ACCCE) registered for trial accounts after learning of the tool’s existence from other authorities.
They then uploaded a range of images – some publicly available, some “derived from images distributed using underground marketplaces”, and some of ACCCE members – to Clearview AI.
No records of access to Clearview AI, or of “many” of the files uploaded to the service, were kept.
The OAIC said that “outside of the ACCCE operational command, there was no visibility of this limited trial” of the tool.
As a result, AFP media spokespeople initially denied that the tool had been used; those denials were later flagged internally as incorrect.
Even within the ACCCE, not everyone appears to have been aware of the trial. Following the media reports, the OAIC said, “the ACCCE mission coordinator has sent an email requesting information about ACCCE’s use of the facial recognition tool”.
The email sought “details of who had approved the use of the software, and what validation process was followed to ensure information security,” the OAIC said.
“The email states: ‘For clarity there should be no software used without the appropriate clearance for use’.”
The AFP said it had since tightened governance around “the use of free trials in the online environment” and “appointed a dedicated position within the ACCCE, who is responsible for undertaking software evaluations of similar kinds of applications in future.”
The force said the ACCCE members had also weighed privacy impacts of using Clearview AI “through other risk assessment mechanisms” instead of a privacy impact assessment (PIA).
“The trial participants considered that the risks were manageable in the context of the ‘limited trial’, and were outweighed by the need to share intelligence and information to best identify offenders and remove children from harm, and to respond to such matters in a timely manner,” the OAIC said in its report.
The OAIC did not accept this argument, however, and said that a PIA should have been undertaken.
Australian Information Commissioner and Privacy Commissioner Angelene Falk also said she “cannot be satisfied” that the steps the AFP has taken since the trial would prevent a recurrence.
As a result, she said, the AFP will be subject to an independent review of the changes it has made.
Clearview AI itself was found last month to have breached Australian privacy rules through the way its service operated.