The technology, which can create hyper-realistic fake content, is now being exploited to produce pornographic material that falsely portrays individuals, including minors, in compromising scenarios.
Last year, a 48-year-old man in Victoria was jailed for 13 months after generating over 790 AI-created child abuse images. He was charged with producing and transmitting child abuse material using a carriage service.
According to AFP Commander Helen Schneider, the sheer volume and realism of such images present a significant challenge for investigators, who must sift through countless files to differentiate between AI fabrications and cases involving real victims.
The quality of AI-generated child abuse material was “becoming increasingly realistic”, Schneider said, making it difficult for the AFP to ensure it was not investing resources into images “where there is actually no real child at risk”.
In schools, the problem is hitting even closer to home. Last June, a Victorian student allegedly created explicit AI-generated images of 50 female classmates.
In Sydney, a student reportedly used social media photos to generate deepfake pornography of peers, while in Melbourne, fake sexual images of a teacher were circulated among students.
Commander Schneider also pointed to the increasing accessibility of AI tools, which makes it easier for individuals to misuse the technology.
The “entry level to use this type of technology was decreasing”, she said, making it “more accessible from a capability perspective”.

“AI technology is increasingly accessible and I think it’s more accessible because it’s really integrated into a lot of the platforms used by Australians every day,” Schneider said.
The AFP is urging parents, teachers, and the community to remain vigilant. Anyone with information about child abuse activities is encouraged to contact the Australian Centre to Counter Child Exploitation (ACCCE), and immediate risks should be reported to Triple Zero (000).

