Australia’s online safety regulator has been calling on schools to revamp their safety policies since 2023, after uncovering the first-ever case of students using AI-generated explicit material to bully their peers.
The first issue came to light during a federal investigation into artificial intelligence use, with committee members admitting they were struggling to keep pace with the fast-evolving AI landscape.
According to eSafety Executive Manager Paul Clark, generative AI dramatically accelerates the ability to manipulate voice and images, increasing the risk of cyberbullying for both students and teachers.
Bullying has always been a problem in schools, but the rise of the internet and social media amplified its reach. Now, AI technology is taking it to another level.
With the accessibility of generative AI tools, students can create deepfake videos, fake social media accounts, and AI-generated messages designed to humiliate, manipulate, or intimidate their peers.
Teachers Are Out of Their Depth
The unfortunate reality is that most teachers are not equipped to understand, let alone combat, this new wave of harassment.
AI literacy in schools remains shockingly low, with many educators still struggling to integrate basic digital tools into their teaching. How can we expect them to identify AI-generated bullying tactics when they can barely keep up with the rapid advancements in the field?
Moreover, from what I can tell, there are no clear policies in place to address AI-driven bullying. Teachers are expected to manage classroom discipline, but when it comes to malicious AI-generated content spreading across social media, they have little power or training to intervene.
The Psychological Toll on Students
Unlike traditional bullying, which often requires direct interaction, AI-driven harassment can be automated, relentless, and far harder to trace.
Victims may find themselves the subject of AI-generated memes, fake nude images, or even videos that depict them saying or doing things they never did. The psychological damage from such attacks is profound, leading to anxiety, depression, and even suicidal thoughts.
The worst part? Many students don’t even report these incidents because they assume nothing can be done. And in many cases, they’re right.
The Lack of a National Strategy
Australia doesn’t appear to have a national strategy specifically addressing AI-driven bullying. While there are laws around cyberbullying and defamation, they are outdated and ill-equipped to handle the complexity of AI-generated abuse.
Schools, in particular, have been left to fend for themselves, with little to no uniform guidelines on how to detect, prevent, or address these issues.
The Queensland Education Minister’s shocking revelation that at least one student each week is told to “kill themselves” by an online bully is a glaring red flag.
It’s a wake-up call that screams for immediate action. And now, with AI-generated bullying making its way into the mix, experts are clear: we need to step up education on harm-prevention strategies, fast. This can’t be brushed off any longer.
What Needs to Change
- AI Education for Teachers – Educators must receive mandatory training on AI and its potential for misuse. Understanding deepfakes, chatbots, and AI-driven manipulation tactics should be part of their professional development.
- Updated Anti-Bullying Policies – Schools must update their cyberbullying policies to explicitly include AI-driven harassment. This should involve clear disciplinary actions for students who engage in such behavior.
- Stronger Tech Partnerships – Schools should work with cybersecurity experts, AI ethicists, and tech companies to develop tools that can detect and mitigate AI-driven bullying in real time.
- Legal Reforms – The Australian government must step up with legal frameworks that address AI-generated harm, ensuring accountability for those who use this technology maliciously.
Parents warned over rise in AI-generated child abuse material to ‘embarrass’ and ‘bully’ classmates
Parents are being urgently warned about the growing trend of AI-generated deepfakes being used by students to “embarrass or bully classmates.”
Thanks to rapid advancements in AI, it’s now easier than ever to create hyper-realistic, fake pornographic content—deepfakes—that can make it look like someone is doing something they never did.
The AFP has raised the alarm over the increase in AI being used to produce child abuse material (CAM), with a 48-year-old man from Victoria jailed last year for creating over 790 “realistic child abuse images” using AI.
The man was charged with one count of producing child abuse material and using a carriage service to transmit child abuse material before he was jailed for 13 months.
AFP Commander Helen Schneider said the ability to produce such a large amount of images and data was a real challenge for the AFP, as investigators were left to “analyse and painstakingly sort through a lot of images” in order to bring offenders before the court.
Meanwhile, a student from southwestern Sydney allegedly made deepfake pornography of female students using artificial intelligence and images sourced from social media.
A student from a school in Victoria’s northwest also allegedly created graphic nude images of about 50 girls from the school last June.
What Needs to Happen?
🚨 AI Detection Technology in Schools – Schools need AI detection tools to analyze and debunk deepfakes before they spread.
🚨 Stronger Cyberbullying Laws – Australia’s legal system needs to catch up. Harassment laws should explicitly cover AI-generated bullying.
🚨 Education for Teachers and Parents – If educators don’t even know what deepfake technology is, how can they fight it? Training is a must.
🚨 Accountability for AI Platforms – Tech companies offering deepfake and AI voice cloning tools need to implement safeguards, like watermarking AI content or banning misuse.
The Clock Is Ticking
If Australian schools continue to ignore this growing problem, AI-driven bullying will spiral out of control. Teachers, parents, and policymakers must wake up to the reality that bullying has evolved, and their outdated methods of prevention are no longer enough.
In 2023, almost 42% of high school students in Australia were using AI programs like ChatGPT during school hours, according to recent studies.
Sure, AI can bring “massive benefits” for young people, but let’s not sugarcoat it: the technology is also loaded with risks.
It’s potentially robbing students of their ability to think critically, feeding them harmful stereotypes through biased algorithms, and handing them new tools to launch attacks on other children.
It’s time for a proactive approach—before more students suffer the consequences of an education system stuck in the past.
