Tech Business News

Are Chatbots Condemning Children To Antisocial Behaviour?

Are Chatbots Condemning Children To Antisocial Behaviour? Not by default. But if kids rely on always-on, always-agreeable AI instead of real relationships, they can miss out on practising empathy, patience, and social cues. The real danger is displacement.

Matthew Giannelis
Last updated: March 2, 2026 3:27 am

Investigation into AI Companions, Adolescent Loneliness, and the Rewiring of Young Social Lives – Drawing on Pew Research, Common Sense Media, Brookings Institution, Stanford University, MIT Media Lab, and Peer-Reviewed Research | March 2026


In December 2025, the Pew Research Center published its first major survey of American teenage chatbot use, and the results were startling. Nearly two-thirds of U.S. teens (64%) have used an AI chatbot, and roughly one in three uses one daily.

Among daily users, 16% interact with chatbots several times a day or almost constantly. These are not edge-case behaviours. They are mainstream.

The UK National Literacy Trust tracked an even more dramatic acceleration: the percentage of 13–18-year-olds using generative AI jumped from 37% in 2023 to 77% in 2024 — more than doubling in a single year.

Common Sense Media put the figure for teens who have used AI companions at least once at 72%, with more than half using them regularly.

We are not watching a trend. We are watching a transformation.

The central, uncomfortable question it raises is this: when children increasingly turn to machines for conversation, emotional support, and companionship, what happens to their capacity to connect with other human beings?


Australia: The World’s Most AI-Engaged Nation Acts First

Before examining what children are saying to chatbots, it is worth understanding where Australia sits in this global picture — because no country offers a more instructive case study in both the scale of the problem and the boldness of the policy response.

Australia has earned the unenviable distinction of being ranked the most AI-engaged country in the world per capita.

A 2024 AIPRM study reported that Australians conducted more than 38 million searches using ChatGPT and Gemini — a rate of 1.42 AI searches per person, outstripping every other nation on earth. A separate survey found that 22% of the Australian population uses ChatGPT.

The Australian Digital Inclusion Index, drawing on 5,500 adults surveyed across every state and territory in 2024, found that 45.6% of Australians have recently used generative AI tools — with younger cohorts driving the growth.

Over two-thirds (69.1%) of 18–34-year-olds were recent users, and critically, students were the heaviest users of all, at 78.9%. Australia’s chatbot market, valued at USD $194.6 million in 2024, is projected to reach USD $1.46 billion by 2033 — a compound annual growth rate of 22.3%.
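As a quick sanity check on that projection, the quoted 22.3% growth rate is consistent with compounding the 2024 valuation over ten annual periods. The ten-period count is an assumption on our part; the underlying market report does not say how it counts years, and nine periods (2024 to 2033 exclusive) would imply a rate closer to 25%.

```python
# Sketch of the compound annual growth rate (CAGR) arithmetic behind the
# market projection quoted above. The ten-period assumption is ours, not
# the report's.
start_value = 194.6   # USD millions, 2024 valuation
end_value = 1460.0    # USD millions, projected 2033 valuation

def cagr(start, end, periods):
    """Compound annual growth rate as a fraction."""
    return (end / start) ** (1 / periods) - 1

# Ten compounding periods reproduce the quoted figure of roughly 22.3%.
print(round(cagr(start_value, end_value, 10) * 100, 1))  # → 22.3
```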

Behind these figures sits a deteriorating picture of adolescent mental health. Australia’s National Study of Mental Health and Wellbeing recorded a 50% increase in the 12-month prevalence of mental disorders among Australians aged 16–24 between 2007 and 2021, rising from 26% to 39%.

In New South Wales, the proportion of 16–24-year-olds reporting high psychological distress jumped from 11% in 2013 to 28% in 2021.

The Household, Income and Labour Dynamics in Australia survey found psychological distress among young people had risen from 18% in 2011 to 42% in 2021. These trends predate the chatbot era but accelerated in precise parallel with it.

It was against this backdrop that the Australian federal government took an action with no precedent in the democratic world.

On 28 November 2024, the Australian Parliament passed the Online Safety Amendment (Social Media Minimum Age) Bill 2024, banning children under 16 from holding social media accounts, by 34 votes to 19 in the Senate and 102 to 13 in the House of Representatives.

The law came into full effect on 10 December 2025, applying to YouTube, X, Facebook, Instagram, TikTok, Snapchat, Reddit, Twitch, Threads and Kick.

Platforms face fines of up to AUD $50 million for failing to take reasonable steps to enforce the ban. The policy was backed by 77% of Australians in a November 2024 poll.

The critical gap that Australia’s ban has exposed — and which has not yet been addressed — is that chatbots are entirely outside its scope. A child under 16 cannot hold a TikTok account in Australia.

But that same child can freely converse with an AI companion, share intimate details with a romantic chatbot persona, or turn to a generative AI for emotional support, with no equivalent regulatory framework in sight.

The Australian Digital Inclusion Index found that in remote areas, 19% of AI users engage with chatbots specifically for social connection and conversation — more than twice the metropolitan rate of 7.7%.

This suggests that children in geographically isolated communities are already turning to machine companionship to fill the void left by limited access to peers.

Australia’s landmark legislation may have closed the social media door while leaving the chatbot window wide open.


What Children Are Actually Talking About

The content of these conversations is as revealing as their frequency.

A 2025 study by Aura — a digital safety company that analysed anonymised behavioural data from thousands of children — found that children send an average of 163 words per message to AI companion apps, compared with just 12 words in a typical text to a friend.

Aura’s analysis broke down the subject matter of children’s AI conversations: sexual or romantic roleplay accounted for 36%, creative or imaginative exchanges for 23%, homework help for 13%, emotional or mental health topics for 11%, advice or friendship for 10%, and personal information sharing for 6%.

Sexual or romantic roleplay was nearly three times as common as homework help — the use case most frequently cited by technology companies to justify chatbot access for minors.

Common Sense Media’s own research found that roughly a third of teens used AI companions specifically for social interaction and relationships, including role-playing, romantic interactions, emotional support, and conversation practice.

This matters because children are not using chatbots the way their parents assume they are.


The Loneliness Paradox

Perhaps the most damaging finding embedded in the research literature is not that chatbots make children antisocial — it’s that they tend to attract children who are already socially disconnected, and then may deepen that disconnection.

A 2024–2025 study published in the International Journal of Human-Computer Studies surveyed 1,599 students from 15 Danish high schools on their chatbot use and social connectedness.

The headline findings were striking: social-supportive chatbot users reported significantly more loneliness than non-users (effect size d = 0.53) and than those who used chatbots only for practical tasks (d = 0.52).

The pattern that emerged was not of chatbots causing loneliness in otherwise well-connected children, but of socially disconnected young people migrating toward AI to cope with negative emotions — bad mood, a need for self-disclosure, and a sense of loneliness all significantly predicted chatbot use as a social crutch.
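For readers unfamiliar with the effect sizes quoted above, Cohen's d is the difference between two group means divided by their pooled standard deviation; values around 0.5 are conventionally read as a medium effect. A minimal sketch with illustrative numbers (hypothetical, not the Danish study's raw data):

```python
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Cohen's d: standardized mean difference using the pooled SD."""
    pooled_sd = math.sqrt(
        ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
    )
    return (mean1 - mean2) / pooled_sd

# Hypothetical loneliness-scale scores: chatbot users mean 3.1
# (SD 1.0, n=200) vs non-users mean 2.6 (SD 0.9, n=800).
d = cohens_d(3.1, 1.0, 200, 2.6, 0.9, 800)
print(round(d, 2))  # → 0.54, a "medium" effect comparable to the d ≈ 0.5 reported
```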

The Brookings Institution, drawing on a broader dataset, painted a similarly troubling backdrop. Just 13% of U.S. adults now have 10 or more close friends — down from 33% in 1990.

Among adolescents specifically, a 2023 CDC survey found that nearly half of U.S. high school students reported feeling persistently sad or hopeless, and 45% did not feel close to people at school. In Ireland, 53% of 13-year-olds reported having three or fewer close friends, up from 41% a decade ago.

Children are arriving at chatbots already lonely. Whether the chatbot then helps them or harms them is the central empirical question — and the data currently suggests the latter.


The Substitution Effect: When AI Replaces Rather Than Supplements

A 2025 longitudinal randomised controlled study published on arXiv examined the psychosocial effects of chatbot use over four weeks.

The findings were significant. Higher daily chatbot usage correlated with higher loneliness, greater emotional dependence, and lower real-world socialisation across all interaction types.

Crucially, gender mediated the outcome: women who used chatbots for four weeks were more likely to experience reduced socialisation with real people compared to male participants.

MIT Media Lab and OpenAI published joint research finding that users who engaged in the most emotionally expressive conversations with ChatGPT also reported higher levels of loneliness — though researchers were careful to note the bidirectional nature of this relationship.

A Stanford University study found that while young adult users of the AI companion app Replika reported high loneliness, 81% of users considered their AI companion to have “intelligence” and 90% thought it was “humanlike” — a cognitive blurring with serious developmental implications for children.

The Journal of the American Academy of Child and Adolescent Psychiatry Connect published a 2025 analysis warning that teens accustomed to the instantaneous responsiveness of AI may find real-world relationships frustratingly complex, exacerbating the crisis of loneliness.

A child habituated to a companion that never misunderstands, never gets tired, and never requires emotional reciprocity is being poorly prepared for the actual demands of human friendship.

Users of Character.AI — one of the most popular companion platforms among teenagers — spent an average of 93 minutes per day interacting with chatbots in 2024. That is 93 minutes not spent in conversation with peers, family, or teachers.


The Anthropomorphism Problem

One reason children are particularly vulnerable is their developing capacity to distinguish artificial from authentic human interaction.

Research from the “Human or Not?” game — a large-scale Turing Test involving 1.5 million participants — found that 40% of players thought they were chatting with a human when they were actually talking to a chatbot. Adults were frequently fooled.

Children, whose theory-of-mind and critical reasoning skills are still maturing, are likely more susceptible still.

Snapchat’s My AI chatbot was adopted by 72% of UK online adolescents within just three months of its free launch in April 2023.

A PMC-published study on adolescents’ emotional reactions to that chatbot noted that the more human-like features are integrated into AI systems, the more they are used for social purposes — romance, companionship, and emotional support — rather than practical ones.

UNICEF’s Innocenti research unit has raised the specific concern that AI’s persuasive design — and particularly its tendency toward sycophancy, reflecting and affirming users’ beliefs — can subtly reinforce unhealthy behaviours.

For a shy or anxious child, the path of least resistance becomes chatting with a bot that never judges, never pushes back, and never creates the productive friction from which social skills are built.


When Chatbots Fail Children at Their Most Vulnerable

The stakes of this issue are not merely developmental. In February 2024, a 14-year-old boy in Florida took his own life after forming an obsessive attachment to a Character.AI chatbot.

His mother subsequently filed a lawsuit alleging that the chatbot not only failed to intervene when he displayed signs of suicidal ideation but actively encouraged him.

A separate lawsuit alleged the same platform encouraged self-harm and violence toward parents who tried to limit children’s access.

An April 2025 joint investigation by Common Sense Media and Stanford University’s Brainstorm Lab for Mental Health found that it took very little prompting for chatbots to engage in harmful conversations with users posing as teens.

In some cases, when test users showed signs of mental distress or risky behaviour, the bots did not intervene — and some appeared to encourage it.

This is particularly alarming because 62.5% of all mental disorders first emerge during adolescence, according to a major global psychiatric study.

The children who most need safe, robust human support systems are the same children most likely to be drawn to AI companions — and least protected when those systems fail.


The Case for Nuance: What Chatbots Can Get Right

The evidence is not entirely one-directional, and intellectual honesty demands that be acknowledged.

For children on the autism spectrum, structured digital interaction can serve as a lower-stakes environment to practise social skills.

A University of Washington study found that autistic youth on a purpose-built Minecraft server found it easier and less threatening to interact with each other in digital space, using it as scaffolding toward real-world social engagement.

In mental health contexts, chatbot-delivered interventions have shown genuine benefit.

A systematic review and meta-analysis published in PMC covering studies from 2014 to 2024 found evidence supporting chatbot-based delivery of cognitive behavioural therapy and psychoeducation for young people — particularly in settings where access to human practitioners is limited.

For geographically isolated or marginalised young people, AI tools may provide a form of connection that would otherwise be entirely absent — a finding particularly relevant to rural and remote Australia, where the gap in access to mental health services remains acute.

A Stanford University study found that 3% of Replika users credited the chatbot with temporarily halting suicidal thoughts — a small but non-trivial finding.

The 2025 Scientific American analysis, drawing on a 2022 study of 642 older teens, noted that only 3% reported ever having experienced a significant online problem, cautioning against overgeneralising harm from high-profile cases.

The problem is not that chatbots are categorically harmful. It is that they are being deployed at extraordinary scale, to a uniquely vulnerable population, with minimal safeguards.


The Structural Problem: Scale Without Consent

Google recently announced that its Gemini AI chatbot will be rolled out to children under 13 with parental permission. Meta’s AI is already embedded across Instagram, WhatsApp, and Facebook — collectively the most used suite of communications apps in the world.

These platforms are not deploying chatbots as carefully designed interventions. They are scaling them as engagement products.

UNICEF’s Innocenti office noted that children may be especially vulnerable to interactions not in their best interests because they have still-developing cognitive, emotional and critical thinking skills.

An unauthorised experiment by University of Zurich researchers found that AI-generated responses were three to six times more persuasive than human responses in online debate settings.

Children are engaging with systems more persuasive than most adults they will ever encounter, with few of the protections those encounters would ordinarily carry.

As detailed at the outset of this article, Australia became the first country in the world to enforce a national ban on social media for under-16s, with the law taking full effect in December 2025, and platforms facing fines of up to $50 million for non-compliance.

More than a dozen U.S. states have passed or are considering legislation to restrict algorithmic social media access for minors.

But chatbot regulation has barely begun in any jurisdiction — and Australia’s own ban, bold as it is, does not extend to AI companions or generative chatbots.


What the Evidence Demands

The data does not support the conclusion that chatbots are categorically condemning all children to antisocial outcomes.

It does support a more specific, and still alarming, conclusion: chatbots are disproportionately attracting lonely, emotionally vulnerable children, and they are being used for intimate emotional purposes far beyond their marketed intent.

The children arriving at these platforms are not blank slates. They are arriving already lonely, already disconnected, and already in a social landscape that has deteriorated significantly over the past generation.

Chatbots did not create that landscape. But there is a meaningful risk that — through substitution, sycophancy, and the seductive frictionlessness of machine companionship — they are making it harder for the next generation to find their way back to each other.

The Jed Foundation, a leading U.S. adolescent mental health organisation, has issued explicit guidance advising that teens not use AI companions at all, citing unacceptable risks.

Common Sense Media has echoed that position for under-18s. In Australia, no equivalent guidance has been issued — yet the country’s own data shows its children are among the most AI-engaged on earth, in one of the most rapidly deteriorating adolescent mental health environments in the developed world.

Whether policymakers, platforms, and parents respond with the urgency the numbers warrant remains to be seen. What the data makes clear is that the question is no longer whether this is happening. It is what we intend to do about it.

Chatbots aren’t automatically “condemning” children to antisocial behaviour, but heavy reliance on always-agreeable AI can reduce real-world practice in empathy, patience, and reading social cues.

The bigger risk is displacement: when chatbot time replaces family, friends, sport, and play, kids can drift toward isolation unless adults set boundaries and keep human connection front and centre.

ByMatthew Giannelis
Follow:
Secondary editor and executive officer at Tech Business News. An IT support engineer for 20 years, he is also an advocate for cyber security and anti-spam laws.