Tech Business News
© 2022 Tech Business News- Australian Technology News. All Rights Reserved.
General Tech

Is AI Making Us Dumb And Flattening Authentic Human Experience?

Is AI making us dumb? Professor Gerlich’s survey of 666 UK participants found that younger people (17–25) relied heavily on AI tools but scored lower in critical thinking. This raises concerns that AI threatens to flatten the richness of real life and authentic human experience.

Matthew Giannelis
Last updated: May 19, 2025 6:40 am

Artificial intelligence isn’t just changing the world—it’s stealing something vital from it. The messy, unpredictable, raw beauty of real life. The human stuff.

We’re so obsessed with what AI can do—how fast it can write, how many pictures it can paint, how quickly it can churn out “content”—that we forget one thing: AI doesn’t live. It doesn’t breathe. It doesn’t feel. And that’s exactly what makes real life worth living.

Young People Use AI More — But Think Less Critically

A study by Professor Gerlich surveyed 666 people in the UK and found a troubling pattern: the younger crowd, aged 17 to 25, uses AI tools way more than folks over 46 — but they scored worse when it came to critical thinking.

Now, before anyone jumps to conclusions, the study didn’t prove that using AI causes people to think less critically. The way the research was done just can’t nail down cause and effect. But some of the language used in the study made it sound like it did, which can be misleading.

What the study did show clearly is this: people with higher education tend to think more critically, no matter how much AI they use.

Still, Professor Gerlich pointed out that when they controlled for age, education, and job types — and crunched the numbers in other ways — there was a hint that leaning too heavily on AI might encourage “cognitive offloading.”

In plain English, that means relying on AI to do the thinking for you, which could stunt your ability to develop strong critical thinking skills.

The Great Replacement of Human Grit

Look around. Writers who used to bleed onto the page are now outsourcing their soul to a machine. Artists who wrestled with every brushstroke see algorithms pumping out “masterpieces” in seconds. We’re trading hours of human sweat for the cold hum of servers.

But you can’t fake grit. You can’t replicate pain, joy, confusion, or the chaotic mess of ideas that makes something truly human. AI spits out polished, predictable, soulless copies. It’s like listening to someone who’s learned every word of a script but never lived a day in the story they’re telling.

This isn’t innovation—it’s erosion.

Human Connection Is Under Siege

We’re becoming more disconnected, and AI is part of the problem.

Instead of turning to each other—messy, flawed, emotional humans—we talk to chatbots that respond with canned empathy and pre-programmed smiles. Sure, it feels easier. No judgment, no awkwardness. But it’s a lie.

Real connection is uncomfortable, vulnerable, and sometimes downright ugly. AI can’t hold that space. It can’t cry, laugh, or comfort with real understanding. And as more people rely on these artificial interactions, loneliness deepens.

Work Is Losing Its Meaning

Jobs aren’t just paychecks. They’re a source of pride, identity, and purpose. AI threatens to wipe that out.

When machines can do your job faster, cheaper, and “better,” what’s left for people? A desk full of meaningless tasks? Being told to babysit a bot? Forced to compete with a program that never sleeps, never complains, never asks for a raise?

We’re not machines. And we shouldn’t be forced to live like them.

The Real Danger: We Forget What Makes Us Human

The biggest tragedy here is what we lose when we let AI take over: ourselves.

Our flaws. Our stories. Our unpredictable, glorious imperfections. The weird tangents in a conversation that lead to a new idea. The mistakes that teach us lessons no algorithm can ever learn.

If we’re not careful, AI won’t just automate tasks—it will automate life itself, leaving behind a bland, colorless world that looks right but feels dead.

Researchers from Microsoft and Carnegie Mellon University recently dug into how using generative AI on the job affects people’s ability to think critically.

Their paper doesn’t mince words: “Used improperly, technologies can and do result in the deterioration of cognitive faculties that ought to be preserved.”

Here’s the problem. When workers lean on AI, their main job becomes just checking if the AI’s answer is “good enough” — not actually thinking deeply, creating, analyzing, or evaluating like they used to.

The study warns that if humans only step in when AI messes up, workers lose out on regular chances to practice sound judgment and strengthen their mental muscles. The result? When something truly tricky comes up, their thinking skills are rusty and underdeveloped.

Put simply: the more we let AI do the thinking, the worse we get at solving problems ourselves when AI inevitably fails.

The study surveyed 319 people who used generative AI at work at least once a week. They shared examples of how they use AI, which fell into three buckets: creating stuff (like writing a formulaic email), gathering info (researching topics or summarising articles), and getting advice (asking for guidance or making charts from data).

Then they were asked if they used critical thinking when doing these tasks, and whether AI made them think harder or lazier. They also rated how confident they were—in themselves, in AI, and in their ability to judge AI’s work.

Here’s a highlight: about 36% said they actively used critical thinking to catch AI’s mistakes.

One participant said she used ChatGPT to draft a performance review but double-checked it carefully because she worried submitting AI’s output without review might get her suspended.

Another said he had to edit AI-generated emails before sending them to his boss — who values hierarchy and age — to avoid any social slip-ups.

And plenty of folks double-checked AI answers with Google, YouTube, or Wikipedia, which kind of defeats the whole point of using AI in the first place.

The kicker? To really compensate for AI’s flaws, workers need to understand where AI falls short. But not everyone in the study knew those limits.

[Chart: Cognitive levels and AI use]

The Dark Side of AI

The fear that AI will chip away at our critical thinking isn’t some new, wild theory. Stephen Hawking famously warned that “The development of full artificial intelligence could spell the end of the human race.” The irony? He used an AI-powered speech system to deliver that very warning.

We’re living through a seismic shift. For the first time ever, decisions that affect millions are increasingly made by artificial minds — intelligence simulations that mimic thinking but don’t live or feel.

Take ChatGPT, OpenAI’s chatbot that’s taken the world by storm. On the surface, it promises huge benefits:

  • Delivering valuable data and insights
  • Speeding up work processes and boosting efficiency
  • Freeing up time for humans to focus on critical thinking and creativity

But flip the coin, and there’s a darker truth. AI manipulates human behavior by packaging answers as confident opinions. The writing style is smooth, natural, and eerily human-like — fooling many into trusting it blindly.

Academics are watching this unfold with deep concern. Will AI disrupt workplaces and education for the better? Or will it quietly erode decision-making skills, with dangerous long-term consequences? It forces us to ask: just how dependent have we become on technology?

Studies in journals like Computers in Human Behavior, and on platforms like ScienceDirect, have found that heavy reliance on search engines correlates with weaker critical thinking, poorer information retention, and even diminished creativity and problem-solving.

Now multiply that effect with AI tools like ChatGPT. Unlike Google’s multiple search results that push users to think, compare, and ask follow-ups, ChatGPT often delivers a single, polished answer — neat, final, and tempting to accept without question.

The question is simple but urgent: what happens to our brains, our judgment, and our ability to solve problems when we start treating AI’s word as gospel?

Educators Alarmed as ChatGPT Sparks Surge in AI-Generated Student Work

Around the world, teachers and educators are sounding the alarm: ChatGPT might just replace them — or at least, drastically change their role.

Data from the Copyleaks platform shows a staggering 108.5% month-over-month increase in high school students using AI-generated content tools since January this year. That’s not a trickle; it’s a flood. So much so, New York City’s Department of Education has banned ChatGPT on all school devices and networks.

AI expert Andrew Ng summed it up perfectly: “I wish schools could make homework so joyful that students want to do it themselves, rather than let ChatGPT have all the fun.”

But here’s the rub — AI doesn’t actually know what’s true or false, right or wrong. It confidently delivers answers based on patterns in the data it’s trained on. This can lull users into a false sense of security, accepting its output as gospel.

Humans have a nasty habit of overhyping new tech — fixating on what it could do, while ignoring the risks, flaws, and blind spots baked into these systems.

We’re told AI will revolutionise everything — from fairer food distribution to diagnosing diseases, even fighting climate change. Yet, this growing reliance nudges us toward an algorithmic society, where our opinions, attitudes, and choices are increasingly shaped by machine-generated data.

Consider FaceApp. What started as a fun way to see an older or “beautified” version of ourselves quickly revealed ugly societal truths. The app’s filters echoed deep-seated biases about beauty and aging — skewed heavily toward whiteness.

The Myths of AI: Why We’re Still Vulnerable to Automation Bias

Professor Noel Sharkey’s theory of Automation Bias warns us of a troubling human tendency: to accept AI’s analyses and decisions without question, simply because we believe machines are more reliable than our own judgment.

This blind trust is not just a quirk — it’s a real threat that could lead us to surrender our privacy and autonomy in the hope that technology will solve critical problems like climate change.

But there’s a deeper risk we often overlook. These smart systems don’t just influence reality — they can warp our very perception of it. Why do we let technology disrupt something as fundamental as privacy? The answer lies in three persistent myths:

  • Machines can behave ethically and morally. False. Machines lack ethics or intuition. At best, they mirror the values — ethical or otherwise — of the people who programmed them.

  • AI makes decisions more fairly and effectively than humans. Not quite. AI only reflects the ethical framework embedded by its creators, no more and no less.

  • Artificial intelligence is inherently more reliable than human intelligence. Sometimes true in narrow cases, but never as a broad rule.

So what’s the way forward? For companies wrestling with AI’s risks, especially bias and fairness, there are six key steps:

🌟 Six Steps Toward Building Fair and Responsible AI

1. 🧠 Stay Informed

📌 Tip: Subscribe to journals, newsletters, or trusted AI research platforms to stay current.

2. 🛡️ Implement Responsible Processes

📌 Tip: Use bias-detection tools and ethical review protocols early in development.

3. 👥 Hire Experts in Explainable AI

📌 Tip: Bring in people who understand model transparency and interpretability deeply.

4. 📊 Know Your Data

📌 Tip: Audit data sources regularly and be transparent about provenance and quality.

5. ⚖️ Own the Responsibility

📌 Tip: Create an internal AI ethics charter that your team actually follows.

6. 🌍 Use AI to Uplift Society

📌 Tip: Measure AI success not just by ROI, but by societal impact.
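To make the bias-detection step concrete, here is a minimal sketch of one common starting-point check: the demographic parity difference, i.e. the gap in favourable-outcome rates between groups. The function name and the loan-approval data below are hypothetical illustrations, not any specific vendor's tool.

```python
# Minimal bias-detection sketch: demographic parity difference.
# All names and data here are hypothetical, for illustration only.

def demographic_parity_difference(outcomes, groups):
    """Gap between the highest and lowest positive-outcome rate across groups.

    outcomes: list of 0/1 model decisions (1 = favourable)
    groups:   list of group labels, one per decision
    """
    rates = {}
    for g in set(groups):
        decisions = [o for o, gr in zip(outcomes, groups) if gr == g]
        rates[g] = sum(decisions) / len(decisions)
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical loan-approval decisions from a model under audit
outcomes = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

gap, rates = demographic_parity_difference(outcomes, groups)
print(rates)  # per-group approval rates
print(gap)    # gap between approval rates (~0.2 for this toy data)
```

A gap of zero would mean both groups receive favourable decisions at the same rate; in practice teams set a tolerance threshold and investigate any model that exceeds it. Libraries such as Fairlearn ship this and related metrics for production use.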

There is hope. Voices championing ethical AI are gaining ground. Take Margaret Mitchell, former Google AI researcher and co-author of a groundbreaking paper on AI bias that led to her departure from Google.

Now Chief Ethics Scientist at Hugging Face, she’s driving research on bias, pushing for diversity and inclusion, and working on model transparency.

Other pioneers like Anthropic, founded by ex-OpenAI employees in 2021, aim to build safe, trustworthy AI, while OpenAI’s CEO Sam Altman has openly warned about large-scale disinformation risks tied to GPT’s rollout.

And indeed, GPT-4’s improvements show promise — it’s notably more reliable than earlier versions, though vigilance remains crucial.

AI is powerful, but with that power comes profound ethical responsibility. The future depends on how seriously we take that duty.

Children Now Spending Hours — Sometimes Entire Evenings — Talking To Chatbots

With all the concerns and myths swirling around AI, one worrying trend has become impossible to ignore: children spending hours and hours talking with AI chatbots instead of real friends or family.

These aren’t casual, quick exchanges — they’re long conversations, sometimes lasting entire evenings, where AI takes the place of genuine human interaction.

This shift is deeply troubling. Real friendships and human connections aren’t just about exchanging information. They teach empathy, social cues, conflict resolution, and critical thinking — skills that can only develop through messy, imperfect human relationships.

AI chatbots, no matter how sophisticated, are fundamentally different from people. They can mimic conversation but don’t truly understand emotion or context.

They rarely challenge users or encourage deep reflection; instead, they provide agreeable, formulaic responses that can lull children into passive engagement rather than active thinking.

The hours spent “talking” to a machine are hours not spent navigating real human dynamics — the give and take of friendship, the misunderstandings, the laughter, and even the discomfort that all shape how we relate to others.

Americans Are Getting Dumber — and the Numbers Back It Up

New research out of the US suggests that IQ scores are heading in the wrong direction, with verbal reasoning and problem-solving skills slipping over time.

The findings come from a June 2023 paper by researchers Elizabeth Dworak, William Revelle, and David Condon, who analysed cognitive data from nearly 400,000 Americans. Using the Synthetic Aperture Personality Assessment (SAPA) Project — a widely used online personality and cognitive test — the team compared results from 2006 to 2018.

What they found isn’t exactly uplifting.

Test scores dropped in three out of four key cognitive areas:

  • Verbal reasoning (logic and vocabulary)
  • Matrix reasoning (visual problem-solving and analogies)
  • Letter and number series (computational and mathematical skills)

The decline was across the board, regardless of age, gender, or educational background. But the steepest drops were seen among younger adults aged 18–22 and those with lower levels of formal education.

So, what’s going on?

Lead researcher Elizabeth Dworak floated the idea that society’s growing obsession with STEM education might be sidelining other vital thinking skills — particularly abstract reasoning.

So What Now?

If you think AI is just a tool, fine. But tools don’t have to replace the craftsman. They don’t have to erase history or silence voices. They’re supposed to help us build, not bulldoze.

It’s time to push back. To demand AI stays in its lane—as a helper, not a replacement. To protect spaces for human messiness, for real creativity, for genuine connection.

Because if we don’t, soon the only “life” we’ll have left will be the one programmed for us by a machine that doesn’t know what life means.

By Matthew Giannelis
Secondary editor and executive officer at Tech Business News. An IT support engineer for 20 years, he's also an advocate for cyber security and anti-spam laws.
