Why We're Getting AI Fatigue
My thoughts on why AI news exhausts us—it's not the technology, it's how it's being used
It seems we have already reached the point of being tired of hearing about AI. That got me thinking about the why. Why are we so exhausted by a technology that's still considered "new," yet has already been marketed nearly to death? Here is why I think that is.
The Relentless March of Job Displacement
We constantly hear about AI being used to replace people's positions at companies such as Klarna, Google, Salesforce, Microsoft, and Duolingo (which saddened me the most as an avid user of the app, which I dropped after their announcement), with many of these companies citing AI's ability to automate tasks that humans used to do (O'Sullivan, 2025).
The numbers tell a stark story. The unemployment rate for recent graduates within their first three months jumped to 5.8%, in part because AI is increasingly handling tasks that used to be assigned to entry-level workers (Yang & Edic, 2025). Hell, I worked as a senior marketing leader for an AI health tech startup and was recently laid off; I gather one of the reasons is that they think they can do my job with AI.
And this is just the tip of the negative press AI has been receiving. There have been many lawsuits in the U.S. from artists (and rightfully so) suing companies such as OpenAI and Anthropic for copyright infringement. Other companies are suing AI firms for copyright infringement as well, such as Disney and Universal teaming up against Midjourney. Some of these lawsuits have gone the artists' way, while the AI companies have won others.
The AGI Hype Machine
But in my opinion, it isn't AI news itself we're sick of; it's the news about how AI is being used. In the U.S. in particular, there are countless stories spun in a positive light (really, for the shareholders) about how great AI is and how it's driving costs down by replacing humans.
There's also constant news and excitement about how supposedly close we are to AGI (artificial general intelligence), which is theoretically supposed to match or even surpass human cognitive abilities. The tech industry's top executives are making increasingly bold claims about AGI's proximity. Sam Altman recently stated that OpenAI is "confident we know how to build AGI" and predicted it could arrive in 2025, while Anthropic's Dario Amodei thinks 2026 is possible, and Nvidia's Jensen Huang believes AGI could emerge "within 5 years" (Kamps, 2024; Kozlowski, 2025; Spodarets, 2025). These aren't cautious academic predictions; they're confident proclamations from executives whose companies are actively seeking massive investment.
But I think we are much further away from AGI than those claiming it's nearly here would have us believe. AI hallucinations, for example, are still incredibly problematic (see my recent article on the MAHA report and AI hallucinations), and AI can't handle the nuances of human conversation. In my opinion, they are saying this to rile up investors and shareholders.
The Reality Check: AI Whiplash is Real
We're already seeing "AI whiplash," the feeling of overwhelm and disorientation from the rapid pace of AI adoption (Bouthillier, 2025). The promises don't match reality. For example, Klarna claimed at one point that AI was doing the work of 700 employees, but has since realized AI can't handle conversation nuances. The company discovered the AI delivered low-quality work while customers complained about inflexible, robotic responses (Haun, 2025). They are now hiring back real humans. And in a study by Reworked (2025), over 55% of organizations that executed AI-driven layoffs now regret it. The pattern is clear: overpromise, underdeliver, then quietly backtrack while the human cost mounts.
I could also go through all the other negatives of AI, such as the environmental impacts (xAI's effect on a historically Black neighborhood in South Memphis, or the surge in energy and water usage).
What We're Not Hearing is AI for Good
But what we hear little about is the good that AI is being used for. In the end, most AI news exists to benefit shareholders' pockets, and using AI for good doesn't bring in the big bucks the way the headline-grabbing uses do. I think that's why we are tired.
The stories that don't make the big headlines? AI helping researchers identify new antibiotics, assisting in early cancer detection, or helping climate scientists model complex environmental systems. These applications don't generate the same investor excitement as "AI will replace all knowledge workers," but they represent the technology's actual potential to benefit humanity. AI working with humans and enhancing our capabilities, rather than replacing us entirely.
What You Can Do About AI Fatigue
The exhaustion is real, but we can't afford to tune out completely. Here's how to cut through the noise:
💼 Protect Yourself: If you're in a role potentially affected by AI, start upskilling now. Focus on skills that complement rather than compete with AI.
📰 Diversify Your Sources: Seek out news about AI applications in healthcare, climate science, and education, not just the latest venture capital overhyped startup.
🔍 Follow the Money: When you see AGI predictions, check if they coincide with funding announcements or earnings calls. Ask yourself: who benefits from this timeline?
📊 Demand Specifics: When companies claim AI success, look for concrete data. How many jobs were actually saved vs. eliminated? What are the real performance metrics?
🗳️ Support Responsible AI Legislation: Contact your representatives about AI regulation that protects workers and requires transparency in AI deployment decisions.
The future of AI doesn't have to be a zero-sum game between human workers and machines. But that requires us to demand better—better transparency, better regulation, and better priorities from the companies building these systems. We're tired because we're watching technology that could solve real problems being used primarily to line shareholders' pockets.
Don't let the fatigue win. Stay informed, stay skeptical, and keep asking the hard questions about who really benefits from the AI revolution.
This post was written by me, with editing support from AI tools, because even writers appreciate a sidekick.
References:
Bouthillier, B. (2025, June 24). The executive's dilemma in the age of AI whiplash. MedTech World. https://med-tech.world/news/executives-dilemma-ai-whiplash/
Haun, L. (2025, May 19). Klarna claimed AI was doing the work of 700 people. Now it's rehiring. Reworked. https://www.reworked.co/employee-experience/klarna-claimed-ai-was-doing-the-work-of-700-people-now-its-rehiring/
Kamps, H.J. (2024, March 19). Nvidia’s Jensen Huang says AI hallucinations are solvable, artificial general intelligence is 5 years away. TechCrunch. https://techcrunch.com/2024/03/19/agi-and-hallucinations/
Kozlowski, M. (2025, January 8). How OpenAI's Sam Altman is thinking about AGI and superintelligence in 2025. TIME. https://time.com/7205596/sam-altman-superintelligence-agi/
O'Sullivan, I. (2025, May 16). The companies that have already replaced workers with AI in 2024 & 2025. Tech.co. https://tech.co/news/companies-replace-workers-with-ai
Spodarets, D. (2025, January 30). AI could double the human lifespan in the next 5 years: Anthropic CEO Dario Amodei. DataPhoenix. https://dataphoenix.info/ai-could-double-the-human-lifespan-in-the-next-5-years-anthropic-ceo-dario-amodei/
Yang, J., & Edic, G. (2025, June 7). How AI may be robbing new college graduates of traditional entry-level jobs [Video]. PBS News Weekend. https://www.pbs.org/newshour/show/how-ai-may-be-robbing-new-college-graduates-of-traditional-entry-level-jobs