AI Intermediate Free Analysis

Why AI Companies Want You to Be Afraid of Them

Thomas Germain · BBC · April 28, 2026 · 7 min read · ~1,400 words

Summary

What This Article Is About

BBC technology journalist Thomas Germain investigates a striking paradox at the heart of the AI industry: the very companies building and selling powerful AI models — including Anthropic and OpenAI — regularly warn that their own products could endanger humanity. The article uses Anthropic’s launch of Claude Mythos, a cybersecurity AI model framed as too dangerous for public release, as the latest example of what critics call fear-based marketing. Security experts questioned whether Mythos’s capabilities were as unprecedented as claimed, and OpenAI CEO Sam Altman — himself not above similar rhetoric — called it “incredible marketing” to build a bomb and then sell the shelter.

Germain traces this pattern back to OpenAI’s 2019 announcement of GPT-2, which was declared too dangerous to release and then quietly launched months later. Critics like linguist Emily M. Bender and philosopher Shannon Vallor argue that catastrophic existential risk warnings function as a deliberate distraction — drawing attention away from concrete, present-day AI harms such as environmental damage, misinformation, healthcare misdiagnoses, and links to mental health crises. The article suggests this pattern also serves a regulatory capture function: by positioning themselves as uniquely dangerous, AI companies frame regulation as something only they are qualified to manage.

Key Points

Main Takeaways

Fear Is a Marketing Strategy

AI companies routinely warn that their own products could destroy humanity — a practice critics call fear-based marketing that makes products sound more powerful and consequential than they may be.

Claude Mythos Sparked the Debate

Anthropic’s Claude Mythos, framed as too dangerous for public release due to unprecedented cybersecurity capabilities, reignited questions about whether such warnings reflect genuine concern or are calculated hype.

GPT-2 Set the Template

In 2019, OpenAI declared GPT-2 too dangerous to release, only to publish it months later — a pattern that established the industry playbook of catastrophising products before making them widely available.

Real Harms Are Being Ignored

Critics argue that existential doom narratives distract from concrete present-day AI harms — environmental damage from data centres, healthcare misdiagnoses, deepfakes, and AI-linked mental health crises.

Regulation Benefits the Powerful

By framing AI as uniquely dangerous, large companies can shape regulation in ways that raise barriers for competitors — a strategy Meta’s Yann LeCun and others have called “regulatory capture” by incumbent AI labs.

Safety Promises Go Unfulfilled

Anthropic reportedly abandoned its flagship policy of never training a model without guaranteed safety measures — raising serious questions about whether its public safety commitments are marketing rather than principle.

Master Reading Comprehension

Practice with 365 curated articles and 2,400+ questions across 9 RC types.

Start Learning

Article Analysis

Breaking Down the Elements

Main Idea

Fear Is the Product

Leading AI companies have systematically used catastrophic warnings about their own products as a marketing and political strategy. By positioning themselves as uniquely dangerous, they generate publicity, justify restricted access, and shape regulatory conversations — all while deflecting scrutiny from the concrete harms AI is causing right now. The article argues this pattern is industry-wide, not unique to any single company.

Purpose

To Expose a Corporate Contradiction

Germain writes to reveal the gap between what AI companies say and what they do. He challenges readers to question whether fear-filled announcements about AI capabilities reflect genuine safety concern, competitive self-interest, or both. The article is a piece of accountability journalism aimed at a tech-literate general audience that has grown accustomed to AI doom headlines without examining their strategic function.

Structure

Hook → Pattern → Critics → Counter → Real Harms

The article opens with Anthropic’s Claude Mythos launch as the hook, establishes this as part of a longstanding industry pattern going back to GPT-2 in 2019, then introduces voices of academic critics — Bender and Vallor — before noting Altman’s own contradictory critiques of Anthropic. It closes by cataloguing actual present-day AI harms that the doom narrative overshadows, ending on a note of unresolved tension rather than easy conclusion.

Tone

Sceptical, Incisive & Balanced

Germain writes with the restrained but pointed scepticism of a beat journalist who has covered too many overhyped tech announcements to be easily impressed. He does not dismiss AI safety concerns outright but consistently foregrounds competing motivations — commercial, regulatory, reputational. The tone is accessible and readable, with moments of dry wit, while remaining fair by including the companies’ own responses and perspectives.

Key Terms

Vocabulary from the Article


Existential risk
noun phrase
A threat severe enough to permanently end or fundamentally destroy human civilisation — used in AI discourse to describe worst-case scenarios involving superintelligent or misaligned AI systems.
Regulatory capture
noun phrase
A situation in which the companies or industries that government agencies are meant to regulate instead come to dominate or control those agencies and the rules they create.
Fear-based marketing
noun phrase
A commercial strategy that uses warnings of danger or catastrophe to create urgency and public attention around a product — making it appear more powerful, important, or indispensable than it may be.
Safety washing
noun phrase
The practice of presenting an organisation’s products or policies as safer or more ethically responsible than they actually are, often to improve public image or deflect scrutiny from genuine harms.
Hyperbole
noun
Exaggerated statements or claims not meant to be taken literally — used in the article to describe the habit of overstating AI capabilities or risks for dramatic effect or competitive advantage.
Accountability journalism
noun phrase
A form of reporting that scrutinises the actions of powerful institutions or individuals, comparing what they claim against what they actually do, and exposing gaps or contradictions in the public interest.
Catastrophising
verb / gerund
Describing a situation or outcome as far worse or more dangerous than it actually is — in this article, used to describe how AI companies overstate the risks of their own products at launch.
Frontier model
noun phrase
An AI model that represents the current leading edge of capability in the industry — typically large, computationally expensive, and released by top-tier labs such as Anthropic, OpenAI, or Google DeepMind.

Build your vocabulary systematically

Each article in our course includes 8-12 vocabulary words with contextual usage.

View Course

Tough Words

Challenging Vocabulary


Unsubstantiated (un-sub-STAN-shee-ay-ted)

Not supported by evidence or proof — used to describe AI companies’ dramatic claims about their models’ capabilities or dangers that cannot be independently verified.

“It’s just part of this pattern of unsubstantiated claims of power.”

Breathless (BRETH-les)

Figuratively, excessively excited or dramatic in tone — used here to describe journalists or commentators who amplify AI company announcements without critical scrutiny.

“Some breathless observers warned that Mythos will soon force you to replace every piece of technology in your life.”

Egalitarian (ee-gal-ih-TAIR-ee-un)

Based on the principle that all people are equal and deserve equal rights and opportunities — used here in OpenAI’s claim that its decisions about AI will be guided by democratic and egalitarian values.

“Key decisions about AI are made via democratic processes and with egalitarian principles, and not just made by AI labs.”

Consolidate (kon-SOL-ih-dayt)

To combine or bring together elements to make a stronger, more unified whole — in this context, to concentrate power or control over AI into the hands of a small number of dominant companies.

“OpenAI would ‘resist the potential of this technology to consolidate power in the hands of the few’.”

Misdiagnosis (mis-dy-ag-NOH-sis)

An incorrect identification of a medical condition — cited in the article as a real, documented risk of deploying AI in healthcare settings, contrasted with the more speculative existential threats companies typically warn about.

“There’s a push for AI in healthcare despite serious concerns about misdiagnoses.”

Psychosis (sy-KOH-sis)

A severe mental disorder involving a loss of contact with reality — the article cites research suggesting AI chatbots are driving vulnerable individuals to psychosis, a present-day harm overshadowed by doom narratives.

“AI is reportedly driving masses of vulnerable people to the point of psychosis and even suicide.”


Reading Comprehension

Test Your Understanding

5 questions covering different RC question types

True / False Q1 of 5

According to the article, Sam Altman has never used fear-based language about AI, and his criticism of Anthropic’s tactics is entirely consistent with his own past statements.

Multiple Choice Q2 of 5

What happened with OpenAI’s GPT-2 in 2019, according to the article?

Text Highlight Q3 of 5

Which sentence best captures Emily M. Bender’s criticism of AI companies’ safety rhetoric?

Multi-Statement T/F Q4 of 5

Evaluate each statement based on the article.

Anthropic reportedly abandoned its flagship policy to never train an AI model without guaranteed safety measures.

The article concludes that AI doom warnings are always deliberate lies with no genuine basis in safety concern.

Shannon Vallor argues that existential AI risk narratives function as a distraction from present-day harms caused by the industry.


Inference Q5 of 5

Based on the article, what can be inferred about why large AI companies might actually benefit from strict government regulation of the AI industry?


Get More Practice

FAQ

Frequently Asked Questions

What is Claude Mythos, and why is it controversial?

Claude Mythos is an AI model developed by Anthropic that the company described as capable of finding cybersecurity vulnerabilities far beyond the ability of human experts. Anthropic framed it as too dangerous for public release, warning that its fallout for economies, public safety, and national security could be severe. Critics and security experts questioned whether these claims were overstated, and the announcement reignited a wider debate about whether AI companies use danger narratives strategically to generate attention and justify restricting access.

What is regulatory capture, and why do critics accuse AI companies of pursuing it?

Regulatory capture occurs when the companies being regulated come to dominate the regulatory process itself. Critics like Meta’s chief AI scientist Yann LeCun have argued that dominant AI labs — by lobbying for strict regulation and framing AI as uniquely dangerous — are actually working to shape rules that would raise costs and legal barriers for smaller competitors and open-source developers. The result would be an AI industry effectively controlled by a handful of incumbents, precisely the outcome the regulation was meant to prevent.

What present-day AI harms does the article say are being overlooked?

The article lists several documented, present-day AI harms that receive less attention than speculative doomsday scenarios: gas-powered data centres emitting greenhouse gases comparable to entire countries; AI systems in healthcare producing serious misdiagnoses; deepfake technology advancing beyond the point of reliable detection; AI chatbots linked to psychosis and suicide in vulnerable users; and growing research suggesting a possible connection between AI usage and cognitive decline. The article argues these real, measurable harms deserve more public and regulatory attention than apocalyptic future scenarios.

How does Readlite help improve reading comprehension?

Readlite provides curated articles with comprehensive analysis including summaries, key points, vocabulary building, and practice questions across 9 different RC question types. Our Ultimate Reading Course offers 365 articles with 2,400+ questions to systematically improve your reading comprehension skills.

Why is this article rated Intermediate?

This article is rated Intermediate. The language is accessible and the writing style is journalistic rather than academic, making it readable for a general audience. However, understanding the full argument requires familiarity with concepts like regulatory capture, the competitive dynamics of the AI industry, and the distinction between existential and present-day harms. Readers also need to track multiple voices — Germain’s, Bender’s, Vallor’s, Altman’s — and evaluate contradictions between what these figures say and do.

Who is the author, and how credible is the source?

Thomas Germain is a technology journalist who covers AI, consumer technology, and digital policy. BBC Future is a long-form editorial section of the BBC known for in-depth, evidence-based reporting on technology, science, and society aimed at a global, educated audience. Its credibility lies in its editorial independence, access to expert sources, and commitment to contextualising complex developments rather than chasing breaking news. The article reflects BBC Future’s typical approach: rigorous, sceptical, and accessible.

What question types does the Ultimate Reading Course cover?

The Ultimate Reading Course covers 9 RC question types: Multiple Choice, True/False, Multi-Statement T/F, Text Highlight, Fill in the Blanks, Matching, Sequencing, Error Spotting, and Short Answer. This comprehensive coverage prepares you for any reading comprehension format you might encounter.

Complete Bundle - Exceptional Value

Everything you need for reading mastery in one comprehensive package

Why This Bundle Is Worth It

📚

6 Complete Courses

100-120 hours of structured learning from theory to advanced practice. Worth ₹5,000+ individually.

📄

365 Premium Articles

Each with 4-part analysis (PDF + RC + Podcast + Video). 1,460 content pieces total. Unmatched depth.

💬

1 Year Community Access

1,000-1,500+ fresh articles, peer discussions, instructor support. Practice until exam day.

2,400+ Practice Questions

Comprehensive question bank covering all RC types. More practice than any other course.

🎯

Multi-Format Learning

Video, audio, PDF, quizzes, discussions. Learn the way that works best for you.

🏆 Complete Bundle
₹2,499

One-time payment. No subscription.

Everything Included:

  • 6 Complete Courses
  • 365 Fully-Analyzed Articles
  • 1 Year Community Access
  • 1,000-1,500+ Fresh Articles
  • 2,400+ Practice Questions
  • FREE Diagnostic Test
  • Multi-Format Learning
  • Progress Tracking
  • Expert Support
  • Certificate of Completion
Enroll Now →
🔒 100% Money-Back Guarantee
Prashant Chadha

Connect with Prashant

Founder, WordPandit & The Learning Inc Network

With 18+ years of teaching experience and a passion for making learning accessible, I'm here to help you navigate competitive exams. Whether it's UPSC, SSC, Banking, or CAT prep—let's connect and solve it together.

18+
Years Teaching
50,000+
Students Guided
8
Learning Platforms

Stuck on a Topic? Let's Solve It Together! 💡

Don't let doubts slow you down. Whether it's reading comprehension, vocabulary building, or exam strategy—I'm here to help. Choose your preferred way to connect and let's tackle your challenges head-on.

🌟 Explore The Learning Inc. Network

8 specialized platforms. 1 mission: Your success in competitive exams.

Trusted by 50,000+ learners across India