Why AI Companies Want You to Be Afraid of Them
Summary
What This Article Is About
BBC technology journalist Thomas Germain investigates a striking paradox at the heart of the AI industry: the very companies building and selling powerful AI models — including Anthropic and OpenAI — regularly warn that their own products could endanger humanity. The article uses Anthropic’s launch of Claude Mythos, a cybersecurity AI model framed as too dangerous for public release, as the latest example of what critics call fear-based marketing. Security experts questioned whether Mythos’s capabilities were as unprecedented as claimed, and OpenAI CEO Sam Altman — himself not above similar rhetoric — remarked that building a bomb and then selling the shelter is “incredible marketing”.
Germain traces this pattern back to OpenAI’s 2019 announcement of GPT-2, which was declared too dangerous to release and then quietly launched months later. Critics like linguist Emily M. Bender and philosopher Shannon Vallor argue that catastrophic existential risk warnings function as a deliberate distraction — drawing attention away from concrete, present-day AI harms such as environmental damage, misinformation, healthcare misdiagnoses, and links to mental health crises. The article suggests this pattern also serves a regulatory capture function: by positioning themselves as uniquely dangerous, AI companies frame regulation as something only they are qualified to manage.
Key Points
Main Takeaways
Fear Is a Marketing Strategy
AI companies routinely warn that their own products could destroy humanity — a practice critics call fear-based marketing that makes products sound more powerful and consequential than they may be.
Claude Mythos Sparked the Debate
Anthropic’s Claude Mythos, framed as too dangerous for public release due to unprecedented cybersecurity capabilities, reignited questions about whether such warnings reflect genuine concern or are calculated hype.
GPT-2 Set the Template
In 2019, OpenAI declared GPT-2 too dangerous to release, only to publish it months later — a pattern that established the industry playbook of catastrophising products before making them widely available.
Real Harms Are Being Ignored
Critics argue that existential doom narratives distract from concrete present-day AI harms — environmental damage from data centres, healthcare misdiagnoses, deepfakes, and AI-linked mental health crises.
Regulation Benefits the Powerful
By framing AI as uniquely dangerous, large companies can shape regulation in ways that raise barriers for competitors — a strategy Meta’s Yann LeCun and others have called “regulatory capture” by incumbent AI labs.
Safety Promises Go Unfulfilled
Anthropic reportedly abandoned its flagship policy of never training a model without guaranteed safety measures — raising serious questions about whether its public safety commitments are marketing rather than principle.
Article Analysis
Breaking Down the Elements
Main Idea
Fear Is the Product
Leading AI companies have systematically used catastrophic warnings about their own products as a marketing and political strategy. By positioning themselves as uniquely dangerous, they generate publicity, justify restricted access, and shape regulatory conversations — all while deflecting scrutiny from the concrete harms AI is causing right now. The article argues this pattern is industry-wide, not unique to any single company.
Purpose
To Expose a Corporate Contradiction
Germain writes to reveal the gap between what AI companies say and what they do. He challenges readers to question whether fear-filled announcements about AI capabilities reflect genuine safety concern, competitive self-interest, or both. The article is a piece of accountability journalism aimed at a tech-literate general audience that has grown accustomed to AI doom headlines without examining their strategic function.
Structure
Hook → Pattern → Critics → Counter → Real Harms
The article opens with Anthropic’s Claude Mythos launch as the hook, establishes this as part of a longstanding industry pattern going back to GPT-2 in 2019, then introduces voices of academic critics — Bender and Vallor — before noting Altman’s own contradictory critiques of Anthropic. It closes by cataloguing actual present-day AI harms that the doom narrative overshadows, ending on a note of unresolved tension rather than easy conclusion.
Tone
Sceptical, Incisive & Balanced
Germain writes with the restrained but pointed scepticism of a beat journalist who has covered too many overhyped tech announcements to be easily impressed. He does not dismiss AI safety concerns outright but consistently foregrounds competing motivations — commercial, regulatory, reputational. The tone is accessible and readable, with moments of dry wit, while remaining fair by including the companies’ own responses and perspectives.
Key Terms
Vocabulary from the Article
Unsubstantiated: Not supported by evidence or proof — used to describe AI companies’ dramatic claims about their models’ capabilities or dangers that cannot be independently verified.
“It’s just part of this pattern of unsubstantiated claims of power.”
Breathless: Figuratively, excessively excited or dramatic in tone — used here to describe journalists or commentators who amplify AI company announcements without critical scrutiny.
“Some breathless observers warned that Mythos will soon force you to replace every piece of technology in your life.”
Egalitarian: Based on the principle that all people are equal and deserve equal rights and opportunities — used here in OpenAI’s claim that its decisions about AI will be guided by democratic and egalitarian values.
“Key decisions about AI are made via democratic processes and with egalitarian principles, and not just made by AI labs.”
Consolidate: To combine or bring together elements to make a stronger, more unified whole — in this context, to concentrate power or control over AI into the hands of a small number of dominant companies.
“OpenAI would ‘resist the potential of this technology to consolidate power in the hands of the few’.”
Misdiagnosis: An incorrect identification of a medical condition — cited in the article as a real, documented risk of deploying AI in healthcare settings, contrasted with the more speculative existential threats companies typically warn about.
“There’s a push for AI in healthcare despite serious concerns about misdiagnoses.”
Psychosis: A severe mental disorder involving a loss of contact with reality — the article cites research suggesting AI chatbots are driving vulnerable individuals to psychosis, a present-day harm overshadowed by doom narratives.
“AI is reportedly driving masses of vulnerable people to the point of psychosis and even suicide.”
Reading Comprehension
Test Your Understanding
5 questions covering different RC question types
1. True or False: According to the article, Sam Altman has never used fear-based language about AI, and his criticism of Anthropic’s tactics is entirely consistent with his own past statements.
2. What happened with OpenAI’s GPT-2 in 2019, according to the article?
3. Which sentence best captures Emily M. Bender’s criticism of AI companies’ safety rhetoric?
4. Evaluate each statement based on the article.
Anthropic reportedly abandoned its flagship policy to never train an AI model without guaranteed safety measures.
The article concludes that AI doom warnings are always deliberate lies with no genuine basis in safety concern.
Shannon Vallor argues that existential AI risk narratives function as a distraction from present-day harms caused by the industry.
Select True or False for each of the three statements.
5. Based on the article, what can be inferred about why large AI companies might actually benefit from strict government regulation of the AI industry?
FAQ
Frequently Asked Questions
What is Claude Mythos?
Claude Mythos is an AI model developed by Anthropic that the company described as capable of finding cybersecurity vulnerabilities far beyond the ability of human experts. Anthropic framed it as too dangerous for public release, warning that its fallout for economies, public safety, and national security could be severe. Critics and security experts questioned whether these claims were overstated, and the announcement reignited a wider debate about whether AI companies use danger narratives strategically to generate attention and justify restricting access.
What is regulatory capture, and how does it relate to the AI industry?
Regulatory capture occurs when the companies being regulated come to dominate the regulatory process itself. Critics like Meta’s chief AI scientist Yann LeCun have argued that dominant AI labs — by lobbying for strict regulation and framing AI as uniquely dangerous — are actually working to shape rules that would raise costs and legal barriers for smaller competitors and open-source developers. The result would be an AI industry effectively controlled by a handful of incumbents, precisely the outcome the regulation was meant to prevent.
What present-day AI harms does the article highlight?
The article lists several documented, present-day AI harms that receive less attention than speculative doomsday scenarios: gas-powered data centres emitting greenhouse gases comparable to those of entire countries; AI systems in healthcare producing serious misdiagnoses; deepfake technology advancing beyond the point of reliable detection; AI chatbots linked to psychosis and suicide in vulnerable users; and growing research suggesting a possible connection between AI usage and cognitive decline. The article argues these real, measurable harms deserve more public and regulatory attention than apocalyptic future scenarios.
How does Readlite help improve reading comprehension?
Readlite provides curated articles with comprehensive analysis including summaries, key points, vocabulary building, and practice questions across 9 different RC question types. Our Ultimate Reading Course offers 365 articles with 2,400+ questions to systematically improve your reading comprehension skills.
How difficult is this article?
This article is rated Intermediate. The language is accessible and the writing style is journalistic rather than academic, making it readable for a general audience. However, understanding the full argument requires familiarity with concepts like regulatory capture, the competitive dynamics of the AI industry, and the distinction between existential and present-day harms. Readers also need to track multiple voices — Germain’s, Bender’s, Vallor’s, Altman’s — and evaluate contradictions between what these figures say and do.
Who is Thomas Germain, and what is BBC Future?
Thomas Germain is a technology journalist who covers AI, consumer technology, and digital policy. BBC Future is a long-form editorial section of the BBC known for in-depth, evidence-based reporting on technology, science, and society aimed at a global, educated audience. Its credibility lies in its editorial independence, access to expert sources, and commitment to contextualising complex developments rather than chasing breaking news. The article reflects BBC Future’s typical approach: rigorous, sceptical, and accessible.
What question types does the Ultimate Reading Course cover?
The Ultimate Reading Course covers 9 RC question types: Multiple Choice, True/False, Multi-Statement T/F, Text Highlight, Fill in the Blanks, Matching, Sequencing, Error Spotting, and Short Answer. This comprehensive coverage prepares you for any reading comprehension format you might encounter.