🎙️
AIPodify

Diary of a CEO

AI Whistleblower: We Are Being Gaslit By AI Companies, They’re Hiding The Truth! - Karen Hao

Guest: Karen Hao · March 26, 2026

Episode Summary

AI-generated summary, Apr 2026. May contain inaccuracies; not a substitute for the full episode or professional advice.

Karen Hao, an award-winning investigative journalist who previously covered AI for MIT Technology Review and The Wall Street Journal, unpacks her book, "Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI." Hao contends that much of the AI industry's current trajectory is "extremely inhumane," driven by an "imperial agenda" that exploits labor, monopolizes knowledge, and actively gaslights the public. She draws stark parallels between today's AI giants and historical empires, arguing that their pursuit of "artificial general intelligence" (AGI) is less about public benefit and more about profit and control.

👤 Who Should Listen

  • Anyone concerned about the ethical implications and societal impact of artificial intelligence.
  • Entrepreneurs and business leaders navigating the challenges and opportunities of integrating AI into their operations and workforce.
  • Journalists and media consumers interested in understanding how tech companies influence public perception and control narratives.
  • Individuals questioning the future of work and career progression in an AI-driven economy.
  • Policymakers and regulators seeking to understand the power structures and governance issues within the AI industry.
  • Those interested in the behind-the-scenes dynamics and leadership controversies within major AI companies like OpenAI.

🔑 Key Takeaways

  1. AI companies operate with an "imperial agenda": laying claim to unowned resources (data, intellectual property), exploiting vast amounts of labor, and monopolizing knowledge production to serve their own interests.
  2. AI leaders, including Sam Altman, use ambiguous, shifting definitions of "Artificial General Intelligence" (AGI) and narratives of existential risk (e.g., "summoning the demon") to mobilize capital, recruit talent, and strategically ward off regulation.
  3. The AI industry actively suppresses inconvenient research and censors critics, as exemplified by the firing of Dr. Timnit Gebru from Google and OpenAI's reported subpoenaing of watchdog groups.
  4. Concerns over Sam Altman's leadership, specifically his role in creating "chaos" and instability at OpenAI and perceived inconsistencies in his business dealings, led the independent board members to temporarily dismiss him.
  5. Mass job displacement stems not solely from AI models' capabilities but also from executive decisions to replace workers with "good enough" AI, "breaking the career ladder": entry- and mid-level roles are automated, and the new jobs created (e.g., data annotation) are often worse.
  6. The premise that AI systems are inherently intelligent, or will scale to human-like general intelligence, is a scientific hypothesis held primarily by some AI researchers and not universally accepted by neuroscientists or psychologists.
  7. Hao proposes that the ultimate, irreplaceable value in a future dominated by advanced AI will lie in "irreducibly human stuff," such as in-person connection, deep relationships, and strong community engagement.
  8. Klarna CEO Sebastian Siemiatkowski confirms that AI enabled the company to cut headcount from 7,400 to 3,300 while doubling revenue, with AI handling 70% of customer service, underscoring the impact on employment.

💡 Key Concepts Explained

Empire of AI

A framework developed by Karen Hao to describe the AI industry's operational model. It posits that AI companies mimic historical empires by laying claim to unowned resources (like vast amounts of data and intellectual property), exploiting labor globally for training models, and monopolizing knowledge production through controlled research and censorship, all while presenting a narrative of progress and competing against "evil empires" (like China or other tech giants).

AGI as a Flexible Definition

The guest highlights how terms like "Artificial General Intelligence" (AGI) are strategically and opportunistically redefined by AI leaders depending on the audience. For Congress, AGI might cure diseases; for consumers, it's a digital assistant; for investors, it's a revenue generator. This ambiguity serves to mobilize support, attract capital, and ward off regulation without committing to a concrete or universally agreed-upon definition.

Jagged Frontier of AI Models

Karen Hao uses this concept to explain that despite claims of creating "everything machines," AI models possess a "jagged intelligence." This means their capabilities are not uniformly advanced but rather excel in specific areas chosen by developers based on financial lucrativeness (e.g., finance, law, medicine, commerce). This selective advancement is a result of focused data gathering and training, rather than a natural, generalized learning process akin to human intelligence.

⚡ Actionable Takeaways

  • Critically evaluate the public narratives presented by AI companies regarding AGI, existential threats, and societal benefits, recognizing their potential to serve corporate agendas.
  • Question the underlying assumptions and scientific hypotheses that drive current AI development, particularly the notion that intelligence is solely a statistical function of the brain.
  • Be aware of how AI companies use "access" and funding to influence journalism and research, and seek out independent sources for information on AI's limitations and societal impacts.
  • Consider the long-term implications of AI on career paths by focusing on developing deep expertise, skills in orchestrating AI agents ("agent maxing"), or strong interpersonal ("IRL people") skills, which the host identifies as highly valuable.
  • Advocate for or support governance structures that ensure broad participation and accountability in AI development, moving away from systems that consolidate decision-making power in the hands of a few tech leaders.
  • Reflect on the value of human connection and in-person experiences in your own life and communities, as these may become increasingly precious and irreplaceable in an AI-driven world.

⏱ Timeline Breakdown

00:00 Karen Hao discusses the inhumane aspects of the AI industry and the need to break up AI empires.
01:01 Hao outlines parallels between AI empires and old empires: IP claims, labor exploitation, environmental issues, and legislative manipulation.
02:49 Introduction of Karen Hao and her book, "Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI."
03:03 Karen shares her background in mechanical engineering and how a pivotal moment at a tech startup led her to journalism and AI critique.
05:04 Hao details her research for the book, including over 250 interviews, more than 90 of them with current and former OpenAI employees.
06:58 Discussion of the origins of the AI field in 1956 and the problematic lack of a scientific consensus on human intelligence.
08:09 How AI companies, particularly OpenAI, define and redefine "Artificial General Intelligence" (AGI) based on their audience and convenience.
10:10 Sam Altman's 2015 blog post on AI as an existential threat, mirroring Elon Musk's language to secure his co-founding of OpenAI.
12:12 Hao discusses allegations of Sam Altman manipulating Elon Musk and later orchestrating his departure from OpenAI.
15:14 Karen describes Sam Altman as a highly polarizing figure, perceived as either a genius or manipulative.
16:14 The story of Dario Amodei, Anthropic's CEO, who left OpenAI after feeling manipulated by Altman's vision.
18:17 Discussion of Ilya Sutskever's departure from OpenAI and his growing concern that Altman was undermining both AGI development and safety.
19:19 Ilya Sutskever's 2019 quote comparing AI's potential relationship with humans to humans' treatment of animals.
20:19 Debate on the definition of intelligence, particularly the hypothesis that human brains are statistical models, which underpins much of current AI development.
23:21 Karen questions the core goal of building AI systems to duplicate and replace humans, arguing technology should improve human flourishing.
25:24 Elaboration on the "imperial agenda" of AI companies, including resource claims, labor exploitation, and knowledge monopolization.
27:25 Example of censorship: Dr. Timnit Gebru's firing from Google over critical AI research on large language models.
28:26 OpenAI's alleged campaign of intimidation, including subpoenaing critics to map out their networks.
30:27 AI empires use a "good empire vs. evil empire" narrative (e.g., OpenAI vs. Google/China) to justify their power and resource acquisition.
31:28 AI leaders strategically use both utopian and catastrophic narratives to justify their anti-democratic approach to development.
32:29 Sam Altman's tweet about upcoming books, which Karen believes was a direct reference to her book, "Empire of AI."
33:29 Karen's past negative experience with OpenAI, which initially refused to cooperate with her because of her critical reporting.
35:33 OpenAI's withdrawal of cooperation for Karen's book after Sam Altman's temporary firing, despite her prior efforts to engage the company.
37:35 Discussion of how tech companies use "access" as a tactic to control journalists and shape public narratives.
41:38 A scene-by-scene recounting of the events leading to Sam Altman's temporary firing from OpenAI by the independent board members.
44:44 Specific reasons for Altman's firing, including creating instability, pitting teams against each other, and his handling of the OpenAI startup fund.
49:54 The board's decision to fire Altman swiftly, without informing key stakeholders like Microsoft, ultimately led to his reinstatement.
53:01 Ilya Sutskever's quote: "I don't think Sam is the guy who should have the finger on the button for AGI."
54:01 The pattern of key OpenAI figures (Ilya, Mira) leaving after clashing with Altman and founding their own AI companies.
56:02 The host notes that tech billionaires often start their own AI companies after falling out, aiming for control over their specific vision.
57:03 The host questions whether AI companies pursue power and historical significance even at a 20-25% chance of catastrophic outcomes.
58:04 Karen explains that the "summoning the demon" narrative is a strategic act of speech to persuade others to cede power and resources.
01:00:06 Hao discusses the "Dune" analogy, where AI leaders blur the lines between myth-making and genuine belief in their own narratives.
01:03:11 The role of cognitive dissonance in AI leaders who fundraise while acknowledging potential catastrophic futures.
01:04:11 Karen argues that the governance structure of AI matters more than the individual moral compass of its leaders.
01:06:14 The host plays devil's advocate, presenting the "China threat" argument for accelerating US AI research.
01:07:14 Karen rebuts the "China threat" argument, questioning the foundational assumptions about intelligence and scaling.
01:08:17 Discussion of the "narrow intelligence" of AI models, which are selectively advanced in financially lucrative capabilities.
01:10:19 Karen explains why the hypothesis of the brain as a statistical engine is debated by neuroscientists and psychologists.
01:11:20 The host and guest discuss the "jagged intelligence" of AI models and their inability to transfer learning like humans.
01:12:21 Comparing AI model errors to human errors, and the perceived safety records of autonomous vehicles.
01:13:23 Karen's skepticism about widespread fully autonomous self-driving cars in the near future, given technical, social, and legal issues.
01:14:24 The host brings up Sebastian Siemiatkowski (Klarna CEO) on AI's impact on job displacement.
01:15:24 Karen agrees on the job impacts but attributes them also to executive choices and "good enough" models, not just automation.
01:16:26 Live call with Sebastian Siemiatkowski, who shares Klarna's experience: headcount reduced from 7,400 to 3,300, revenue doubled, and AI handling 70% of customer service.
01:17:26 Sebastian believes coding is largely "resolved" by AI and that human interaction will be valued more highly in the future.
01:18:27 Karen elaborates on the creation of "worse jobs" like data annotation and the breaking of career ladders due to AI.
01:19:27 The host discusses his hiring strategy, prioritizing deep expertise, "agent maxers," and "IRL people skills" in an AI-driven environment.
01:21:28 Karen agrees that AI might force humanity to focus on what makes us uniquely human, fostering deeper connections.
01:22:28 The host cites data suggesting a return to valuing IRL experiences among younger generations, potentially driven by technology fatigue.
01:23:29 The host predicts Optimus robots will transform manual labor, pushing humans toward roles only they can perform.

💬 Notable Quotes

They profit enormously off of this myth. You know, I have all these internal documents showing that they're purposely trying to create that feeling within the public so that they can extract and exploit and extract and exploit.
Why are we trying to build AI systems that are duplicative of humans?... The purpose of technology throughout history has been to improve human flourishing, not to replace people.
If most of the climate scientists in the world were bankrolled by fossil fuel companies, do you think we would get an accurate picture of the climate crisis?
The board fired Sam Altman... he is the root of the problem. They were trying to say to these independent board members: the problem will not be fixed unless Altman is removed, because of the way he's pitting teams against each other and creating an environment where people are unable to trust each other anymore and are competing rather than collaborating on what's supposed to be this really, really important technology.

📚 Books Mentioned

Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI by Karen Hao
