🎙️
AIPodify

Diary of a CEO

AI Whistleblower: We Are Being Gaslit By The AI Companies! They’re Hiding The Truth About AI!

Guest: Karen Hao · March 26, 2026

Episode Summary

AI-generated · Mar 2026

AI-generated summary — may contain inaccuracies. Not a substitute for the full episode or professional advice.

Journalist and author Karen Hao, who previously covered AI for MIT Technology Review and The Wall Street Journal, shares insights from her book, *Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI*. Hao argues that much of the contemporary AI industry operates in an 'extremely inhumane' manner, accusing companies of 'gaslighting' the public and 'hiding the truth about AI' so they can 'extract and exploit' in service of a strategic 'imperial agenda.' Her work, based on interviews with over 250 people, including former OpenAI employees, draws parallels between today's AI giants and historical empires.

Hao elaborates on the 'imperial agenda,' outlining how AI companies lay claim to resources such as individuals' data and the intellectual property of artists and writers, and engage in 'land grabbing' for supercomputer facilities. She highlights the 'extraordinary amount of labor' involved: hundreds of thousands of contractors around the world train the models, in work she argues perpetuates layoffs and breaks career ladders. She further details how these companies 'monopolize knowledge production' by bankrolling most AI researchers and censoring inconvenient findings, citing the firing of Dr. Timnit Gebru from Google's ethical AI team over critical research and OpenAI's alleged subpoenaing of critics.

A core aspect of this agenda, according to Hao, is the strategic 'mythmaking' around Artificial General Intelligence (AGI). She describes how OpenAI, for instance, has defined AGI inconsistently, pitching it as a cure for cancer to Congress, a digital assistant to consumers, and a revenue generator to Microsoft, in order to mobilize different audiences and ward off regulation. Hao contends that this ambiguity, coupled with narratives of existential risk, serves to justify an 'anti-democratic approach' to AI development that centralizes control. She also challenges the claim of scientific consensus behind the 'brains are statistical models' hypothesis, a belief central to AI scaling, noting significant disagreement among neuroscientists and psychologists.

Hao questions the fundamental goal of building AI systems 'duplicative of humans,' arguing technology should 'improve human flourishing, not to replace people.' She details the significant impact on employment, observing a decline in white-collar hiring and the creation of 'way worse jobs' in data annotation, where laid-off professionals secretly train models for the very tasks they once performed. This 'breaks the career ladder' by gouging out entry-level and mid-tier roles. The host, Steven Bartlett, explores this further by sharing insights from a call with Klarna CEO Sebastian Siemiatkowski on AI's role in halving the company's headcount while doubling revenue, along with his own hiring strategies focused on deep expertise, 'agent maxing' young talent, and 'IRL people skills.'

Ultimately, Hao asserts that focusing on the ethics of individual leaders distracts from the more critical issue: an 'anti-democratic' governance structure that grants a few companies decision-making power over billions of people worldwide. She challenges the 'US vs. China' competitive scaling argument, suggesting it is a 'myth' used to justify power accumulation. The episode concludes by weighing AI's long-term societal impact, with Bartlett offering the contrarian view that by automating mundane tasks, AI might inadvertently push humanity back toward 'irreplaceably human stuff' like genuine connection and in-person community, a promise other technologies failed to deliver.

👤 Who Should Listen

  • Anyone interested in the ethical, social, and economic implications of rapid AI development.
  • Individuals concerned about the future of work, job displacement, and the creation of 'worse jobs' due to AI.
  • Entrepreneurs, business leaders, and investors evaluating the real-world impact and narratives of AI companies.
  • Policymakers and regulators seeking critical insights into AI governance, power consolidation, and industry lobbying.
  • Journalists and media consumers who want to understand how tech companies influence public perception and control information.
  • Those curious about the internal dynamics, power struggles, and leadership styles within prominent AI organizations like OpenAI.

🔑 Key Takeaways

  1. Karen Hao's book, *Empire of AI*, argues that the AI industry operates inhumanely, gaslighting the public and exploiting resources and labor under an 'imperial agenda.'
  2. AI companies are accused of 'monopolizing knowledge production' by controlling research funding, setting agendas, and censoring critical findings from scientists and journalists.
  3. The term 'Artificial General Intelligence' (AGI) is defined ambiguously and inconsistently by OpenAI leaders like Sam Altman, depending on the audience, to serve strategic purposes such as fundraising or warding off regulation.
  4. The 'brains are statistical models' hypothesis, foundational to current AI scaling, comes from some AI researchers (e.g., Geoffrey Hinton) and lacks universal consensus among neuroscientists and psychologists.
  5. AI development often leads to the exploitation of labor, where laid-off workers are contracted to perform 'data annotation' to train models for the very jobs they lost, breaking career ladders.
  6. The pursuit of AI systems designed to duplicate and replace humans is criticized as fundamentally misaligned with technology's historical purpose of improving human flourishing.
  7. The frequent departures of co-founders and senior executives from OpenAI (e.g., Elon Musk, Dario Amodei, Ilya Sutskever, Mira Murati) to establish rival AI companies highlight internal clashes over vision and control.
  8. AI companies are alleged to use 'mythmaking' narratives, including predictions of both utopia and catastrophe, to persuade the public and stakeholders to cede power and resources to them.

💡 Key Concepts Explained

Empire of AI

A framework proposed by Karen Hao, detailed in her book, which likens modern AI companies to historical empires that exert control through claiming resources (data, IP), exploiting labor, and monopolizing knowledge production to further their own 'imperial agenda.' This concept highlights how AI development is driven by profit and power rather than public benefit.

Artificial General Intelligence (AGI)

The ambitious, long-term goal of creating AI systems with human-level or superhuman intelligence. The episode presents this as a deliberately ambiguous term used by AI companies like OpenAI to serve varying strategic communication needs, from investor pitches to policy discussions, rather than a clear scientific objective.

Statistical Engines Hypothesis

The scientific hypothesis, notably championed by Geoffrey Hinton and Ilya Sutskever, that human brains fundamentally operate as large statistical models. This concept is important because it underpins the current dominant approach in AI development—building increasingly large statistical models—despite ongoing debate among neuroscientists and psychologists about its validity.

Mythmaking in AI

The intentional creation and propagation of narratives by AI industry leaders, often involving predictions of either utopian abundance or catastrophic existential risk. This is presented as a crucial tactic to persuade the public, attract capital, and ward off regulation, thereby consolidating power and justifying an anti-democratic development approach.

Data Annotation Labor

The essential but often overlooked work performed by human contractors, frequently including highly skilled individuals displaced from other industries, who manually label and categorize vast amounts of data to train and refine AI models. The episode highlights this as a 'worse job' that perpetuates layoffs and breaks career ladders, contributing to labor exploitation.

⚡ Actionable Takeaways

  • Critically evaluate claims made by AI industry leaders, recognizing that narratives around AGI and existential risk can be strategic tools for influence and resource acquisition.
  • Investigate the sources of information about AI, considering whether researchers or journalists are financially supported or influenced by the companies they report on.
  • Question the stated purpose of new AI technologies, asking whether they are designed to replace human labor or to enhance human capabilities and flourishing.
  • Examine the hidden labor and environmental costs of large-scale AI model development, such as the use of low-wage data annotators and extensive energy consumption.
  • Advocate for more democratic and transparent governance structures in AI development to prevent decision-making power from being consolidated among a few powerful companies or individuals.
  • Consider diversifying your skills into areas requiring deep human expertise, high curiosity for 'agent maxing,' or strong in-person relational skills, as these are identified as potentially more 'irreplaceable' in an AI-driven future.

⏱ Timeline Breakdown

00:00 Introduction to the inhumane nature of the AI industry and the 'imperial agenda'.
01:01 Parallels between AI empires and old empires: intellectual property claims, labor exploitation, environmental crisis, and censorship.
03:03 Karen Hao's journey into tech journalism from mechanical engineering and her motivation for writing *Empire of AI*.
05:04 Details on the extensive research for the book, including interviews with over 90 former and current OpenAI employees.
07:06 The historical origin of 'Artificial Intelligence' in 1956 and the lack of scientific consensus on defining human intelligence.
08:09 How OpenAI ambiguously defines 'Artificial General Intelligence' (AGI) for different audiences.
10:10 Sam Altman's shifting rhetoric on existential risk, used to persuade Elon Musk to co-found OpenAI.
12:12 Allegations from Elon Musk that Sam Altman manipulated him and later muscled him out of OpenAI.
15:14 Karen Hao's perspective on Sam Altman as a highly polarizing and persuasive figure.
16:14 Dario Amodei's (Anthropic CEO) experience of feeling manipulated by Altman and leaving OpenAI.
18:17 Ilya Sutskever's (OpenAI co-founder) departure over concerns that Altman undermined AGI safety and development.
19:19 Ilya Sutskever's 2019 quote comparing AI's potential relationship with humans to humans' treatment of animals.
20:19 Discussion of the scientific hypothesis that human brains are statistical models and the debate around it.
23:21 Critique of why the AI industry aims to build human-duplicative systems instead of improving human flourishing.
25:24 Karen Hao explains the 'imperial agenda' of AI companies: claiming resources, exploiting labor, monopolizing knowledge.
26:25 AI companies 'gaslighting' the public and censoring researchers, exemplified by Dr. Timnit Gebru's firing from Google.
28:26 OpenAI's alleged subpoenaing of critics as a campaign of intimidation and information gathering.
30:27 The narrative of 'good empire vs. evil empire' (e.g., OpenAI vs. Google/China) used to justify market dominance.
31:28 AI leaders using both utopian and catastrophic narratives as part of their myth-making strategy to maintain control.
32:28 Sam Altman's tweet hinting at critical books about OpenAI, which Hao identifies as a reference to her own.
33:29 Hao details OpenAI's history of refusing to cooperate with her because of her critical reporting.
37:35 Discussion of how AI companies use access and 'dangling carrots' to control technology journalists.
41:38 Scene-by-scene recounting of the events leading to Sam Altman's initial firing from OpenAI.
44:44 Explanation of 'instability' at OpenAI under Altman, including chaos, internal divisions, and rapid scaling issues.
47:51 Board members' discovery that the OpenAI startup fund was actually Sam Altman's personal fund.
49:54 Altman's firing and subsequent reinstatement after stakeholder backlash, leading to other key departures.
53:23 Ilya Sutskever and Mira Murati's concerns about Altman and their eventual departures from OpenAI.
55:01 The pattern of OpenAI co-founders and executives (Musk, Amodei, Sutskever, Murati) leaving to start rival AI companies.
57:03 The host asks whether AI CEOs knowingly 'summon the demon' for power, linked to Hao's 'Dune' analogy for myth-making.
59:05 Hao explains that 'predictions of doom' are strategic 'acts of speech' meant to persuade others to cede power.
63:11 The psychological phenomenon of cognitive dissonance in AI leaders balancing conflicting narratives.
64:11 Hao argues that AI governance structures, not just individual leaders, are fundamentally anti-democratic.
66:14 Hao challenges the 'US vs. China' competitive scaling argument for AI, questioning its underlying assumptions about intelligence.
68:17 Hao explains that current AI models exhibit 'narrow intelligence' focused on financially lucrative capabilities, not general intelligence.
69:17 Further discussion of the debate around Geoffrey Hinton's 'brain as statistical engine' hypothesis among scientists.
71:20 The difference between human general learning and AI models requiring specific retraining for new contexts (e.g., self-driving cars).
73:23 The host suggests AI models may be held to higher standards than humans, focusing on output and capability rather than mechanism.
74:24 Hao attributes predictions of widespread AI automation to profit motives rather than guaranteed outcomes.
75:24 Hao expresses skepticism about Elon Musk's and Geoffrey Hinton's predictions of the imminent replacement of surgeons and radiologists.
76:26 Hao argues the purpose of technology should be human flourishing, not automation, citing AI's role as a tool in healthcare.
78:30 Hao's skepticism about widespread autonomous cars in the US within 10 years, citing technical, social, and legal challenges.
80:34 Discussion of mass job displacement: Hao acknowledges the impacts but distinguishes automation from executive decisions and rhetoric.
81:35 Klarna CEO Sebastian Siemiatkowski (via a call with the host) confirms AI led to halving staff while doubling revenue, with AI now handling 70% of customer service.
83:35 Hao's analysis of job-market changes: entry-level roles disrupted, new higher-skilled or 'worse' data annotation jobs, broken career ladders.
84:35 Reference to an Anthropic report indicating a 40% reduction in entry-level jobs susceptible to AI.
85:38 Sebastian's point about the human preference for human-to-human interaction, even as AI automates.
86:40 New York magazine article on laid-off professionals secretly doing data annotation to train AI for the jobs they lost.
88:44 The host's hiring strategy for his companies: deep expertise, 'agent maxing' young talent, and strong 'IRL people skills'.
91:48 The host's contrarian take: AI might make humans focus more on essential in-person connection and community.
92:48 Data points supporting this take: a decline in social media use among Gen Alpha and the rise of IRL community events.
94:51 Follow-up call with Sebastian Siemiatkowski reiterating AI's efficiency gains in coding and engineering.
97:53 Sebastian's optimism about a richer society in the long term, despite short-term job-market worries.

💬 Notable Quotes

So much of what's happening today in the AI industry is extremely inhumane.
They're purposely trying to create that feeling within the public so that they can extract and exploit and extract and exploit.
Why are we trying to build AI systems that are duplicative of humans? ... the purpose of technology throughout history has been to improve human flourishing, not to replace people.
If most of the climate scientists in the world were bankrolled by fossil fuel companies, do you think we would get an accurate picture of the climate crisis? No. And in the same way, the AI industry employs and bankrolls most of the AI researchers in the world.
And do you know what the common feature of all of them is? They profit enormously off of this myth.


📚 Books Mentioned

Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI by Karen Hao
Amazon →
