Diary of a CEO
AI Whistleblower: We Are Being Gaslit By The AI Companies! They’re Hiding The Truth About AI!

Episode Summary
AI-generated summary · Mar 2026. May contain inaccuracies; not a substitute for the full episode or professional advice.
Journalist and author Karen Hao, who previously covered AI for MIT Technology Review and The Wall Street Journal, shares insights from her book, *Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI*. Hao argues that much of the contemporary AI industry operates in an 'extremely inhumane' manner, accusing companies of 'gaslighting' the public by 'hiding the truth about AI' in order to 'extract and exploit' through a strategic 'imperial agenda.' Her work, based on interviews with over 250 people including former OpenAI employees, draws parallels between current AI giants and historical empires.
Hao elaborates on the 'imperial agenda,' outlining how AI companies lay claim to resources such as individuals' data and the intellectual property of artists and writers, and engage in 'land grabbing' for supercomputer facilities. She highlights the 'extraordinary amount of labor' exploited, with hundreds of thousands of contractors worldwide training models, a dynamic that perpetuates layoffs and breaks career ladders. Furthermore, Hao details how these entities 'monopolize knowledge production' by bankrolling most AI researchers and censoring inconvenient findings, citing the firing of Dr. Timnit Gebru from Google's ethical AI team over critical research, and OpenAI's alleged subpoenaing of critics.
A core aspect of this agenda, according to Hao, is the strategic 'mythmaking' around Artificial General Intelligence (AGI). She reveals how OpenAI, for instance, defined AGI inconsistently—as a cure for cancer for Congress, a digital assistant for consumers, or a revenue generator for Microsoft—to mobilize different audiences and ward off regulation. Hao contends that this ambiguity, coupled with narratives of existential risk, serves to justify an 'anti-democratic approach' to AI development, centralizing control. She also challenges the scientific consensus around the 'brains are statistical models' hypothesis, a belief central to AI scaling, noting significant disagreement from neuroscientists and psychologists.
Hao questions the fundamental goal of building AI systems 'duplicative of humans,' arguing technology should 'improve human flourishing, not replace people.' She details the significant impact on employment, observing a decline in white-collar hiring and the creation of 'way worse jobs' in data annotation, where laid-off professionals secretly train models for the very tasks they once performed. This 'breaks the career ladder' by hollowing out entry-level and mid-tier roles. The host, Steven Bartlett, explores this further by sharing insights from his own call with Klarna CEO Sebastian Siemiatkowski on AI's role in halving headcount while doubling revenue, and by describing his hiring strategies: favoring deep expertise, 'agent maxing' young talent, and 'IRL people skills.'
Ultimately, Hao asserts that focusing on the ethics of individual leaders distracts from the more critical issue of the 'anti-democratic' governance structure that grants a few companies decisions affecting billions globally. She challenges the 'US vs. China' competitive scaling argument, suggesting it's a 'myth' used to justify power accumulation. The episode concludes by pondering AI's long-term societal impact, with Bartlett's contrarian view that by automating mundane tasks, AI might inadvertently push humanity back towards 'irreplaceably human stuff' like genuine connection and in-person community, a promise other technologies failed to deliver.
👤 Who Should Listen
- Anyone interested in the ethical, social, and economic implications of rapid AI development.
- Individuals concerned about the future of work, job displacement, and the creation of 'worse jobs' due to AI.
- Entrepreneurs, business leaders, and investors evaluating the real-world impact and narratives of AI companies.
- Policymakers and regulators seeking critical insights into AI governance, power consolidation, and industry lobbying.
- Journalists and media consumers who want to understand how tech companies influence public perception and control information.
- Those curious about the internal dynamics, power struggles, and leadership styles within prominent AI organizations like OpenAI.
🔑 Key Takeaways
- 1. Karen Hao's book, *Empire of AI*, argues that the AI industry operates inhumanely, gaslighting the public and exploiting resources and labor under an 'imperial agenda.'
- 2. AI companies are accused of 'monopolizing knowledge production' by controlling research funding, setting agendas, and censoring critical findings from scientists and journalists.
- 3. The term 'Artificial General Intelligence' (AGI) is defined ambiguously and inconsistently by OpenAI leaders like Sam Altman, depending on the audience, to serve strategic purposes such as fundraising or warding off regulation.
- 4. The 'brains are statistical models' hypothesis, foundational to current AI scaling, is advanced by some AI researchers (e.g., Geoffrey Hinton) but lacks consensus among neuroscientists and psychologists.
- 5. AI development often leads to the exploitation of labor: laid-off workers are contracted to perform 'data annotation,' training models for the very jobs they lost, thereby breaking career ladders.
- 6. The pursuit of AI systems designed to duplicate and replace humans is criticized as fundamentally misaligned with technology's historical purpose of improving human flourishing.
- 7. The frequent departures of co-founders and senior executives from OpenAI (e.g., Elon Musk, Dario Amodei, Ilya Sutskever, Mira Murati) to establish rival AI companies highlight internal clashes over vision and control.
- 8. AI companies are alleged to use 'mythmaking' narratives, including predictions of both utopia and catastrophe, as a tactic to persuade the public and stakeholders to cede power and resources to them.
💡 Key Concepts Explained
Empire of AI
A framework proposed by Karen Hao, detailed in her book, which likens modern AI companies to historical empires that exert control through claiming resources (data, IP), exploiting labor, and monopolizing knowledge production to further their own 'imperial agenda.' This concept highlights how AI development is driven by profit and power rather than public benefit.
Artificial General Intelligence (AGI)
The ambitious, long-term goal of creating AI systems with human-level or superhuman intelligence. The episode presents this as a deliberately ambiguous term used by AI companies like OpenAI to serve varying strategic communication needs, from investor pitches to policy discussions, rather than a clear scientific objective.
Statistical Engines Hypothesis
The scientific hypothesis, notably championed by Geoffrey Hinton and Ilya Sutskever, that human brains fundamentally operate as large statistical models. This concept is important because it underpins the current dominant approach in AI development—building increasingly large statistical models—despite ongoing debate among neuroscientists and psychologists about its validity.
Mythmaking in AI
The intentional creation and propagation of narratives by AI industry leaders, often involving predictions of either utopian abundance or catastrophic existential risk. This is presented as a crucial tactic to persuade the public, attract capital, and ward off regulation, thereby consolidating power and justifying an anti-democratic development approach.
Data Annotation Labor
The essential but often overlooked work performed by human contractors, frequently including highly skilled individuals displaced from other industries, who manually label and categorize vast amounts of data to train and refine AI models. The episode highlights this as a 'worse job' that perpetuates layoffs and breaks career ladders, contributing to labor exploitation.
⚡ Actionable Takeaways
- → Critically evaluate claims made by AI industry leaders, recognizing that narratives around AGI and existential risk can be strategic tools for influence and resource acquisition.
- → Investigate the sources of information about AI, considering whether researchers or journalists are financially supported or influenced by the companies they report on.
- → Question the stated purpose of new AI technologies, asking whether they are designed to replace human labor or to enhance human capabilities and flourishing.
- → Examine the hidden labor and environmental costs of large-scale AI model development, such as the use of low-wage data annotators and extensive energy consumption.
- → Advocate for more democratic and transparent governance structures in AI development to prevent decision-making power from being consolidated among a few powerful companies or individuals.
- → Consider diversifying your skills into areas requiring deep human expertise, high curiosity for 'agent maxing,' or strong in-person relational skills, as these are identified as potentially more 'irreplaceable' in an AI-driven future.
💬 Notable Quotes
“So much of what's happening today in the AI industry is extremely inhumane.”
“They're purposely trying to create that feeling within the public so that they can extract and exploit and extract and exploit.”
“Why are we trying to build AI systems that are duplicative of humans? ... the purpose of technology throughout history has been to improve human flourishing, not to replace people.”
“If most of the climate scientists in the world were bankrolled by fossil fuel companies, do you think we would get an accurate picture of the climate crisis? No. And in the same way, the AI industry employs and bankrolls most of the AI researchers in the world.”
“And do you know what the common feature of all of them is? They profit enormously off of this myth.”