Topic
Best Data Centers Podcast Episodes
Data centers are covered across 10 podcast episodes in our library, spanning 5 shows and 5 expert guests, including Invest Like the Best, The All-In Podcast, and The Knowledge Project. Conversations explore core themes such as scaling laws for pre-training, scaling laws for post-training, and the low-cost producer of tokens, drawing on firsthand experience and research from leading practitioners.
Below you'll find key insights, core concepts, and actionable advice aggregated from the top episodes — followed by a ranked list of the best data centers discussions to explore next.
Key Insights on Data Centers
- 1. To truly understand AI's capabilities, investors and researchers must use the highest paid tiers of frontier models like Gemini Ultra or SuperGrok; free versions are analogous to judging an adult's potential by a 10-year-old's abilities.
- 2. Scaling laws for AI pre-training remain empirically intact, as reaffirmed by Gemini 3, but post-training progress has been driven by two new scaling laws, reinforcement learning with verifiable rewards (RLVR) and test-time compute, which bridged an 18-month gap in hardware development.
- 3. Google currently holds a temporary advantage as the "lowest-cost producer of tokens" thanks to its advanced TPUs (v6/v7) and vertically integrated design process, allowing it to strategically undercut competitors and "suck the economic oxygen" out of the AI ecosystem.
- 4. Nvidia's next-generation Blackwell chips, particularly the GB300, are expected to shift the cost advantage, making the companies that deploy them (especially xAI, which builds data centers fastest) the new low-cost producers of tokens by early 2026.
- 5. Many large tech companies, including Meta, Microsoft, and Amazon, have struggled to build competitive frontier models, indicating that creating and maintaining a leading AI lab is far more complex than widely perceived, requiring not just capital but also sophisticated infrastructure management and research "taste."
- 6. The "flywheel effect" of user data feeding back into model improvement, absent in early AI, is now beginning to spin with reasoning models, creating more separation among the leading labs (OpenAI, Google's Gemini, Anthropic, xAI) that possess advanced internal checkpoints.
Key Concepts in Data Centers
Scaling laws for pre-training
These are empirical observations predicting how model performance improves with increased compute, data, and model size during the initial training phase. Gemini 3 notably confirmed these laws remain intact, even though researchers do not fully understand how or why they hold.
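As a concrete illustration (not drawn from the episodes themselves), the widely cited "Chinchilla" form of the pre-training scaling law (Hoffmann et al., 2022) models loss as a sum of an irreducible term and power-law terms in parameter count and data size:

```latex
% L = expected loss, N = parameter count, D = training tokens
% E = irreducible loss; A, B, \alpha, \beta are empirically fitted constants
L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}
```

The fitted exponents are small (roughly \(\alpha \approx 0.34\), \(\beta \approx 0.28\) in that paper), which is why each further drop in loss demands a large multiple of additional compute and data.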
Scaling laws for post-training
Two new scaling laws driving recent AI progress: reinforcement learning with verifiable rewards (RLVR) and test-time compute. RLVR trains models on outcomes that can be objectively verified (e.g., did a sale convert, did the model balance the books), while test-time compute refers to letting models 'think', i.e., process for longer, during inference. These laws enabled significant progress even while pre-training hardware was stalled.
Low-cost producer of tokens
In the AI industry, this refers to the entity that can generate AI output (tokens) at the lowest computational cost. Gavin Baker highlights that Google's TPUs have given them this advantage, allowing them to exert economic pressure on competitors. This metric is uniquely important in AI, unlike most traditional tech industries where low-cost production hasn't been the primary driver of market value.
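The economics behind this metric reduce to simple arithmetic: amortized hardware cost per hour divided by sustained token throughput. A minimal back-of-the-envelope sketch, with all dollar figures and throughput numbers being illustrative placeholders rather than real vendor pricing:

```python
# Hypothetical cost-per-token comparison for two accelerators.
# All numbers are made-up placeholders for illustration only.

def cost_per_million_tokens(hourly_cost_usd: float, tokens_per_second: float) -> float:
    """Amortized serving cost (USD) per 1M output tokens for one accelerator."""
    tokens_per_hour = tokens_per_second * 3600
    return hourly_cost_usd / tokens_per_hour * 1_000_000

# Chip A: $3.00/hour sustaining 2,500 tokens/s
# Chip B: $2.00/hour sustaining 2,000 tokens/s
chip_a = cost_per_million_tokens(3.00, 2500)  # ~ $0.33 per 1M tokens
chip_b = cost_per_million_tokens(2.00, 2000)  # ~ $0.28 per 1M tokens
```

Under these placeholder numbers, the cheaper-but-slower chip still wins on cost per token, which is why both hourly chip cost and serving throughput matter when judging who is the "low-cost producer."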
Reasoning (in AI)
Refers to an AI model's ability to 'think' or chain together logical steps to solve problems or generate more sophisticated outputs. The advent of reasoning models (like the first from OpenAI) significantly accelerated AI intelligence levels and enabled a 'flywheel' where user interactions and verifiable outcomes can be fed back to continuously improve the model.
Actionable Takeaways
- ✓Subscribe to the highest paid tiers of frontier AI models (e.g., Gemini Ultra, SuperGrok) to accurately assess their capabilities and track progress.
- ✓Follow leading AI researchers and engineers on platforms like X (formerly Twitter) and read their papers (e.g., Andrej Karpathy's work) to stay at the cutting edge of AI developments.
- ✓Listen to podcasts featuring engineers and researchers from the four leading AI labs (OpenAI, Google's Gemini, Anthropic, xAI) to gain direct insights into frontier model progress.
- ✓Utilize AI tools to manage information overload from the rapidly evolving AI landscape, like asking an AI to summarize podcasts or research papers you've consumed.
- ✓For SaaS companies: immediately embrace lower gross margins (e.g., 35-40%) for AI-driven agent strategies to avoid obsolescence and capitalize on existing customer data and distribution, as Microsoft has done with Copilot.
Top Episodes — Ranked by Insight (10)
Invest Like the Best
GPUs, TPUs, & The Economics of AI Explained | Gavin Baker Interview
To truly understand AI's capabilities, investors and researchers must use the highest paid tiers of frontier models like Gemini Ultra or Super Grock, as free versions are analogous to judging an adult's potential based on a 10-year-old's abilities.
The All-In Podcast
Software Stocks Implode, Claude's Hit List, State of the Union Reactions, Trump's Tariff Pivot
Anthropic's AI products, specifically "Claude Co-work," "Claude Code Security," and Claude's modernization of legacy COBOL databases, have been linked to significant market cap losses in the legal, security, and banking sectors, impacting companies like IBM.
The All-In Podcast
Two Legendary Founders: Travis Kalanick & Michael Dell Live from Austin, Texas
Travis Kalanick's new company, Atoms (formerly City Storage Systems), is focused on digitizing the physical world by building an "atoms-based computer" that applies CPU, storage, and network principles to manufacturing, real estate, and logistics in various industries.
The Knowledge Project
The CEO Who Manages $1 Trillion: How to De-Risk Deals, Deploy Capital & Build Wealth | Connor Teskey
Brookfield manages approximately $1 trillion, globally allocated across 60 countries, primarily focusing on "high-quality assets that make up the backbone of the global economy" [00:03, 04:47].
The Knowledge Project
The CEO Who Manages $1 Trillion: AI, Opportunities, and Risk | Connor Teskey
Brookfield's investment strategy focuses on high-quality assets that constitute the "backbone of the global economy," a definition that continually evolves from hydro dams to solar farms and from ports to data centers.
The All-In Podcast
Iran War, Oil Shock, Off Ramps, AI's Revenue Explosion and PR Nightmare
Brent crude oil prices have seen massive volatility, spiking from $84 to $119 per barrel amidst the "Iran War," leading Goldman Sachs to raise PCE inflation forecasts to 2.9% and lower GDP projections by 30 basis points.
Darknet Diaries
There's No Way Into This Tech Company's Server Room ... Except Through the Sewer (Episode 166: Maxie)
Physical penetration testing often leverages open-source intelligence (OSINT) to identify potential entry points or pretexts, such as knowledge of a company's international connections or maintenance schedules (08:35, 45:59).
The Knowledge Project
Brookfield's C.E.O. on Why They Lock In Everything Before Breaking Ground
Brookfield's core strategy for large-scale projects involves locking in four key contracts—CapEx, off-take, EPC, and financing—all at once before putting any capital in the ground.
The All-In Podcast
Friedberg’s Datacenter Wake-Up Call: If We Don't Build Them Here, Other Countries Will
Data centers are a globally mobile industry, not geographically constrained by data transfer speeds, creating international competition for their development.
My First Million
Anthropic did $6B in revenue in one month.
Anthropic achieved an unprecedented $6 billion in revenue in a single month, specifically February.
Episodes ranked by insight density — scored on key takeaways, concepts explained, and actionable advice. AI-generated summaries; listen to full episodes for complete context.