🎙️
AIPodify

Ranked List

Best Podcast Episodes About Scaling Laws

We've compiled 3 podcast episodes about scaling laws from Invest Like the Best and the Lex Fridman Podcast, distilling each into AI-generated summaries, key takeaways, and actionable insights. Guests like Gavin Baker have covered this topic in depth. Each episode is scored by depth of insight: the most information-dense conversations are ranked first so you can skip straight to the best.

3 Episodes Ranked by Insight Depth

#1

Invest Like the Best

GPUs, TPUs, & The Economics of AI Explained | Gavin Baker Interview

  • To truly understand AI's capabilities, investors and researchers must use the highest-paid tiers of frontier models like Gemini Ultra or SuperGrok; free versions are analogous to judging an adult's potential based on a 10-year-old's abilities.
  • Scaling laws for AI pre-training remain empirically intact, as reaffirmed by Gemini 3, but post-training progress has been driven by new scaling laws: reinforcement learning with verifiable rewards (RLVR) and test-time compute, which bridged an 18-month gap in hardware development.
Read →
#2

Lex Fridman Podcast

State of AI in 2026: LLMs, Coding, Scaling Laws, China, Agents, GPUs, AGI | Lex Fridman Podcast #490

  • The 'DeepSeek moment' in January 2025, when the Chinese company DeepSeek released near-state-of-the-art open-weight models with allegedly less compute, ignited a furious global AI competition [02:05].
  • While US models like Claude Opus 4.5 and ChatGPT currently offer superior output quality for paying users, a growing number of Chinese companies like Z.ai, Minimax, and Kimi Moonshot are releasing increasingly strong open-weight models with highly permissive licenses [05:12, 20:33, 35:10].
Read →
#3

Lex Fridman Podcast

Jensen Huang: NVIDIA - The $4 Trillion Company & the AI Revolution | Lex Fridman Podcast #494

  • NVIDIA has transitioned to "extreme co-design" across the entire computing stack, from individual components like GPUs and CPUs to full data center infrastructure, to solve complex distributed problems that no longer fit on a single computer.
  • The necessity for extreme co-design arises from the goal of achieving speedups "a million times faster" than simply adding more computers, requiring intricate sharding of algorithms, data, and models.
Read →