Topic Guide
What Are GPUs?
GPUs are a subject covered in depth across 2 podcast episodes in our database. Below you'll find key concepts, expert insights, and the top episodes to listen to, all distilled from hours of conversation with leading experts.
Key Concepts in GPUs
Scaling laws for pre-training
These are empirical observations that predict how model performance improves with increased compute, data, and model size during the initial training phase. Gemini 3 notably confirmed these laws remain intact, even though researchers still do not fully understand how or why they work.
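As an illustration, pre-training scaling laws are usually written as power laws: an irreducible loss floor plus terms that shrink as parameters and training data grow. The sketch below uses constants that loosely echo published fits, but all numbers are illustrative, not fitted values:

```python
# Hypothetical power-law scaling sketch: loss as a function of parameter
# count (N) and training tokens (D). Constants are illustrative only.
def predicted_loss(n_params: float, n_tokens: float,
                   e: float = 1.69, a: float = 406.4, b: float = 410.7,
                   alpha: float = 0.34, beta: float = 0.28) -> float:
    """Irreducible loss plus power-law terms that fall as N and D grow."""
    return e + a / n_params**alpha + b / n_tokens**beta

# Scaling up both model size and data monotonically lowers predicted loss.
small = predicted_loss(1e9, 2e10)     # ~1B params, ~20B tokens
large = predicted_loss(7e10, 1.4e12)  # ~70B params, ~1.4T tokens
assert large < small
```

The key property the podcast discussion relies on is exactly this monotonicity: more compute, data, and parameters predictably buy lower loss, even without a mechanistic explanation of why.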
Scaling laws for post-training
Two new scaling laws driving recent AI progress: Reinforcement Learning with Verified Rewards (RLVR) and test-time compute. RLVR involves training AI models using outcomes that can be objectively verified (e.g., did a sale convert, did the model balance the books), while test-time compute refers to allowing models to 'think' or process for longer during inference. These laws enabled significant progress even when pre-training hardware was stalled.
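The RLVR idea can be sketched as a tiny training loop in which the reward comes from an objective checker rather than a learned preference model. Everything below (the toy task, verifier, stochastic "policy", and update rule) is hypothetical, not any lab's actual pipeline:

```python
import random

def verifier(problem, answer):
    """Objective reward: did the answer actually solve the problem?
    (Analogous to 'did the sale convert' or 'did the books balance'.)"""
    a, b = problem
    return answer == a + b

def model_answer(problem, skill):
    """Toy stochastic 'policy': higher skill means more correct answers."""
    a, b = problem
    return a + b if random.random() < skill else a + b + 1

def rlvr_step(skill, n_rollouts=100, lr=0.05):
    """Sample rollouts, score each with the verifier, and nudge the policy
    toward its verified success rate (a stand-in for a policy update)."""
    problems = [(random.randint(0, 9), random.randint(0, 9))
                for _ in range(n_rollouts)]
    success = sum(verifier(p, model_answer(p, skill))
                  for p in problems) / n_rollouts
    return min(1.0, skill + lr * success)

random.seed(0)
skill = 0.5
for _ in range(50):
    skill = rlvr_step(skill)
```

The point of the sketch is the feedback structure: because the reward is verifiable, every rollout produces a clean training signal, which is what drives the "flywheel" described later in this guide.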
Low-cost producer of tokens
In the AI industry, this refers to the entity that can generate AI output (tokens) at the lowest computational cost. Gavin Baker highlights that Google's TPUs have given them this advantage, allowing them to exert economic pressure on competitors. This metric is uniquely important in AI, unlike most traditional tech industries where low-cost production hasn't been the primary driver of market value.
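Why being the low-cost producer of tokens matters can be shown with back-of-the-envelope unit economics. All figures below are made up for illustration; they are not real prices or costs for any provider:

```python
# Hypothetical unit economics: gross margin on tokens sold at the same
# price by two providers with different serving costs. Illustrative only.
def gross_margin(price_per_mtok: float, cost_per_mtok: float) -> float:
    """Fraction of revenue kept after compute costs."""
    return (price_per_mtok - cost_per_mtok) / price_per_mtok

price = 10.0                            # $ per million tokens (hypothetical)
incumbent = gross_margin(price, 6.0)    # higher-cost hardware -> 40% margin
low_cost = gross_margin(price, 2.0)     # cheaper in-house chips -> 80% margin

# The low-cost producer can cut its price to the rival's break-even cost
# ($6/M tokens) and still retain a healthy margin.
assert gross_margin(6.0, 2.0) > 0.5
```

This is the economic pressure the passage describes: the cheaper producer can set prices that are unprofitable for everyone else while remaining profitable itself.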
Reasoning (in AI)
Refers to an AI model's ability to 'think' or chain together logical steps to solve problems or generate more sophisticated outputs. The advent of reasoning models (like the first from OpenAI) significantly accelerated AI intelligence levels and enabled a 'flywheel' where user interactions and verifiable outcomes can be fed back to continuously improve the model.
Data centers in space
A visionary concept for AI infrastructure, proposing orbiting satellites equipped with chips for compute. From first principles, they offer superior conditions: constant, intense solar power (eliminating batteries) and free cooling to near absolute zero in the vacuum of space (reducing complex cooling systems), potentially leading to faster, lower-cost inference and training via laser communication.
SaaS AI margin mistake
The error made by incumbent Software-as-a-Service (SaaS) companies that are reluctant to adopt AI agents because doing so would reduce their traditional 70-90% gross margins to 35-40%. This is compared to brick-and-mortar retailers ignoring e-commerce. Gavin argues this reluctance will leave them vulnerable to AI-native startups willing to operate at the lower margins, eventually leading to their displacement.
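The margin math behind this argument is simple to work through. Using the midpoints of the ranges above (80% traditional SaaS margin, 37.5% AI-agent margin, both applied to a hypothetical $100 of revenue):

```python
# Illustrative only: gross-profit dollars per $100 of revenue at
# traditional SaaS margins versus AI-agent margins.
def gross_profit(revenue: float, margin: float) -> float:
    return revenue * margin

saas = gross_profit(100, 0.80)    # 80% margin  -> $80.00 gross profit
agent = gross_profit(100, 0.375)  # 37.5% margin -> $37.50 gross profit

# To keep the same gross-profit dollars at agent margins, revenue must
# roughly double (~$213 instead of $100).
required_revenue = saas / 0.375
assert abs(required_revenue - 213.33) < 0.01
```

This is why the shift is painful for incumbents but attractive to startups with no legacy margin structure to defend: the startup only needs the $37.50, while the incumbent must more than double revenue just to stand still.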
What Experts Say About GPUs
1. The CoreWeave CEO asserts that the GPU depreciation debate, particularly regarding the H100's shelf life, is "nonsense" primarily fueled by short sellers trying to depress stock prices.
2. CoreWeave's business model involves clients securing compute resources for an average of five years, with some contracts extending to six years.
3. This contractual reality directly contradicts claims that GPUs become obsolete in as little as 16 months, since customers are willing to pay for longer-term usage.
4. CoreWeave uses a six-year depreciation period for its GPUs, despite believing the hardware will effectively last even longer.
5. The CEO takes a pragmatic approach to asset valuation: if customers are willing to pay for a technology, it inherently retains its value.
6. To truly understand AI's capabilities, investors and researchers must use the highest paid tiers of frontier models, such as Gemini Ultra or SuperGrok; free versions are analogous to judging an adult's potential by a 10-year-old's abilities.