Reiner Pope – The math behind how LLMs are trained and served

Dwarkesh Patel

Did a very different format with Reiner Pope - a blackboard lecture where he walks through how frontier LLMs are trained and served. It’s shocking how much you can deduce about what the labs are doing from a handful of equations, public API prices, and some chalk. It’s a bit technical, but I encourage you to hang in there – it’s really worth it. There are fewer than a handful of people in the world who understand the full stack of AI, from chip design to model architecture, as well as Reiner. It was...


Related Articles

Accelerating Gemma 4: faster inference with multi-token prediction drafters
amrrs · Hacker News · 3d ago
ProgramBench: Can language models rebuild programs from scratch?
jonbaer · Hacker News · 1d ago
ZAYA1-8B matches DeepSeek-R1 on math with less than 1B active parameters
steveharing1 · Hacker News · 1d ago
OpenAI’s o1 correctly diagnosed 67% of ER patients vs. 50-55% by triage doctors
donsupreme · Hacker News · 6d ago
A couple million lines of Haskell: Production engineering at Mercury
unignorant · Hacker News · 6d ago