Sporks of AGI

Sergey Levine

Training big models is really hard, and as the models get bigger and expand into new domains, it's only getting harder. LLMs need lots of text data, VLMs need paired text and image data, and vision-language-action (VLA) models in robotics need lots of data of robots performing real tasks in the real world. This hits agents especially hard: whether you want to control a real-world robot or take actions to fulfill user requests on the web, data of real-world interactions with action labels...

