True Zero-shot MT

Sebastian Ruder

A little over a week ago, Gemini 1.5 reported close to human-level performance on MTOB, a recent challenging translation dataset. In this newsletter, we’ll dig into this result, explore true zero-shot machine translation (MT), and consider how to teach LLMs a new language the way humans learn one.

Low-resource MT

To set the scene, let’s first consider what it means for a language to be considered “low-resource”. As with LLMs, the performance of MT models depends on the amount of training data—both parallel and m...


Related Articles

Accelerating Gemma 4: faster inference with multi-token prediction drafters
amrrs · Hacker News · 4d ago
OpenAI’s o1 correctly diagnosed 67% of ER patients vs. 50-55% by triage doctors
donsupreme · Hacker News · 6d ago
ProgramBench: Can language models rebuild programs from scratch?
jonbaer · Hacker News · 2d ago
ZAYA1-8B matches DeepSeek-R1 on math with less than 1B active parameters
steveharing1 · Hacker News · 2d ago
A couple million lines of Haskell: Production engineering at Mercury
unignorant · Hacker News · 6d ago