The Evolving Landscape of LLM Evaluation

Sebastian Ruder

Edit, May 16: Added mention of Benchmarking Benchmark Leakage in Large Language Models (Xu et al., 2024).

Throughout recent years, LLM capabilities have outpaced evaluation benchmarks. This is not a new development. The set of canonical LLM evals has further narrowed to a small set of benchmarks such as MMLU for general natural language understanding, GSM8K for mathematical reasoning, and HumanEval for code, among others. Recently, concerns regarding the reliability of even this small set of benchmarks...
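
To make the leakage concern concrete, here is a minimal sketch of the kind of word-level n-gram overlap check commonly used to screen benchmarks for training-data contamination (the GPT-3 paper, for instance, used 13-gram overlap). The corpus, n-gram size, threshold, and function names below are illustrative assumptions, not the detection method of Xu et al. (2024).

```python
import re

# Minimal sketch of an n-gram overlap contamination check.
# Illustrative only: the corpus, n-gram size, and threshold are
# hypothetical choices, not the methodology of Xu et al. (2024).

def ngrams(text: str, n: int) -> set[tuple[str, ...]]:
    """Return the set of word-level n-grams in `text`, ignoring case and punctuation."""
    tokens = re.findall(r"[a-z0-9]+", text.lower())
    return {tuple(tokens[i : i + n]) for i in range(len(tokens) - n + 1)}

def overlap_ratio(example: str, corpus_ngrams: set[tuple[str, ...]], n: int) -> float:
    """Fraction of the example's n-grams that also appear in the corpus."""
    example_ngrams = ngrams(example, n)
    if not example_ngrams:
        return 0.0
    return len(example_ngrams & corpus_ngrams) / len(example_ngrams)

if __name__ == "__main__":
    # Hypothetical training-corpus snippet and benchmark item; a real check
    # would stream n-grams from the full pretraining corpus into an index.
    corpus = "natalia sold clips to 48 of her friends in april and then half as many in may"
    item = "Natalia sold clips to 48 of her friends in April, and then she sold half as many clips in May."
    ratio = overlap_ratio(item, ngrams(corpus, n=8), n=8)
    if ratio > 0.3:  # arbitrary threshold for the sketch
        print(f"possible contamination (overlap = {ratio:.2f}): {item!r}")
```

A real pipeline would build the n-gram index over the entire pretraining corpus and manually inspect flagged items, since exact overlap misses paraphrased or translated leakage, which is part of why contamination is so hard to rule out.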
