The Evolving Landscape of LLM Evaluation
Edit, May 16: Added mention of Benchmarking Benchmark Leakage in Large Language Models (Xu et al., 2024).

Throughout recent years, LLM capabilities have outpaced evaluation benchmarks. This is not a new development.¹ The set of canonical LLM evals has narrowed further to a small group of benchmarks such as MMLU for general natural language understanding, GSM8K for mathematical reasoning, and HumanEval for code, among others. Recently, concerns regarding the reliability of even this small set of benchmarks…