The other paper that killed deep learning theory

Alignment Forum

Yesterday, I wrote about the state of deep learning theory circa 2016,[1] as well as the bombshell 2016 paper by Zhang et al. that arguably signaled its demise. Today, I cover the aftermath, and the 2019 paper that devastated deep learning theory again. As a brief summary, I argued that the rise of deep learning posed an existential challenge to the dominant theoretical paradigm of statistical learning theory, because neural networks have a lot of complexity. The response from the field was to a...


Related Articles

Welfare Biology and AI: The Psychopath, the Nematode, and the Arahant
Dawn Drescher · EA Forum · 4d ago
Immigration changes are driving foreign researchers to leave the U.S. — or not come to begin with
Andrew Joseph · STAT News · 4d ago
Models Recall What They Violate: Constraint Adherence in Multi-Turn LLM Ideation
Garvin Kruthof · ArXiv cs.AI · 4d ago
Looking for papers on general formalizations of "agency"
lovagrus · LessWrong · 5d ago
SFF’s HSEE grant round; human intelligence amplification projects I’d like to see
TsviBT · Nuno Sempere · 8d ago