Notes on the Origin of Implicit Regularization in SGD
I wanted to highlight an intriguing paper I presented at a journal club recently:

Samuel L. Smith, Benoit Dherin, David Barrett, Soham De (2021). On the Origin of Implicit Regularization in Stochastic Gradient Descent.

There's actually a related paper that came out simultaneously, studying full-batch gradient descent instead of SGD:

David G. T. Barrett, Benoit Dherin (2021). Implicit Gradient Regularization.

One of the most important insights in machine learning over the past few years relates to the...