Understanding the Unstable Convergence of Gradient Descent

Kwangjun Ahn, Jingzhao Zhang, Suvrit Sra

Research output: Contribution to journal › Conference article › peer-review


Abstract

Most existing analyses of (stochastic) gradient descent rely on the condition that for L-smooth costs, the step size is less than 2/L. However, many works have observed that in machine learning applications, step sizes often do not satisfy this condition, yet (stochastic) gradient descent still converges, albeit in an unstable manner. We investigate this unstable convergence phenomenon from first principles and discuss the key causes behind it. We also identify its main characteristics and how they interrelate, based on both theory and experiments, offering a principled view toward understanding the phenomenon.
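As a point of reference for the classical condition the abstract mentions, the following minimal sketch (not from the paper; all names are illustrative) runs gradient descent on a 1-D quadratic f(x) = (L/2)x², which is L-smooth, with step sizes just below and just above the 2/L threshold. On a quadratic, exceeding the threshold makes the iterates diverge; the paper's point is that neural-network losses behave differently, continuing to make progress even when the threshold is violated.

```python
# Minimal sketch (illustrative, not from the paper): the classical 2/L
# step-size threshold for gradient descent on an L-smooth quadratic.

def run_gd(eta, L=1.0, x0=1.0, steps=20):
    """Run gradient descent x <- x - eta * f'(x) with f(x) = (L/2) x^2."""
    x = x0
    losses = []
    for _ in range(steps):
        grad = L * x          # f'(x) = L x
        x = x - eta * grad    # each step multiplies x by (1 - eta * L)
        losses.append(0.5 * L * x * x)
    return losses

if __name__ == "__main__":
    L = 1.0
    for eta in (1.9 / L, 2.1 / L):  # just below / just above the 2/L threshold
        losses = run_gd(eta, L)
        print(f"eta = {eta:.2f}: loss after 20 steps = {losses[-1]:.3e}")
    # Below 2/L the loss shrinks geometrically (|1 - eta*L| < 1); above 2/L it
    # grows, since a quadratic has no region of lower curvature for the
    # iterates to exploit -- unlike the losses studied in the paper, where
    # convergence is observed despite violating the threshold.
```

This sketch only illustrates why step sizes above 2/L are excluded by standard analyses; the unstable-convergence behavior the paper studies arises on non-quadratic machine learning losses and is not reproduced by this toy example.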

Original language: English
Pages (from-to): 247-257
Number of pages: 11
Journal: Proceedings of Machine Learning Research
Volume: 162
State: Published - 2022
Externally published: Yes
Event: 39th International Conference on Machine Learning, ICML 2022 - Baltimore, United States
Duration: 17 Jul 2022 - 23 Jul 2022
