Abstract
A central challenge in using first-order methods to optimize nonconvex problems is the presence of saddle points. First-order methods often get stuck at saddle points, greatly deteriorating their performance. Typically, to escape from saddles one has to use second-order methods. However, most works on second-order methods rely extensively on expensive Hessian-based computations, making them impractical in large-scale settings. To tackle this challenge, we introduce a generic framework that minimizes Hessian-based computations while at the same time provably converging to second-order critical points. Our framework carefully alternates between a first-order and a second-order subroutine, using the latter only close to saddle points, and yields convergence results competitive with the state of the art. Empirical results suggest that our strategy also enjoys good practical performance.
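To make the alternating strategy in the abstract concrete, here is a minimal Python sketch, not the paper's actual algorithm: `grad_f` and `hvp_f` are assumed gradient and Hessian-vector-product oracles, the shift `c` is assumed to upper-bound the Hessian's spectral norm, and the tolerances `eps_g`, `eps_h` and step size `eta` are illustrative placeholders.

```python
import numpy as np

def min_eig_direction(hvp, dim, c=10.0, iters=100):
    """Approximate the smallest Hessian eigenvalue and its eigenvector by
    power iteration on the shifted operator (c*I - H), which needs only
    Hessian-vector products, never the full Hessian.
    NOTE: c is an assumed upper bound on the Hessian spectral norm."""
    v = np.random.randn(dim)
    v /= np.linalg.norm(v)
    for _ in range(iters):
        v = c * v - hvp(v)        # apply (c*I - H) to the current iterate
        v /= np.linalg.norm(v)
    return v @ hvp(v), v          # Rayleigh quotient = eigenvalue estimate

def alternating_descent(x0, grad_f, hvp_f, eta=0.01,
                        eps_g=1e-4, eps_h=1e-3, max_iters=10000):
    """Run cheap gradient steps; invoke the (expensive) second-order
    subroutine only when the gradient is small, i.e. near a stationary
    point that may be a saddle."""
    x = x0.copy()
    for _ in range(max_iters):
        g = grad_f(x)
        if np.linalg.norm(g) > eps_g:
            x -= eta * g          # first-order regime: plain gradient step
        else:
            lam, v = min_eig_direction(lambda u: hvp_f(x, u), x.size)
            if lam >= -eps_h:
                return x          # approximate second-order critical point
            if g @ v > 0:         # orient v so the step is non-ascending
                v = -v
            x += eta * v          # negative-curvature step escapes the saddle
    return x
```

The design point this sketch illustrates is the one the abstract makes: the second-order machinery (here, power iteration via Hessian-vector products) is touched only when the gradient norm falls below a threshold, so the per-iteration cost stays first-order almost everywhere.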
| Original language | English |
| --- | --- |
| Pages | 1233-1242 |
| Number of pages | 10 |
| State | Published - 2018 |
| Externally published | Yes |
| Event | 21st International Conference on Artificial Intelligence and Statistics, AISTATS 2018 - Playa Blanca, Lanzarote, Canary Islands, Spain. Duration: 9 Apr 2018 → 11 Apr 2018 |
Conference
| Conference | 21st International Conference on Artificial Intelligence and Statistics, AISTATS 2018 |
| --- | --- |
| Country/Territory | Spain |
| City | Playa Blanca, Lanzarote, Canary Islands |
| Period | 9/04/18 → 11/04/18 |