Abstract

Classical global convergence results for first-order methods rely on uniform smoothness and the \L{}ojasiewicz inequality. Motivated by properties of objective functions that arise in
machine learning, we propose a non-uniform refinement of these notions, leading to \emph{