Proximal Methods Avoid Active Strict Saddles of Weakly Convex Functions

Authors

Abstract

We introduce a geometrically transparent strict saddle property for nonsmooth functions. This property guarantees that simple proximal algorithms on weakly convex problems converge only to local minimizers when randomly initialized. We argue that the strict saddle property may be a realistic assumption in applications, since it provably holds for generic semi-algebraic optimization problems.
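
As a concrete toy illustration of the escape phenomenon, consider the proximal point method on f(x, y) = x^2 - y^2, which is 2-weakly convex and whose only critical point, the origin, is a strict saddle. The sketch below is our own minimal example, not the paper's code; for step parameter lam < 1/2 the proximal subproblem is strongly convex and has a closed-form solution.

```python
import numpy as np

# Minimal sketch (not the paper's code): proximal point iteration on the
# 2-weakly-convex quadratic f(x, y) = x**2 - y**2.  Its only critical
# point, the origin, is a strict saddle.
lam = 0.25                            # proximal parameter; needs lam < 1/rho = 1/2
rng = np.random.default_rng(0)
z = rng.standard_normal(2) * 1e-3     # random initialization near the saddle

for k in range(20):
    x, y = z
    # argmin_w  f(w) + (1/(2*lam)) * ||w - z||^2, computed coordinate-wise:
    z = np.array([x / (1 + 2 * lam), y / (1 - 2 * lam)])

# The x-coordinate contracts to 0 while the y-coordinate expands: the
# iterates escape the saddle.  (This f is unbounded below, so they then
# run off in y; the point is only that they do not converge to the saddle.)
print(z)
```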


Similar resources

Proximal Quasi-Newton Methods for Convex Optimization

In [19], a general, inexact, efficient proximal quasi-Newton algorithm for composite optimization problems has been proposed and a sublinear global convergence rate has been established. In this paper, we analyze the convergence properties of this method, both in the exact and inexact setting, in the case when the objective function is strongly convex. We also investigate a practical variant of t...
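
As a hedged sketch of the scaled proximal step at the heart of such methods (a simplified special case, not the algorithm of [19]): with a diagonal metric H = diag(d) and h = mu*||.||_1, the subproblem separates coordinate-wise into weighted soft-thresholding. The helper name diag_prox_qn_step is ours.

```python
import numpy as np

def diag_prox_qn_step(x, grad_g, H_diag, mu):
    """One scaled proximal step for g(x) + mu*||x||_1 with diagonal metric
    H = diag(H_diag).  Solves
        argmin_z  grad_g.T @ (z - x) + 0.5*(z - x).T @ H @ (z - x) + mu*||z||_1,
    which separates into weighted soft-thresholding (hypothetical helper;
    a simplified sketch, not the method of [19])."""
    u = x - grad_g / H_diag            # minimizer of the smooth quadratic part
    thresh = mu / H_diag               # coordinate-wise shrinkage threshold
    return np.sign(u) * np.maximum(np.abs(u) - thresh, 0.0)
```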


Proximal Newton-type methods for convex optimization

We seek to solve convex optimization problems in composite form: minimize_{x ∈ R^n} f(x) := g(x) + h(x), where g is convex and continuously differentiable and h : R^n → R is a convex but not necessarily differentiable function whose proximal mapping can be evaluated efficiently. We derive a generalization of Newton-type methods to handle such convex but nonsmooth objective functions. We prove such met...
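
Taking H = (1/t)I in the Newton-type step recovers the proximal gradient method, the simplest member of this family. Below is a minimal sketch (our illustration, not the paper's code) for g(x) = 0.5*||Ax - b||^2 and h(x) = mu*||x||_1, whose proximal mapping is soft-thresholding.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((40, 100))
b = rng.standard_normal(40)
mu = 0.1
t = 1.0 / np.linalg.norm(A, 2) ** 2          # step size 1/L, L = ||A||_2^2

x = np.zeros(100)
for k in range(500):
    grad = A.T @ (A @ x - b)                 # gradient of g(x) = 0.5*||Ax - b||^2
    u = x - t * grad                         # forward (gradient) step
    x = np.sign(u) * np.maximum(np.abs(u) - t * mu, 0.0)  # prox of t*mu*||.||_1

# x is now an approximately sparse solution of this toy lasso problem
```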


Convex risk minimization via proximal splitting methods

In this paper we investigate the applicability of a recently introduced primal-dual splitting method in the context of solving portfolio optimization problems which assume the minimization of risk measures associated to different convex utility functions. We show that, due to the splitting characteristic of the used primal-dual method, the main effort in implementing it consists in the calcu...
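
That remark can be made concrete: one proximal point that recurs in portfolio problems is the Euclidean projection onto the probability simplex, i.e., the proximal mapping of the simplex indicator function. The routine below is a standard sorting-based sketch (our illustration, with the hypothetical helper name project_simplex), not the paper's implementation.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto {x : x >= 0, sum(x) = 1}, i.e. the
    proximal map of the simplex indicator (standard sorting algorithm)."""
    u = np.sort(v)[::-1]                  # entries in decreasing order
    css = np.cumsum(u) - 1.0
    ks = np.arange(1, len(v) + 1)
    rho = ks[u - css / ks > 0][-1]        # largest feasible support size
    theta = css[rho - 1] / rho            # shift making the result sum to 1
    return np.maximum(v - theta, 0.0)

print(project_simplex(np.array([0.1, 0.2, 0.3])))  # -> nonnegative, sums to 1
```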


Stochastic model-based minimization of weakly convex functions

We consider an algorithm that successively samples and minimizes stochastic models of the objective function. We show that under weak convexity and Lipschitz conditions, the algorithm drives the expected norm of the gradient of the Moreau envelope to zero at the rate O(k^{-1/4}). Our result yields the first complexity guarantees for the stochastic proximal point algorithm on weakly convex problems...
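
For reference, the stationarity measure used there is the gradient of the Moreau envelope. For a ρ-weakly convex φ and any λ < 1/ρ, standard definitions (not specific to this paper) give:

```latex
\varphi_\lambda(x) = \min_{z}\Big\{ \varphi(z) + \tfrac{1}{2\lambda}\|z - x\|^2 \Big\},
\qquad
\nabla \varphi_\lambda(x) = \tfrac{1}{\lambda}\bigl(x - \operatorname{prox}_{\lambda\varphi}(x)\bigr),
```

so a small ‖∇φ_λ(x)‖ certifies that x is close to a point that is nearly stationary for φ.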



Journal

Journal title: Foundations of Computational Mathematics

Year: 2021

ISSN: 1615-3383, 1615-3375

DOI: https://doi.org/10.1007/s10208-021-09516-w