A globally convergent proximal Newton-type method in nonsmooth convex optimization

Authors

Abstract

The paper proposes and justifies a new algorithm of the proximal Newton type to solve a broad class of nonsmooth composite convex optimization problems without strong convexity assumptions. Based on advanced notions and techniques of variational analysis, we establish implementable results on the global convergence of the proposed algorithm as well as its local convergence with superlinear and quadratic rates. For certain structured problems, the obtained conditions do not require Lipschitz continuity of the corresponding Hessian mappings, which is a crucial assumption used in the literature to ensure convergence of other algorithms of this type. The conducted numerical experiments on solving the \(l_1\) regularized logistic regression model illustrate the possibility of applying the proposed algorithm to deal with practically important problems.
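To make the setting concrete, here is a minimal numpy sketch of a proximal Newton-type iteration applied to the \(l_1\) regularized logistic regression model mentioned above. It uses a diagonal Hessian surrogate, so the scaled proximal subproblem reduces to coordinate-wise soft-thresholding, plus Armijo-type backtracking; all names are illustrative, and this is a simplified sketch, not the paper's algorithm.

import numpy as np

def soft_threshold(v, t):
    # Proximal mapping of u -> t*||u||_1, applied coordinate-wise.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_newton_l1_logreg(A, b, lam, max_iter=100, tol=1e-8):
    # Minimize (1/m) sum_i [log(1 + exp(a_i'x)) - b_i a_i'x] + lam*||x||_1
    # for labels b in {0,1}, via a diagonal-metric proximal Newton step.
    m, n = A.shape
    x = np.zeros(n)

    def obj(x):
        z = A @ x
        return np.mean(np.logaddexp(0.0, z) - b * z) + lam * np.abs(x).sum()

    for _ in range(max_iter):
        z = A @ x
        p = 0.5 * (1.0 + np.tanh(0.5 * z))            # sigmoid(Ax), stable form
        grad = A.T @ (p - b) / m                      # gradient of the smooth part
        w = p * (1.0 - p) / m
        H = (w[:, None] * A**2).sum(axis=0) + 1e-10   # diagonal Hessian surrogate
        # With a diagonal metric, the scaled proximal subproblem separates
        # into coordinate-wise soft-thresholding.
        x_trial = soft_threshold(x - grad / H, lam / H)
        d = x_trial - x
        if np.linalg.norm(d) <= tol:
            break
        # Armijo-type backtracking on the composite objective.
        decrease = grad @ d + lam * (np.abs(x_trial).sum() - np.abs(x).sum())
        t, f0 = 1.0, obj(x)
        while obj(x + t * d) > f0 + 1e-4 * t * decrease and t > 1e-12:
            t *= 0.5
        x = x + t * d
    return x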


Similar articles

A Quasi-Newton Approach to Nonsmooth Convex Optimization

We extend the well-known BFGS quasi-Newton method and its limited-memory variant (LBFGS) to the optimization of nonsmooth convex objectives. This is done in a rigorous fashion by generalizing three components of BFGS to subdifferentials: the local quadratic model, the identification of a descent direction, and the Wolfe line search conditions. We apply the resulting subLBFGS algorithm to L2-reg...
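For orientation, the smooth limited-memory machinery that subLBFGS builds on can be sketched as the standard LBFGS two-loop recursion below; the subdifferential-based quadratic model, descent-direction test, and generalized Wolfe conditions from the paper are not reproduced here, and the helper name is illustrative.

import numpy as np

def lbfgs_direction(grad, s_list, y_list):
    # Standard two-loop recursion: computes -H_k @ grad from stored pairs
    # s_i = x_{i+1} - x_i and y_i = g_{i+1} - g_i (smooth case only).
    q = grad.copy()
    alphas = []
    for s, y in reversed(list(zip(s_list, y_list))):   # newest to oldest
        a = np.dot(s, q) / np.dot(y, s)
        alphas.append(a)
        q -= a * y
    if s_list:                                         # initial Hessian scaling
        s, y = s_list[-1], y_list[-1]
        q *= np.dot(s, y) / np.dot(y, y)
    for (s, y), a in zip(zip(s_list, y_list), reversed(alphas)):  # oldest to newest
        b = np.dot(y, q) / np.dot(y, s)
        q += (a - b) * s
    return -q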


Proximal Newton-type methods for convex optimization

We seek to solve convex optimization problems in composite form: minimize over \(x \in \mathbb{R}^n\) the function \(f(x) := g(x) + h(x)\), where \(g\) is convex and continuously differentiable and \(h : \mathbb{R}^n \to \mathbb{R}\) is a convex but not necessarily differentiable function whose proximal mapping can be evaluated efficiently. We derive a generalization of Newton-type methods to handle such convex but nonsmooth objective functions. We prove such met...
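Schematically, the Newton-type generalization referred to here replaces the scalar stepsize of the proximal gradient method with a metric induced by \(H_k \approx \nabla^2 g(x_k)\) (notation assumed):

\[
x_{k+1} = \operatorname{prox}_h^{H_k}\!\bigl(x_k - H_k^{-1}\nabla g(x_k)\bigr),
\qquad
\operatorname{prox}_h^{H}(v) := \arg\min_{u \in \mathbb{R}^n}\Bigl\{\, h(u) + \tfrac{1}{2}\,\|u - v\|_H^2 \,\Bigr\},
\]

where \(\|u\|_H^2 = u^\top H u\); choosing \(H = t^{-1} I\) recovers the classical proximal gradient step with stepsize \(t\).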


A globally convergent incremental Newton method

Motivated by machine learning problems over large data sets and distributed optimization over networks, we develop and analyze a new method called incremental Newton method for minimizing the sum of a large number of strongly convex functions. We show that our method is globally convergent for a variable stepsize rule. We further show that under a gradient growth condition, convergence rate is ...
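As a loose illustration of the incremental idea, the toy sketch below cycles through strongly convex quadratic components, aggregating curvature as it goes; the stepsize rule is a placeholder, and this should not be read as the method analyzed in the paper.

import numpy as np

def incremental_newton(quads, passes=20):
    # Toy pass over f(x) = sum_i (0.5 x'Q_i x - c_i'x), each Q_i positive
    # definite; quads is a list of (Q_i, c_i) pairs.
    n = quads[0][0].shape[0]
    x = np.zeros(n)
    for k in range(1, passes + 1):
        H = np.zeros((n, n))
        for Q, c in quads:
            H += Q                                  # aggregate curvature so far
            g = Q @ x - c                           # gradient of current component
            x -= (1.0 / k) * np.linalg.solve(H, g)  # variable stepsize (assumed)
    return x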


A Globally Convergent LP-Newton Method

We develop a globally convergent algorithm based on the LP-Newton method, which has been recently proposed for solving constrained equations, possibly nonsmooth and possibly with nonisolated solutions. The new algorithm makes use of linesearch for the natural merit function and preserves the strong local convergence properties of the original LP-Newton scheme. We also present computational expe...
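The core of the LP-Newton method is a linear program in the step \(s\) and a scalar \(\gamma\), obtained by measuring the subproblem in infinity norms. The sketch below sets up that subproblem for a system \(F(z) = 0\) with Jacobian \(G\), omitting the constraint set and the merit-function linesearch discussed above; it is an assumption-laden illustration, not the authors' implementation.

import numpy as np
from scipy.optimize import linprog

def lp_newton_step(F, G):
    # One LP-Newton subproblem, in infinity norms:
    #   min gamma  s.t.  |F + G s| <= gamma * ||F||^2,   |s| <= gamma * ||F||.
    m, n = G.shape
    nF = np.linalg.norm(F, np.inf)
    if nF == 0.0:
        return np.zeros(n)                     # already a solution
    c = np.zeros(n + 1)
    c[-1] = 1.0                                # objective: minimize gamma
    col2 = -nF**2 * np.ones((m, 1))
    col1 = -nF * np.ones((n, 1))
    I = np.eye(n)
    A_ub = np.vstack([
        np.hstack([G, col2]),                  #  F + G s - gamma*||F||^2 <= 0
        np.hstack([-G, col2]),                 # -F - G s - gamma*||F||^2 <= 0
        np.hstack([I, col1]),                  #  s - gamma*||F|| <= 0
        np.hstack([-I, col1]),                 # -s - gamma*||F|| <= 0
    ])
    b_ub = np.concatenate([-F, F, np.zeros(2 * n)])
    bounds = [(None, None)] * n + [(0, None)]  # s free, gamma >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:n]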


A preconditioning proximal Newton method for nondifferentiable convex optimization

We propose a proximal Newton method for solving nondifferentiable convex optimization. This method combines the generalized Newton method with Rockafellar's proximal point algorithm. At each step, the proximal point is found approximately and the regularization matrix is preconditioned to overcome inexactness of this approximation. We show that such a preconditioning is possible within some ac...
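A skeleton of the proximal point outer loop with inexact subproblem solves may help fix ideas; here the inner solver is plain subgradient descent rather than the generalized Newton method with preconditioning described above, and all names are illustrative.

import numpy as np

def inexact_prox(f_subgrad, x, c, inner_iters=20):
    # Approximately evaluate prox_{c f}(x) = argmin_u f(u) + ||u - x||^2/(2c)
    # by subgradient descent on the regularized subproblem (illustrative only).
    u = x.copy()
    for k in range(1, inner_iters + 1):
        g = f_subgrad(u) + (u - x) / c   # subgradient of the subproblem
        u -= (c / k) * g                 # diminishing stepsizes
    return u

def proximal_point(f_subgrad, x0, c=1.0, outer_iters=50):
    # Rockafellar's proximal point iteration with inexact prox evaluations;
    # outer-loop skeleton only.
    x = np.asarray(x0, dtype=float)
    for _ in range(outer_iters):
        x = inexact_prox(f_subgrad, x, c)
    return x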



Journal

Journal title: Mathematical Programming

Year: 2022

ISSN: 0025-5610, 1436-4646

DOI: https://doi.org/10.1007/s10107-022-01797-5