Search results for: nonsmooth convex optimization problem

Number of results: 1134849

2017
David Balduzzi Brian McWilliams Tony Butler-Yeoman

Modern convolutional networks, incorporating rectifiers and max-pooling, are neither smooth nor convex; standard guarantees therefore do not apply. Nevertheless, methods from convex optimization such as gradient descent and Adam are widely used as building blocks for deep learning algorithms. This paper provides the first convergence guarantee applicable to modern convnets, which furthermore ma...

1995
Michael Patriksson

Subgradient methods are popular tools for nonsmooth, convex minimization, especially in the context of Lagrangean relaxation; their simplicity has been a major factor in their success. As a consequence of the nonsmoothness, it is not straightforward to monitor the progress of a subgradient method in terms of the approximate fulfilment of optimality conditions, since the subgradients used i...
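Since this entry describes the basic subgradient iteration, here is a minimal sketch of a plain subgradient method with a diminishing step rule (the oracle names `f` and `subgrad` and the step rule are illustrative assumptions, not Patriksson's specific scheme):

```python
import numpy as np

def subgradient_method(f, subgrad, x0, n_iters=1000):
    """Plain subgradient method with diminishing steps t_k = 1/sqrt(k+1).
    subgrad(x) may return any element of the subdifferential of f at x.
    The best iterate is tracked explicitly because f(x_k) need not decrease
    monotonically, which is one reason progress is hard to monitor."""
    x = np.asarray(x0, dtype=float)
    x_best, f_best = x.copy(), f(x)
    for k in range(n_iters):
        x = x - subgrad(x) / np.sqrt(k + 1.0)  # diminishing step size
        fx = f(x)
        if fx < f_best:                        # keep the best point seen so far
            x_best, f_best = x.copy(), fx
    return x_best, f_best

# Example: minimize f(x) = ||x||_1; sign(x) is a valid subgradient everywhere.
x_best, f_best = subgradient_method(lambda x: np.abs(x).sum(), np.sign, np.ones(5))
```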

2011
Weijun Zhou

A hybrid HS and PRP type conjugate gradient method for smooth optimization is presented, which reduces to the classical PRP or HS method if an exact line search is used, and which converges globally and R-linearly for nonconvex functions with an inexact backtracking line search under standard assumptions. An inexact version of the proposed method which admits possible approximate gradients and/or approxi...
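As a rough illustration of the hybrid idea, the sketch below combines the HS and PRP parameters via beta = max(0, min(beta_HS, beta_PRP)) and uses Armijo backtracking as the inexact line search; this is a common hybridization pattern, not necessarily the exact rule of the paper:

```python
import numpy as np

def hybrid_hs_prp_cg(f, grad, x0, n_iters=500, tol=1e-8):
    """Nonlinear conjugate gradient with a hybrid HS/PRP parameter and an
    Armijo backtracking (inexact) line search."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(n_iters):
        if np.linalg.norm(g) < tol:
            break
        gd = g @ d
        if gd >= 0:                     # safeguard: restart along steepest descent
            d, gd = -g, -(g @ g)
        t = 1.0                         # Armijo backtracking
        while f(x + t * d) > f(x) + 1e-4 * t * gd:
            t *= 0.5
        x_new = x + t * d
        g_new = grad(x_new)
        y = g_new - g                   # gradient difference y_k
        beta_hs = (g_new @ y) / (d @ y + 1e-16)  # Hestenes-Stiefel
        beta_prp = (g_new @ y) / (g @ g)         # Polak-Ribiere-Polyak
        d = -g_new + max(0.0, min(beta_hs, beta_prp)) * d
        x, g = x_new, g_new
    return x

# Example on a smooth convex function: f(x) = ||x||^2 + sum(cos(x)).
x_min = hybrid_hs_prp_cg(lambda x: (x**2).sum() + np.cos(x).sum(),
                         lambda x: 2*x - np.sin(x), np.ones(4))
```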

Journal: :Foundations of Computational Mathematics 2021

We introduce a geometrically transparent strict saddle property for nonsmooth functions. This property guarantees that simple proximal algorithms on weakly convex problems converge only to local minimizers when randomly initialized. We argue that the property may be a realistic assumption in applications, since it provably holds for generic semi-algebraic optimization problems.
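For intuition, here is a minimal proximal point sketch for a rho-weakly convex f; with regularization lam > rho each subproblem is strongly convex, which is what makes "simple proximal algorithms" well defined in this setting (the inner solver choice and parameters are illustrative assumptions):

```python
import numpy as np
from scipy.optimize import minimize

def proximal_point(f, x0, rho=1.0, n_iters=50):
    """Proximal point method for rho-weakly convex f:
        x_{k+1} = argmin_y  f(y) + (lam / 2) * ||y - x_k||^2,
    with lam > rho so each subproblem is strongly convex. The inner solve
    uses a derivative-free scipy routine purely for illustration."""
    lam = 2.0 * rho
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iters):
        prox_obj = lambda y, xc=x: f(y) + 0.5 * lam * np.sum((y - xc) ** 2)
        x = minimize(prox_obj, x, method="Nelder-Mead").x
    return x
```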

2013
Jérôme Malick Welington Oliveira

We consider convex nonsmooth optimization problems whose objective function is known through a (fine) oracle together with some additional (cheap but poor) information, formalized as a second coarse oracle with uncontrolled inexactness. This is the case when the objective function is itself the output of an optimization solver, using a branch-and-bound procedure, or decomposing the problem into p...
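A sketch of what the two-oracle setting could look like as a programming interface (all names are hypothetical; the bundle machinery of the paper is not reproduced):

```python
from typing import Callable, Tuple
import numpy as np

# An oracle maps a point x to a (value, subgradient) pair.
Oracle = Callable[[np.ndarray], Tuple[float, np.ndarray]]

class TwoOracleObjective:
    """fine: accurate but expensive (e.g. a full inner optimization solve).
    coarse: cheap but with uncontrolled error (e.g. the bound returned by a
    truncated branch-and-bound run or an early-stopped decomposition)."""
    def __init__(self, fine: Oracle, coarse: Oracle):
        self.fine = fine
        self.coarse = coarse
```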

Journal: :SIAM Journal on Optimization 2009
Aris Daniilidis Claudia A. Sagastizábal Mikhail V. Solodov

We consider the problem of minimizing nonsmooth convex functions, defined piecewise by a finite number of functions, each of which is either convex quadratic or twice continuously differentiable with a positive definite Hessian on the set of interest. This is a particular case of functions with primal-dual gradient structure, a notion closely related to the so-called VU space decomposition: at a g...
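One simple member of this function class, useful for testing, is a finite max of convex quadratics (an illustrative example, not the paper's full generality):

```python
import numpy as np

def max_of_quadratics(As, bs, cs):
    """f(x) = max_i q_i(x), q_i(x) = 0.5 x^T A_i x + b_i^T x + c_i, with each
    A_i positive definite: convex, nonsmooth, and piecewise quadratic.
    Returns f(x) and one subgradient (the gradient of an active piece)."""
    def f_and_subgrad(x):
        vals = [0.5 * (x @ A @ x) + b @ x + c for A, b, c in zip(As, bs, cs)]
        i = int(np.argmax(vals))
        return vals[i], As[i] @ x + bs[i]
    return f_and_subgrad

f = max_of_quadratics([np.eye(2), 2 * np.eye(2)],
                      [np.zeros(2), np.ones(2)], [0.0, -1.0])
val, g = f(np.array([1.0, -1.0]))
```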

Journal: :Math. Program. 2009
Yurii Nesterov

In this paper we present a new approach for constructing subgradient schemes for different types of nonsmooth problems with convex structure. Our methods are primal-dual since they are always able to generate a feasible approximation to the optimum of an appropriately formulated dual problem. Besides other advantages, this useful feature provides the methods with a reliable stopping criterion. T...
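A minimal sketch of simple dual averaging over a Euclidean ball, one basic scheme in this primal-dual family (the ball constraint, prox center, and constants are illustrative assumptions):

```python
import numpy as np

def dual_averaging(subgrad, x0, radius=10.0, n_iters=1000, gamma=1.0):
    """Simple dual averaging: the next point minimizes the sum of all past
    linearizations plus a growing proximal term (beta_k/2) * ||x - x_0||^2,
    restricted here to the ball ||x - x_0|| <= radius."""
    x0 = np.asarray(x0, dtype=float)
    x, z = x0.copy(), np.zeros_like(x0)    # z accumulates subgradients
    x_avg = x0.copy()
    for k in range(1, n_iters + 1):
        z += subgrad(x)
        x = x0 - z / (gamma * np.sqrt(k))  # minimizer of the quadratic model
        nrm = np.linalg.norm(x - x0)
        if nrm > radius:                   # project back onto the ball
            x = x0 + (x - x0) * (radius / nrm)
        x_avg += (x - x_avg) / (k + 1)     # averaged primal iterate
    return x_avg
```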

2015
Sashank J. Reddi Ahmed Hefny Carlton Downey Avinava Dubey Suvrit Sra

We develop randomized block coordinate descent (CD) methods for linearly constrained convex optimization. Unlike other large-scale CD methods, we do not assume the constraints to be separable, but allow them to be coupled linearly. To our knowledge, ours is the first CD method that allows linear coupling constraints, without making the global iteration complexity have an exponential dependence on ...
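The classic device for a single non-separable linear constraint such as sum(x) = b is to update two coordinates at a time along a direction that preserves the constraint. The sketch below shows only that device (with a fixed step size and a full gradient call for brevity), not the paper's method or its complexity analysis:

```python
import numpy as np

def cd_with_sum_constraint(grad, x0, step=0.1, n_iters=5000, seed=0):
    """Randomized 2-coordinate descent for min f(x) s.t. sum(x) = b.
    Moving along e_i - e_j leaves sum(x) unchanged, so every iterate stays
    feasible even though the constraint couples all coordinates."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iters):
        i, j = rng.choice(x.size, size=2, replace=False)
        g = grad(x)
        delta = step * (g[i] - g[j])  # descent along the feasible direction
        x[i] -= delta
        x[j] += delta
    return x
```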
