Search results for: nonsmooth convex optimization problem

Number of results: 1,134,849

2015
Qi Deng, Guanghui Lan, Anand Rangarajan

Block coordinate descent methods and stochastic subgradient methods have been extensively studied in optimization and machine learning. By combining randomized block sampling with stochastic subgradient methods based on dual averaging ([22, 36]), we present stochastic block dual averaging (SBDA)—a novel class of block subgradient methods for convex nonsmooth and stochastic optimization. SBDA re...
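
For orientation, here is a minimal Python sketch of the block dual averaging idea described above. It is not the authors' SBDA algorithm: the step-size rule, the subgradient oracle, and the test objective are illustrative assumptions.

```python
import numpy as np

def block_dual_averaging(subgrad, x0, n_blocks, steps, gamma=1.0, rng=None):
    """Sketch: at each step, sample one coordinate block, accumulate the
    subgradient on that block in a dual vector z, and map z back to the
    primal iterate via the usual dual-averaging rule x = -z / (gamma*sqrt(t))."""
    rng = np.random.default_rng() if rng is None else rng
    x = x0.astype(float).copy()
    z = np.zeros_like(x)                        # accumulated (block) subgradients
    blocks = np.array_split(np.arange(x.size), n_blocks)
    avg, total = np.zeros_like(x), 0
    for t in range(1, steps + 1):
        b = blocks[rng.integers(n_blocks)]      # randomized block sampling
        g = subgrad(x)                          # (stochastic) subgradient oracle
        z[b] += g[b]                            # dual averaging on the sampled block only
        x = -z / (gamma * np.sqrt(t))           # primal map; sqrt(t) scaling is standard
        avg, total = avg + x, total + 1
    return avg / total                          # averaged iterate, usual for nonsmooth rates

# Usage: minimize the nonsmooth f(x) = ||x - 1||_1 over R^10.
x_hat = block_dual_averaging(lambda x: np.sign(x - 1.0), np.zeros(10),
                             n_blocks=5, steps=5000)
```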

In this paper, using Clarke's generalized directional derivative and dI-invexity, we introduce new concepts of nonsmooth K-α-dI-invex and generalized type I univex functions over cones for a nonsmooth vector optimization problem with cone constraints. We obtain some sufficient optimality conditions and Mond-Weir type duality results under the aforementioned generalized invexity and type I cone-univexi...
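
For context, below is the classical Mond-Weir dual pair, stated here with Clarke subdifferentials for a scalar locally Lipschitz problem. This is orientation only: the paper itself treats cone constraints and vector objectives, with the invexity/univexity hypotheses as defined there.

```latex
% Classical Mond-Weir dual pair with Clarke subdifferentials \partial^{\circ}
% (scalar sketch; the paper's setting is cone-constrained and vector-valued).
\begin{align*}
\text{(P)}\quad   & \min_{x}\ f(x) \quad \text{s.t.}\ g(x) \le 0, \\
\text{(MWD)}\quad & \max_{u,\lambda}\ f(u) \quad \text{s.t.}\
  0 \in \partial^{\circ} f(u) + \sum_{i} \lambda_i\, \partial^{\circ} g_i(u),
  \quad \lambda_i\, g_i(u) \ge 0,\ \ \lambda \ge 0.
\end{align*}
```

Weak duality, f(x) ≥ f(u) for feasible pairs, is precisely what generalized invexity assumptions of this kind are designed to secure.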

Journal: SIAM Journal on Optimization, 2005
Yurii Nesterov

In this paper we introduce a new primal-dual technique for convergence analysis of gradient schemes for nonsmooth convex optimization. As an example of its application, we derive a primal-dual gradient method for a special class of structured nonsmooth optimization problems, which ensures a rate of convergence of order O(1/k), where k is the iteration count. Another example is a gradient sche...
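
A sketch of the max-type structure such primal-dual gradient schemes typically exploit (the paper's exact problem class may be broader):

```latex
% Structured nonsmooth model and the resulting rate (illustrative):
\[
f(x) \;=\; \hat f(x) \;+\; \max_{u \in U}\ \bigl\{ \langle Ax,\, u \rangle - \hat\phi(u) \bigr\},
\qquad
f(\bar x_k) - f^{\star} \;\le\; O\!\left(\frac{1}{k}\right),
\]
% where \hat f is smooth convex, U is compact convex, \hat\phi is convex,
% and \bar x_k is the iterate after k primal-dual gradient steps.
```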

2012
Nadja Harms, Christian Kanzow, Oliver Stein

This article studies differentiability properties of a reformulation of a player-convex generalized Nash equilibrium problem as a constrained and possibly nonsmooth minimization problem. Using several results from parametric optimization, we show that, apart from exceptional cases, all locally minimal points of the reformulation are differentiability points of the objective function. This ju...
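
For context, reformulations of this kind commonly build on a Nikaido-Isoda-type merit function; the following is a sketch, while the paper works with a regularized variant under player convexity.

```latex
% Nikaido-Isoda-type merit function for a GNEP with player objectives
% \theta_\nu and feasible-strategy map \Omega (sketch):
\[
\Psi(x,y) \;=\; \sum_{\nu=1}^{N} \bigl[\, \theta_\nu(x^{\nu}, x^{-\nu})
              - \theta_\nu(y^{\nu}, x^{-\nu}) \,\bigr],
\qquad
V(x) \;=\; \sup_{y \,\in\, \Omega(x)} \Psi(x,y).
\]
```

Then V ≥ 0 on the feasible set and the normalized equilibria are exactly the feasible points with V(x) = 0, so equilibria can be computed by minimizing the possibly nonsmooth function V.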

Journal: J. Optimization Theory and Applications, 2016
Duy V. N. Luong, Panos Parpas, Daniel Rueckert, Berç Rustem

Large-scale nonsmooth convex optimization is a common problem in a range of computational areas, including machine learning and computer vision. Problems in these areas contain special domain structures and characteristics, and exploiting these structures can significantly reduce the computational burden. We present a weighted Mirror Descent method to so...
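
A minimal sketch of a weighted mirror-descent step, assuming a diagonally weighted quadratic mirror map; the paper's actual weighting scheme and domain handling may differ.

```python
import numpy as np

def weighted_mirror_descent(subgrad, w, x0, steps, step=0.1):
    """Sketch of mirror descent with the weighted mirror map
    psi(x) = 0.5 * sum_i w_i * x_i**2 (weights are an illustrative way to
    encode domain structure). The mirror step then reduces to a gradient
    step preconditioned by 1/w."""
    x, iterates = x0.astype(float).copy(), []
    for _ in range(steps):
        g = subgrad(x)
        x = x - step * g / w         # mirror step under the weighted quadratic psi
        iterates.append(x.copy())
    return np.mean(iterates, axis=0) # averaged iterate, standard for nonsmooth MD

# Usage: minimize f(x) = ||x||_1 + 0.5*||x - a||_2^2 with hand-picked weights.
a = np.linspace(-1.0, 1.0, 8)
sg = lambda x: np.sign(x) + (x - a)
x_hat = weighted_mirror_descent(sg, w=np.full(8, 2.0), x0=np.zeros(8), steps=2000)
```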

Journal: Optimization Methods and Software, 2015
Heinz H. Bauschke, Warren Hare, Walaa M. Moursi

We consider the minimization of a nonsmooth convex function over a compact convex set, subject to a nonsmooth convex constraint. We work in the setting of derivative-free optimization (DFO), assuming that the objective and constraint functions are available through a black box that provides function values for a lower-C2 representation of the functions. Our approach is based on a DFO adaptation of...
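
The following is not the authors' method, but a generic derivative-free baseline for the same oracle setting, using only black-box function values; the objective, constraint, exact-penalty constant, and the Nelder-Mead solver are all illustrative choices.

```python
import numpy as np
from scipy.optimize import minimize

def f(x):                       # hypothetical nonsmooth convex objective (black box)
    return np.abs(x).sum()

def g(x):                       # hypothetical nonsmooth convex constraint: g(x) <= 0
    return np.linalg.norm(x - 1.0) - 0.5

def penalized(x, rho=10.0):     # exact-penalty merge of objective and constraint
    return f(x) + rho * max(g(x), 0.0)

# Nelder-Mead uses only function values, matching the DFO oracle assumption
# (it is a heuristic on nonsmooth problems, unlike the paper's tailored scheme).
res = minimize(penalized, x0=np.zeros(3), method="Nelder-Mead")
print(res.x, f(res.x), g(res.x))
```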

Journal: J. Optimization Theory and Applications, 2014
Shashi Kant Mishra, B. B. Upadhyay, Le Thi Hoai An

This paper deals with the minimization of a class of nonsmooth pseudolinear functions over a closed and convex set subject to linear inequality constraints. We establish several Lagrange multiplier characterizations of the solution set of the minimization problem by using the properties of locally Lipschitz pseudolinear functions. We also consider a constrained nonsmooth vector pseudolinear opt...
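
For reference, the standard Clarke-sense definitions behind this kind of analysis, with f°(x; d) denoting Clarke's generalized directional derivative:

```latex
\[
f \ \text{is pseudoconvex:}\quad
f(y) < f(x) \;\Longrightarrow\; f^{\circ}(x;\, y - x) < 0
\quad \text{for all } x, y,
\]
\[
f \ \text{is pseudolinear} \;:\Longleftrightarrow\;
f \ \text{and} \ {-f} \ \text{are both pseudoconvex.}
\]
```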

Journal: J. Computational and Applied Mathematics, 2014
Gonglin Yuan, Zengxin Wei, Guoyin Li

The conjugate gradient (CG) method is one of the most popular methods for solving smooth unconstrained optimization problems, due to its simplicity and low memory requirements. However, the use of CG methods has so far been mainly restricted to smooth optimization problems. The purpose of this paper is to present efficient conjugate gradient-type methods for solving nonsmooth optimization probl...
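
One common route to a CG-type method for nonsmooth problems, which may differ from the authors' construction, is to run a conjugate-gradient-style iteration on a smoothed (Moreau envelope / Huber) surrogate. A minimal sketch with an assumed fixed step size:

```python
import numpy as np

def huber_grad(x, mu=1e-2):
    """Gradient of the Huber smoothing of |x| (the Moreau envelope of |.|),
    a standard way to hand a nonsmooth term to a smooth solver."""
    return np.where(np.abs(x) <= mu, x / mu, np.sign(x))

def cg_smoothed_l1(A, b, lam=0.1, steps=200, alpha=5e-3):
    """Sketch of a Polak-Ribiere (PRP+) CG-type iteration on the smoothed
    objective 0.5*||Ax - b||^2 + lam * sum_i Huber(x_i). Illustrative only:
    the paper's CG variants and line search may differ."""
    x = np.zeros(A.shape[1])
    grad = lambda x: A.T @ (A @ x - b) + lam * huber_grad(x)
    g = grad(x)
    d = -g                                            # initial steepest-descent direction
    for _ in range(steps):
        x = x + alpha * d                             # fixed step along the CG direction
        g_new = grad(x)
        beta = max(0.0, g_new @ (g_new - g) / (g @ g + 1e-12))  # PRP+ coefficient
        d = -g_new + beta * d                         # conjugate-gradient-type direction
        g = g_new
    return x

# Usage on a small random least-squares + l1 instance.
rng = np.random.default_rng(0)
A, b = rng.standard_normal((20, 10)), rng.standard_normal(20)
x_hat = cg_smoothed_l1(A, b)
```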

[Chart: number of search results per year]
