Search results for: nonsmooth convex optimization problem

Number of results: 1134849

2007
Vinay Kumar, Paul I. Barton

A new approach is proposed for finding all solutions of systems of nonlinear equations with bound constraints. The zero finding problem is converted to a global optimization problem whose global minima with zero objective value, if any, correspond to all solutions of the initial problem. A branch-and-bound algorithm is used with McCormick’s nonsmooth convex relaxations to generate lower bounds....
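To make the reformulation concrete, here is a minimal Python sketch of the idea in this abstract: the system F(x) = 0 is recast as global minimization of the squared residual, and a branch-and-bound loop discards boxes whose objective lower bound is strictly positive. Plain interval arithmetic stands in for McCormick's nonsmooth convex relaxations, and all names (interval_sq, branch_and_bound, the toy system) are illustrative assumptions, not the authors' code.

```python
# Recast a square system F(x) = 0 with bound constraints as
#   min_x  sum_i F_i(x)^2
# and enumerate all small boxes whose lower bound can still reach zero.
# Interval bounds are used here in place of McCormick relaxations.

def interval_sq(lo, hi):
    """Exact range of t**2 for t in [lo, hi]."""
    if lo <= 0.0 <= hi:
        return 0.0, max(lo * lo, hi * hi)
    cands = (lo * lo, hi * hi)
    return min(cands), max(cands)

def branch_and_bound(F_interval, box, tol=1e-6, max_boxes=100000):
    """Return small boxes that may contain a root of F.

    F_interval(box) must return (lo, hi) enclosures of each F_i over the box;
    boxes whose lower bound on sum_i F_i^2 is positive are fathomed.
    """
    candidates, stack = [], [box]
    while stack and max_boxes > 0:
        max_boxes -= 1
        cur = stack.pop()
        lower = sum(interval_sq(lo, hi)[0] for lo, hi in F_interval(cur))
        if lower > 0.0:          # no root possible in this box
            continue
        width, k = max((hi - lo, i) for i, (lo, hi) in enumerate(cur))
        if width < tol:          # box small enough: report as candidate root
            candidates.append(cur)
            continue
        lo, hi = cur[k]
        mid = 0.5 * (lo + hi)    # bisect the widest coordinate
        for piece in ((lo, mid), (mid, hi)):
            child = list(cur)
            child[k] = piece
            stack.append(child)
    return candidates

# Toy system: F1 = x^2 + y^2 - 1, F2 = x - y (roots at +-(1/sqrt(2), 1/sqrt(2))).
def F_interval(box):
    (xl, xu), (yl, yu) = box
    x2l, x2u = interval_sq(xl, xu)
    y2l, y2u = interval_sq(yl, yu)
    return [(x2l + y2l - 1.0, x2u + y2u - 1.0), (xl - yu, xu - yl)]

print(len(branch_and_bound(F_interval, [(-2.0, 2.0), (-2.0, 2.0)], tol=1e-3)))
```

The surviving boxes cluster around both roots, which is the point of the reformulation: every solution of the original system, not just one, corresponds to a global minimizer with zero objective value.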

Journal: Mathematics 2022

We take up a nonsmooth multiobjective optimization problem with tangentially convex objective and constraint functions. By employing a suitable constraint qualification, we formulate both necessary and sufficient optimality conditions for (local) quasi-efficient solutions in terms of tangential subdifferentials. Furthermore, under generalized convexity assumptions, we state strong, weak, and converse duality relations ...
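For readers unfamiliar with the terminology above, the following sketch recalls the standard definitions involved; it is stated on general knowledge, not quoted from this paper.

```latex
% A function f is tangentially convex at x when its directional derivative
% d -> f'(x; d) is finite and convex; its tangential subdifferential is then
\[
  \partial_T f(x) \;=\; \bigl\{\, \xi \in \mathbb{R}^n : \langle \xi, d\rangle \le f'(x; d)
      \ \text{for all } d \in \mathbb{R}^n \,\bigr\},
  \qquad
  f'(x; d) \;=\; \lim_{t \downarrow 0} \frac{f(x + t d) - f(x)}{t}.
\]
```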

Journal: Math. Program. 2015
Yurii Nesterov

In this paper, we present new methods for black-box convex minimization. They do not need to know in advance the actual level of smoothness of the objective function. Their only essential input parameter is the required accuracy of the solution. At the same time, for each particular problem class they automatically ensure the best possible rate of convergence. We confirm our theoretical results...
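A minimal sketch of how such a parameter-free ("universal") gradient step can look: the only essential input is the target accuracy eps, and the local constant L is found by backtracking on an eps-relaxed quadratic upper bound. The function names, the L-halving rule, and the test function are assumptions for illustration, not Nesterov's exact scheme.

```python
# Gradient method that never sees a smoothness constant, only an accuracy eps.
import numpy as np

def universal_gradient(f, grad, x0, eps, iters=200, L0=1.0):
    x, L = np.asarray(x0, dtype=float), L0
    for _ in range(iters):
        g = grad(x)
        while True:
            x_new = x - g / L                      # candidate gradient step
            # Accept when the quadratic model, relaxed by eps/2, is an upper bound.
            if f(x_new) <= f(x) + g @ (x_new - x) + 0.5 * L * np.sum((x_new - x) ** 2) + 0.5 * eps:
                break
            L *= 2.0                               # model too optimistic: increase L
        x, L = x_new, max(L0, 0.5 * L)             # allow L to decrease again
    return x

# Example: a convex function with non-Lipschitz gradient, handled without
# knowing its level of smoothness in advance.
f = lambda x: np.sum(np.abs(x) ** 1.5)
grad = lambda x: 1.5 * np.sign(x) * np.abs(x) ** 0.5
print(universal_gradient(f, grad, x0=[3.0, -2.0], eps=1e-3))
```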

2013
Hua Ouyang, Niao He, Long Tran, Alexander G. Gray

The Alternating Direction Method of Multipliers (ADMM) has received lots of attention recently due to the tremendous demand from large-scale and data-distributed machine learning applications. In this paper, we present a stochastic setting for optimization problems with non-smooth composite objective functions. To solve this problem, we propose a stochastic ADMM algorithm. Our algorithm applies...
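A hedged sketch of a stochastic ADMM iteration of this flavor, applied to an l1-regularized logistic-loss problem written as min f(x) + lam*||z||_1 subject to x - z = 0. The single-sample gradient, the step-size rule, the data, and all names are assumptions for illustration, not the algorithm of the paper.

```python
# Stochastic ADMM sketch: the x-update linearizes the loss with a
# single-sample gradient and adds a diminishing proximal term.
import numpy as np

rng = np.random.default_rng(0)
n, d, lam, rho = 200, 10, 0.1, 1.0
A = rng.normal(size=(n, d))
w_true = np.concatenate([rng.normal(size=3), np.zeros(d - 3)])
b = np.sign(A @ w_true + 0.1 * rng.normal(size=n))

def stoch_grad(x, i):
    """Gradient of the logistic loss on a single sample i."""
    margin = b[i] * (A[i] @ x)
    return -b[i] * A[i] / (1.0 + np.exp(margin))

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x, z, y = np.zeros(d), np.zeros(d), np.zeros(d)
for k in range(1, 5001):
    eta = 1.0 / np.sqrt(k)                      # diminishing proximal step
    g = stoch_grad(x, rng.integers(n))          # single-sample gradient
    # x-step: min <g, x> + <y, x - z> + rho/2 ||x - z||^2 + ||x - x_k||^2 / (2 eta)
    x = (rho * z - y - g + x / eta) / (rho + 1.0 / eta)
    z = soft(x + y / rho, lam / rho)            # proximal step on the l1 term
    y = y + rho * (x - z)                       # dual ascent on x - z = 0
print(np.round(z, 2))
```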

Journal: CoRR 2017
Qinghua Liu, Xinyue Shen, Yuantao Gu

Linearized alternating direction method of multipliers (ADMM), as an extension of ADMM, has been widely used to solve linearly constrained problems in signal processing, machine learning, communications, and many other fields. Despite its broad applications in non-convex optimization, for a great number of non-convex and non-smooth objective functions, its theoretical convergence guarantee is stil...
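For contrast with the plain stochastic ADMM above, a small sketch of the linearization idea: the quadratic coupling term of the augmented Lagrangian is replaced by its linearization at the current iterate plus a proximal term, so the x-update stays in closed form even when A is not the identity. The convex test problem, the step size, and the names are assumptions; the cited work targets far more general non-convex, non-smooth objectives.

```python
# Linearized ADMM sketch for
#   min_x  0.5 * ||x - c||^2 + lam * ||z||_1   s.t.  A x = z,
# with proximal step tau <= 1 / (rho * ||A||^2).
import numpy as np

rng = np.random.default_rng(1)
m, d, lam, rho = 30, 15, 0.5, 1.0
A = rng.normal(size=(m, d))
c = rng.normal(size=d)
tau = 1.0 / (rho * np.linalg.norm(A, 2) ** 2)

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x, z, u = np.zeros(d), np.zeros(m), np.zeros(m)
for _ in range(500):
    r = A @ x - z + u                                        # scaled residual
    x = (c - rho * (A.T @ r) + x / tau) / (1.0 + 1.0 / tau)  # linearized + proximal x-step
    z = soft(A @ x + u, lam / rho)                           # exact proximal z-step
    u = u + A @ x - z                                        # scaled dual update
print(np.round(np.linalg.norm(A @ x - z), 4))                # constraint residual, tends to 0
```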

Journal: Axioms 2023

This paper is devoted to the investigation of optimality conditions and saddle point theorems for robust approximate quasi-weak efficient solutions of a nonsmooth uncertain multiobjective fractional semi-infinite optimization problem (NUMFP). Firstly, a necessary condition is established by using properties of the Gerstewitz function. Furthermore, a kind of pseudo/quasi-convex function is defined for (NUMFP), under its...
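For reference, one common form of the Gerstewitz (Tammer) scalarizing function used in such arguments is recalled below; this is standard material stated on general knowledge, not a quotation from the paper.

```latex
% Given a closed convex cone C with nonempty interior and a direction e in int C,
% one common form of the Gerstewitz scalarizing function is
\[
  \xi_{C,e}(y) \;=\; \inf\{\, t \in \mathbb{R} : y \in t e - C \,\},
\]
% which is continuous and monotone with respect to the ordering induced by C,
% and is used to scalarize (approximate) efficiency notions such as the
% quasi-weak efficiency above.
```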

Journal: Axioms 2022

Nonconvex and nonsmooth optimization problems have been attracting increasing attention in recent years in image processing and machine learning research, and algorithms based on a reweighted step are widely used in many applications. In this paper, we propose a new, extended version of the iterative convex majorization–minimization method (ICMM) for solving such minimization problems, which involves several well-known methods. To pr...
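A classical instance of this majorization-minimization-with-reweighting idea, sketched in Python under assumed data and parameter choices (it is not the paper's ICMM): a concave log penalty is repeatedly majorized by a weighted l1 norm, and each convex surrogate is solved by a few ISTA steps.

```python
# Minimize  0.5 ||A x - b||^2 + lam * sum_i log(1 + |x_i| / eps)
# by majorization-minimization with a reweighted l1 surrogate.
import numpy as np

rng = np.random.default_rng(2)
m, d, lam, eps = 40, 80, 0.1, 0.1
A = rng.normal(size=(m, d)) / np.sqrt(m)
x_true = np.zeros(d)
x_true[:5] = 3.0 * rng.normal(size=5)
b = A @ x_true + 0.01 * rng.normal(size=m)

step = 1.0 / np.linalg.norm(A, 2) ** 2          # ISTA step for the smooth part

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x = np.zeros(d)
for outer in range(10):                          # majorization-minimization loop
    w = lam / (eps + np.abs(x))                  # weights: slope of the concave penalty
    for inner in range(100):                     # solve the weighted-l1 surrogate (ISTA)
        grad = A.T @ (A @ x - b)
        x = soft(x - step * grad, step * w)
    # each outer step decreases the original nonconvex objective (MM property)
print(np.count_nonzero(np.abs(x) > 1e-3))        # ideally ~5 nonzeros are recovered
```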

Journal: Soft Computing 2022

Abstract. In this paper, the convex nonsmooth optimization problem with a fuzzy objective function and both inequality and equality constraints is considered. The Karush–Kuhn–Tucker necessary optimality conditions are proved for such an extremum problem. Further, the exact $l_{1}$ ...
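For orientation, the crisp (non-fuzzy) special case of the Karush–Kuhn–Tucker conditions mentioned here is recalled below; the fuzzy extension studied in the paper is not reproduced.

```latex
% For convex, possibly nonsmooth f and g_i and affine h_j, the KKT conditions
% at a feasible point x* read
\[
  0 \in \partial f(x^{*}) + \sum_{i=1}^{m} \lambda_i\, \partial g_i(x^{*})
        + \sum_{j=1}^{p} \mu_j\, \nabla h_j(x^{*}),
  \qquad
  \lambda_i \ge 0, \quad \lambda_i\, g_i(x^{*}) = 0, \quad i = 1,\dots,m,
\]
% where \partial denotes the convex subdifferential; under a constraint
% qualification (e.g. Slater's condition) these are necessary and sufficient
% for optimality.
```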

2013
Cong D. Dang

Abstract. In this paper, we present a new stochastic algorithm, namely the stochastic block mirror descent (SBMD) method for solving large-scale nonsmooth and stochastic optimization problems. The basic idea of this algorithm is to incorporate the block-coordinate decomposition and an incremental block averaging scheme into the classic (stochastic) mirror-descent method, in order to significant...
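An illustrative sketch of the block-coordinate idea, under assumed details (a Euclidean prox in place of a general mirror map, a fixed block partition, a toy nonsmooth finite-sum objective); it is not the SBMD method verbatim.

```python
# Minimize the nonsmooth convex finite sum  f(x) = (1/n) sum_k |a_k . x - b_k|
# by sampling one data point and ONE coordinate block per iteration and
# updating only that block with a subgradient (Euclidean mirror) step.
import numpy as np

rng = np.random.default_rng(3)
n, d, n_blocks = 500, 20, 4
A = rng.normal(size=(n, d))
x_true = rng.normal(size=d)
b = A @ x_true
blocks = np.array_split(np.arange(d), n_blocks)   # fixed block partition

x = np.zeros(d)
for k in range(1, 20001):
    i = rng.integers(n)                            # stochastic sample
    blk = blocks[rng.integers(n_blocks)]           # random block
    # Subgradient of |a_i . x - b_i| restricted to the chosen block.
    g_blk = np.sign(A[i] @ x - b[i]) * A[i, blk]
    gamma = 1.0 / np.sqrt(k)                       # diminishing step size
    x[blk] -= gamma * g_blk                        # update the chosen block only
print(round(float(np.mean(np.abs(A @ x - b))), 3)) # mean residual, shrinks toward 0
```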

[Chart: number of search results per year of publication]