Search results for: nonsmooth convex optimization problem

Number of results: 1134849

2014
Samaneh Azadi, Suvrit Sra

We study regularized stochastic convex optimization subject to linear equality constraints. This class of problems was recently also studied by Ouyang et al. (2013) and Suzuki (2013); both introduced similar stochastic alternating direction method of multipliers (SADMM) algorithms. However, the analysis of both papers led to suboptimal convergence rates. This paper presents two new SADMM method...
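To make the mechanics concrete, the following is a minimal NumPy sketch of a stochastic ADMM iteration on a lasso-style split problem: the smooth loss is sampled one data point at a time (a linearized x-update), and the nonsmooth term is handled by soft-thresholding. The problem data, the 1/sqrt(k) step-size schedule, and all constants are illustrative assumptions, not the improved SADMM methods proposed in the paper.

```python
import numpy as np

# Stochastic ADMM sketch for:
#   minimize  E_i[ 0.5*(a_i @ x - b_i)**2 ] + lam*||z||_1   subject to  x - z = 0.
rng = np.random.default_rng(0)
n, d, lam, rho = 500, 20, 0.1, 1.0
x_true = np.zeros(d); x_true[:3] = [2.0, -1.0, 0.5]
A = rng.standard_normal((n, d))
b = A @ x_true + 0.01 * rng.standard_normal(n)

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x = np.zeros(d); z = np.zeros(d); u = np.zeros(d)    # u is the scaled dual variable
for k in range(1, 5001):
    i = rng.integers(n)                              # sample one data point
    g = (A[i] @ x - b[i]) * A[i]                     # stochastic gradient of the smooth term
    eta = 1.0 / np.sqrt(k)                           # decaying step size (illustrative choice)
    x = x - eta * (g + rho * (x - z + u))            # linearized x-update of the augmented Lagrangian
    z = soft_threshold(x + u, lam / rho)             # proximal step for lam*||z||_1
    u = u + (x - z)                                  # dual update for the constraint x - z = 0

print("recovered (first 5 coords):", np.round(z[:5], 3))
```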

Journal: Numerical Algorithms 2022

This paper considers a stochastic optimization problem over the fixed point sets of quasinonexpansive mappings on Riemannian manifolds. This setting enables us to consider hierarchical problems with complicated constraint sets, such as the intersection of many closed convex sets, the set of all minimizers of a nonsmooth function, and sublevel sets of functions. We focus on adaptive learning rate algorithms, which adapt step-sizes (referred to as learning rates in machi...
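As a rough Euclidean stand-in for the setting described above, the sketch below runs an Adam-style adaptive (per-coordinate) step size on a nonsmooth convex objective while keeping iterates inside the intersection of two simple closed convex sets via alternating projections. The objective, the sets, and the hyperparameters are hypothetical choices, not the paper's Riemannian fixed-point algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 10
c = rng.standard_normal(d)

def subgrad(x):
    # subgradient of the nonsmooth objective f(x) = ||x - c||_1
    return np.sign(x - c)

def project(x):
    # approximate projection onto {||x||_2 <= 1} ∩ {x >= 0} by alternating projections
    for _ in range(20):
        x = np.maximum(x, 0.0)
        nrm = np.linalg.norm(x)
        if nrm > 1.0:
            x = x / nrm
    return x

x = np.zeros(d)
m = np.zeros(d); v = np.zeros(d)
beta1, beta2, alpha, eps = 0.9, 0.999, 0.05, 1e-8
for k in range(2000):
    g = subgrad(x)
    m = beta1 * m + (1 - beta1) * g           # first-moment estimate
    v = beta2 * v + (1 - beta2) * g * g       # second-moment estimate -> adaptive learning rate
    x = project(x - alpha * m / (np.sqrt(v) + eps))   # keep iterates feasible

print("objective value:", np.abs(x - c).sum())
```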

Journal: EURO J. Computational Optimization 2015
Martin Schmidt

Many real-world optimization models comprise nonconvex and nonlinear as well as nonsmooth functions leading to very hard classes of optimization models. In this article a new interior-point method for the special but practically relevant class of optimization problems with locatable and separable nonsmooth aspects is presented. After motivating and formalizing the problems under consideration, ...
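One standard way to expose a separable nonsmooth term to an interior-point-style solver is the epigraph reformulation sketched below: |x0| is replaced by an auxiliary variable t constrained by -t <= x0 <= t, so the solver only sees smooth and linear pieces. The toy objective and scipy's trust-constr solver are stand-ins for illustration, not the interior-point method developed in the paper.

```python
import numpy as np
from scipy.optimize import minimize, LinearConstraint

# minimize |x0| + (x1 - 2)**2   subject to  x0 + x1 >= 1
# variables: y = (x0, x1, t), with |x0| replaced by t and -t <= x0 <= t
def f(y):
    x0, x1, t = y
    return t + (x1 - 2.0) ** 2

cons = [
    LinearConstraint([[1.0, 1.0, 0.0]], lb=1.0, ub=np.inf),    # x0 + x1 >= 1
    LinearConstraint([[-1.0, 0.0, 1.0]], lb=0.0, ub=np.inf),   # t + (-x0) >= 0, i.e. t >= -x0
    LinearConstraint([[1.0, 0.0, 1.0]], lb=0.0, ub=np.inf),    # t + x0 >= 0,    i.e. t >=  x0
]
res = minimize(f, x0=np.array([0.5, 0.5, 1.0]), method="trust-constr", constraints=cons)
print(res.x)   # expect roughly x0 ≈ 0, x1 ≈ 2, t ≈ 0
```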

H. Dehghani, J. Vakili

Computing the exact ideal and nadir criterion values is a very important subject in multi-objective linear programming (MOLP) problems. In fact, these values define the ideal and nadir points as lower and upper bounds on the nondominated points. Whereas determining the ideal point is an easy task, because it is equivalent to optimizing a convex function (linear function) over a con...
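The "easy" half of that computation can be made concrete: the ideal point is obtained by minimizing each linear objective separately over the same feasible polytope. The two-objective data below is hypothetical, and the nadir point (the genuinely hard part) is deliberately not computed here.

```python
import numpy as np
from scipy.optimize import linprog

# Feasible polytope: x + y <= 4, x >= 0, y >= 0 (encoded as inequality rows)
A_ub = np.array([[1.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
b_ub = np.array([4.0, 0.0, 0.0])
objectives = [np.array([1.0, -2.0]),     # f1(x, y) = x - 2y
              np.array([-3.0, 1.0])]     # f2(x, y) = -3x + y

# Ideal point: componentwise minimum of each objective over the feasible set
ideal = []
for c in objectives:
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None), (None, None)])
    ideal.append(res.fun)
print("ideal point:", ideal)   # expect [-8.0, -12.0] for this toy data
```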

Journal: Automatica 2016
Laurent Bako, Henrik Ohlsson

In this paper, we consider the problem of identifying a linear map from measurements which are subject to intermittent and arbitrarily large errors. This is a fundamental problem in many estimation-related applications such as fault detection, state estimation in lossy networks, hybrid system identification, robust estimation, etc. The problem is hard because it exhibits some intrinsic combinato...
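A common convex surrogate for this combinatorial problem is least-absolute-deviations (L1) regression, which tolerates a sparse set of arbitrarily large measurement errors and can be written as a linear program. The sketch below illustrates that generic idea on synthetic data; it is not necessarily the exact estimator analyzed in the paper.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)
n, d = 60, 4
x_true = rng.standard_normal(d)
A = rng.standard_normal((n, d))
b = A @ x_true
b[rng.choice(n, size=6, replace=False)] += 50 * rng.standard_normal(6)   # gross errors on a few rows

# LP form of  minimize ||A x - b||_1:
#   variables [x (d), t (n)];  minimize sum(t)  s.t.  -t <= A x - b <= t
c = np.concatenate([np.zeros(d), np.ones(n)])
A_ub = np.block([[A, -np.eye(n)], [-A, -np.eye(n)]])
b_ub = np.concatenate([b, -b])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * (d + n))
print("max abs error in recovered x:", np.max(np.abs(res.x[:d] - x_true)))
```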

2016
Quanming Yao, James T. Kwok

Learning of low-rank matrices is fundamental to many machine learning applications. A state-of-the-art algorithm is the rank-one matrix pursuit (R1MP). However, it can only be used in matrix completion problems with the square loss. In this paper, we develop a more flexible greedy algorithm for generalized low-rank models whose optimization objective can be smooth or nonsmooth, general convex or...
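To make the baseline concrete, here is a minimal sketch of the rank-one matrix pursuit idea for matrix completion with the square loss: each pass adds the top singular-vector pair of the masked residual as a new rank-one basis matrix and refits the combination weights by least squares on the observed entries. The synthetic data and the dense SVD are simplifications; this is the square-loss baseline, not the generalized algorithm proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
m, n, r = 30, 20, 3
M = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))   # ground-truth low-rank matrix
mask = rng.random((m, n)) < 0.5                                  # observed entries

X = np.zeros((m, n))
basis = []
for k in range(6):
    R = np.where(mask, M - X, 0.0)                               # residual on observed entries
    U, s, Vt = np.linalg.svd(R)
    basis.append(np.outer(U[:, 0], Vt[0]))                       # new rank-one basis matrix
    # refit all combination weights by least squares on the observed entries
    Phi = np.stack([B[mask] for B in basis], axis=1)
    theta, *_ = np.linalg.lstsq(Phi, M[mask], rcond=None)
    X = sum(t * B for t, B in zip(theta, basis))

print("observed-entry RMSE:", np.sqrt(np.mean((M - X)[mask] ** 2)))
```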

Journal: CoRR 2016
Quanming Yao, James T. Kwok

Learning of low-rank matrices is fundamental to many machine learning applications. A state-of-the-art algorithm is the rank-one matrix pursuit (R1MP). However, it can only be used in matrix completion problems with the square loss. In this paper, we develop a more flexible greedy algorithm for generalized low-rank models whose optimization objective can be smooth or nonsmooth, general convex o...

Journal: J. Global Optimization 2007
Fabián Flores Bazán, Nicolas Hadjisavvas, Cristian Vera

Given a closed convex cone P with nonempty interior in a locally convex vector space Y, and a set A ⊂ Y, we provide various equivalences to the implication A ∩ (−int P) = ∅ ⟹ co(A) ∩ (−int P) = ∅, among them, to the pointedness of cone(A + int P). This allows us to establish an optimal alternative theorem, suitable for vector optimization problems. In addition, we characterize the two-dimens...
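In the abstract's own notation (Y the locally convex space, P the cone, A the set), the implication in question can be written compactly as below; the equivalence with pointedness of the generated cone is quoted from the abstract, not rederived here.

```latex
% P ⊂ Y a closed convex cone with int P ≠ ∅, and A ⊂ Y.
\[
  A \cap (-\operatorname{int} P) = \emptyset
  \;\Longrightarrow\;
  \operatorname{co}(A) \cap (-\operatorname{int} P) = \emptyset,
\]
% which, per the abstract, is equivalent (among other conditions) to the cone
% \operatorname{cone}(A + \operatorname{int} P) being pointed.
```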
