Search results for: fuzzy subgradient

Number of results: 90857

Journal: :CoRR 2015
Jinshan Zeng Wotao Yin

In this note, we extend the existing algorithms Extra [13] and subgradient-push [10] to a new algorithm ExtraPush for convex consensus optimization over a directed network. When the network is stationary, we propose a simplified algorithm called Normalized ExtraPush. These algorithms use a fixed step size, as in Extra, and accept column-stochastic mixing matrices, as in subgradient-push. W...
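
For orientation, here is a minimal sketch of the plain subgradient-push iteration that ExtraPush builds on (not the ExtraPush recursion itself, which adds an Extra-style correction term on top of this scheme). The mixing matrix A, step size alpha, and subgradient oracles are placeholder names; the fixed step size mirrors the Extra convention mentioned in the abstract, whereas classical subgradient-push uses a diminishing one.

```python
import numpy as np

# Push-sum subgradient sketch over a directed network.
# A: column-stochastic mixing matrix (A.sum(axis=0) == 1), A[i, j] is the
#    weight node i assigns to what it receives from node j.
# subgrads[i](z): returns a subgradient of node i's local objective at z.

def push_sum_subgradient(A, subgrads, x0, alpha=0.01, iters=500):
    n, d = x0.shape                  # n nodes, d-dimensional variable
    x = x0.copy()                    # push-sum numerator states
    y = np.ones(n)                   # push-sum de-biasing weights
    for t in range(iters):
        w = A @ x                    # mix numerators along in-edges
        y = A @ y                    # mix weights the same way
        z = w / y[:, None]           # de-biased local estimates
        g = np.stack([subgrads[i](z[i]) for i in range(n)])
        x = w - alpha * g            # fixed step size, as in Extra/ExtraPush
    return z                         # rows approach a common minimizer
```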

Journal: :CoRR 2017
Yang Yang Marius Pesavento

In this paper, we propose a convergent parallel best-response algorithm with exact line search for the nondifferentiable, nonconvex sparsity-regularized rank minimization problem. On the one hand, it exhibits faster convergence than subgradient algorithms and block coordinate descent algorithms. On the other hand, its convergence to a stationary point is guaranteed, while ADMM algorithms o...
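
The exact-line-search ingredient is easy to illustrate in isolation. A hedged sketch for the special case where the smooth part of the objective is quadratic (the names Q, b, d are hypothetical; the paper's nonsmooth regularizer makes the actual search more involved than this):

```python
import numpy as np

# Exact line search along a direction d when the smooth part of the
# objective is f(x) = 0.5 * x^T Q x - b^T x with Q positive definite.
# Setting d/dgamma f(x + gamma*d) = (Q(x + gamma*d) - b)^T d = 0 gives a
# closed-form optimal step, so no backtracking loop is needed.

def exact_line_search_quadratic(Q, b, x, d):
    num = (b - Q @ x) @ d            # negative slope at gamma = 0
    den = d @ (Q @ d)                # curvature along d
    return num / den if den > 0 else 0.0
```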

2007
R. T. Rockafellar

Proximal mappings, which generalize projection mappings, were introduced by Moreau and shown to be valuable in understanding the subgradient properties of convex functions. Proximal mappings subsequently turned out to be important also in numerical methods of optimization and the solution of nonlinear partial differential equations and variational inequalities. Here it is shown that, when a con...
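
To make the definition concrete: the proximal mapping is prox_f(x) = argmin_y f(y) + ½‖y − x‖², and it reduces to a projection exactly when f is the indicator of a convex set, which is the sense in which it generalizes projections. A short sketch with two standard closed forms (soft-thresholding for the ℓ1 norm, and box projection as a prox of an indicator):

```python
import numpy as np

# prox of f(y) = lam * ||y||_1: componentwise soft-thresholding
def prox_l1(x, lam):
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

# prox of the indicator of the box [lo, hi]: plain projection,
# illustrating how projections arise as a special case
def project_box(x, lo, hi):
    return np.clip(x, lo, hi)
```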

2013
Mengjie Han

The subgradient optimization method is a simple and flexible iterative algorithm for linear programming. It is much simpler than Newton’s method and can be applied to a wider variety of problems. It also converges when the objective function is nondifferentiable. Since an efficient algorithm will not only produce a good solution but also take less computing time, we always prefer a simpler algorith...
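
For reference, the textbook scheme the abstract has in mind fits in a few lines. A minimal sketch with a diminishing step size c/√k and best-iterate tracking (the step-size rule and oracle interface are illustrative choices, not taken from the abstract):

```python
import numpy as np

# Basic subgradient method: needs only a subgradient oracle, so it works
# even when the objective is nondifferentiable. Because f need not decrease
# at every iteration, we track and return the best iterate seen.

def subgradient_method(subgrad, x0, iters=1000, c=1.0):
    x = x0.copy()
    best, f_best = x.copy(), np.inf
    for k in range(1, iters + 1):
        g, fx = subgrad(x)               # subgradient and value at x
        if fx < f_best:
            best, f_best = x.copy(), fx
        x = x - (c / np.sqrt(k)) * g     # diminishing step c / sqrt(k)
    return best
```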

2013
Simon Lacoste-Julien Martin Jaggi Mark W. Schmidt Patrick Pletscher

We propose a randomized block-coordinate variant of the classic Frank-Wolfe algorithm for convex optimization with block-separable constraints. Despite its lower iteration cost, we show that it achieves a similar convergence rate in duality gap as the full Frank-Wolfe algorithm. We also show that, when applied to the dual structural support vector machine (SVM) objective, this yields an online a...
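
A hedged sketch of the randomized block-coordinate Frank-Wolfe iteration, instantiated over a product of probability simplices purely for concreteness (the constraint set, the 2n/(k + 2n) step-size rule, and all names are illustrative; the paper's dual structural SVM application has its own block structure and oracles):

```python
import numpy as np

# Each iteration touches a single random block: it calls a linear
# minimization oracle over that block's feasible set and takes a convex
# combination step. Over a simplex the oracle returns the vertex with the
# smallest gradient entry, which keeps the iteration cheap.

def bcfw(grad, n_blocks, block_dim, iters=1000, seed=0):
    rng = np.random.default_rng(seed)
    x = np.full((n_blocks, block_dim), 1.0 / block_dim)  # feasible start
    for k in range(iters):
        i = rng.integers(n_blocks)                # random block
        g_i = grad(x)[i]                          # gradient w.r.t. block i
        s = np.zeros(block_dim)                   # simplex LMO: best vertex
        s[np.argmin(g_i)] = 1.0
        gamma = 2.0 * n_blocks / (k + 2.0 * n_blocks)  # standard BCFW step
        x[i] = (1.0 - gamma) * x[i] + gamma * s
    return x
```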

2010
Michel Baes Michael Buergisser

We show that the Hedge Algorithm, a method widely used in Machine Learning, can be interpreted as a particular subgradient algorithm for minimizing a well-chosen convex function, namely a Mirror Descent Scheme. Using this reformulation, we can slightly improve the worst-case convergence guarantees of the Hedge Algorithm. Recently, Nesterov has introduced the class of Primal-Dual Subgradient Algo...
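
The Hedge update itself is short; written over cumulative losses it is visibly an exponential-weights step, i.e. mirror descent with the entropy mirror map on the simplex, which is the reformulation the note exploits (the learning rate eta and loss interface below are illustrative):

```python
import numpy as np

# Hedge / exponential weights: downweight each expert exponentially in its
# cumulative loss, then renormalize onto the probability simplex.

def hedge(losses, eta=0.1):
    # losses: (T, n) array; losses[t, i] is expert i's loss in round t
    T, n = losses.shape
    w = np.full(n, 1.0 / n)          # uniform prior over experts
    cum = np.zeros(n)
    for t in range(T):
        cum += losses[t]
        w = np.exp(-eta * cum)       # exponential weighting
        w /= w.sum()                 # normalize back to the simplex
    return w
```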

2004
Giovanni Colombo Peter R. Wolenski

We derive a formula for the minimal time function where the dynamics are linear and the target is convex. Based on this formula, we give a new proof of the semiconvexity of the minimal time function, a result originally due to Cannarsa and Sinestrari.
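
For context, the standard formulation of the minimal time function in this linear-dynamics, convex-target setting reads as follows (a sketch using the usual symbols A for the dynamics, U for the control set, and S for the convex target; these names are conventional, not taken from the truncated abstract):

```latex
% Minimal time function: the first time a trajectory started at x
% can be steered into the target S under the linear dynamics.
T(x) \;=\; \inf\Bigl\{\, t \ge 0 \;:\; \exists\, u(\cdot),\;
  u(s) \in U,\;\; \dot{y}(s) = A\,y(s) + u(s),\;\; y(0) = x,\;\;
  y(t) \in S \,\Bigr\}
```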

Journal: :Filomat 2022

In this paper, we introduce a projective inertial parallel subgradient extragradient-line algorithm for solving variational inequalities of L-Lipschitz continuous and monotone mappings, where L is unknown. We prove a strong convergence result under some mild conditions in Hilbert space. We also present numerical examples in the Euclidean space R3, compared with the Parallel-Viscosity-Type Subgradient Extragradien...
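
A hedged sketch of a single subgradient extragradient step with an Armijo-type stepsize search, the standard device for coping with an unknown Lipschitz constant L (the projection oracle proj_C and all parameter names are hypothetical; the paper's method additionally combines parallel and inertial terms, which are omitted here):

```python
import numpy as np

# One step for the variational inequality VI(F, C). The stepsize tau is
# shrunk until tau * ||F(x) - F(y)|| <= mu * ||x - y|| holds, so no
# Lipschitz constant is needed. The second projection is onto a halfspace
# T containing C rather than onto C itself (the "subgradient" trick).

def seg_step(F, proj_C, x, tau=1.0, mu=0.5, beta=0.5, max_tries=30):
    Fx = F(x)
    for _ in range(max_tries):                 # Armijo-type search on tau
        y = proj_C(x - tau * Fx)
        if tau * np.linalg.norm(F(x) - F(y)) <= mu * np.linalg.norm(x - y):
            break
        tau *= beta                            # shrink until accepted
    v = x - tau * Fx - y                       # normal of the halfspace
    z = x - tau * F(y)
    s = v @ (z - y)
    if s > 0:                                  # project z onto
        z = z - (s / (v @ v)) * v              # T = {w : <v, w - y> <= 0}
    return z
```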

[Chart: number of search results per year, filterable by publication year]