Search results for: fuzzy subgradient

Number of results: 90857

Journal: CoRR 2017
Patrick R. Johnstone, Pierre Moulin

The purpose of this manuscript is to derive new convergence results for several subgradient methods for minimizing nonsmooth convex functions with Hölderian growth. The growth condition is satisfied in many applications and includes functions with quadratic growth and functions with weakly sharp minima as special cases. To this end there are four main contributions. First, for a constant and su...
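
The core iteration behind subgradient methods of this kind can be sketched as follows. This is a generic illustration, not the specific schemes analyzed in the paper; the ℓ1 test function (the p = 1 case of Hölderian growth, i.e. a sharp minimum) and the 1/(k+1) step-size rule are placeholder choices:

```python
import numpy as np

def subgradient_method(subgrad, x0, step, n_iter=200):
    # Plain subgradient iteration: x_{k+1} = x_k - t_k * g_k, with g_k in the
    # subdifferential of f at x_k and t_k a (possibly diminishing) step size.
    x = np.asarray(x0, dtype=float)
    for k in range(n_iter):
        x = x - step(k) * subgrad(x)
    return x

# f(x) = ||x||_1 has a sharp minimum at the origin; sign(x) is a subgradient.
subgrad_l1 = lambda x: np.sign(x)
x_final = subgradient_method(subgrad_l1, [3.0, -2.0], step=lambda k: 1.0 / (k + 1))
```

With a diminishing step the iterates oscillate around the minimizer with shrinking amplitude; sharper growth conditions like the ones in the abstract allow faster guaranteed rates.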

1999
Luiz Antonio N. Lorena, Marcelo Gonçalves Narciso

The Traveling Salesman Problem (TSP) is an intensively studied classical Combinatorial Optimization problem. The Lagrangean relaxation was first applied to the TSP in 1970. The Lagrangean relaxation limit approximates what is known today as the HK (Held and Karp) bound, a very good bound (less than 1% from optimal) for a large class of symmetric instances. It became a reference bound for new heurist...
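
The Lagrangean relaxation idea the snippet refers to can be illustrated on a toy problem (this is not the HK bound itself; the small covering instance below is an assumed placeholder): move a hard constraint into the objective with a multiplier λ, then maximize the resulting dual bound by subgradient ascent.

```python
def lagrangian_dual_ascent(c, a, b, n_iter=200):
    # Relax the constraint a·x >= b in:  min c·x  s.t.  a·x >= b, 0 <= x_i <= 1.
    # Dual function: g(lam) = min_x [ c·x + lam * (b - a·x) ]; a subgradient of
    # g at lam is b - a·x(lam), where x(lam) minimizes the Lagrangian on the box.
    n = len(c)
    lam = 0.0
    for k in range(n_iter):
        # Box minimizer: set x_i = 1 exactly when its reduced cost is negative.
        x = [1.0 if c[i] - lam * a[i] < 0.0 else 0.0 for i in range(n)]
        g = b - sum(a[i] * x[i] for i in range(n))  # subgradient of the dual
        lam = max(0.0, lam + (1.0 / (k + 1)) * g)   # projected subgradient ascent
    return lam

# Toy covering instance: min 2*x1 + 3*x2  s.t.  x1 + x2 >= 1.  The dual optimum
# equals the primal optimum 2, attained for any multiplier between 2 and 3.
lam_star = lagrangian_dual_ascent(c=[2.0, 3.0], a=[1.0, 1.0], b=1.0)
```

For the TSP, the relaxed constraints are the degree constraints and the inner minimization is a minimum spanning 1-tree, but the multiplier update has the same subgradient-ascent shape.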

2016
Li Xiao, Junjie Bao, Xi Shi

In this paper, we present an improved subgradient algorithm for solving a general multi-agent convex optimization problem in a distributed way, where the agents are to jointly minimize a global objective function subject to a global inequality constraint, a global equality constraint and a global constraint set. The global objective function is a combination of local agent objective functions a...
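
A common template for distributed subgradient schemes of this type is consensus-plus-subgradient: each agent averages its estimate with its neighbors' and then takes a step along a subgradient of its own local objective. The sketch below is a generic version of that template, not the paper's improved algorithm; the mixing matrix `W` and the target values are illustrative assumptions:

```python
import numpy as np

def distributed_subgradient(subgrads, W, x0, alpha=0.05, n_iter=1000):
    # Row i of X is agent i's local estimate. One round: mix estimates through
    # the doubly stochastic weight matrix W (consensus step), then take a local
    # subgradient step on the agent's own objective term.
    X = np.array(x0, dtype=float)
    for _ in range(n_iter):
        X = W @ X                       # consensus (averaging) step
        for i, sg in enumerate(subgrads):
            X[i] -= alpha * sg(X[i])    # local subgradient step
    return X

# Three agents jointly minimize sum_i |x - t_i| with targets t = (-1, 0, 1);
# the median target, x = 0, minimizes the global sum.
targets = [-1.0, 0.0, 1.0]
subgrads = [lambda x, t=t: np.sign(x - t) for t in targets]
W = np.full((3, 3), 1.0 / 3.0)          # complete-graph averaging weights
X_final = distributed_subgradient(subgrads, W, x0=[[5.0], [-5.0], [0.0]])
```

Handling the global inequality and equality constraints mentioned in the abstract typically adds projection or primal-dual steps on top of this basic pattern.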

Journal: Computational Optimization and Applications 2008

Journal: IEEE Control Systems Letters 2022

In this letter we consider a distributed stochastic optimization framework in which agents in a network aim to cooperatively learn an optimal network-wide policy. The goal is to compute local functions that minimize the expected value of a given cost, subject to individual constraints and average coupling constraints. In order to handle the challenges of this context, we resort to a Lagrangian duality approach that allows us to derive an assoc...

Journal: SIAM Journal on Optimization 2014
Angelia Nedic, Soomin Lee

This paper considers stochastic subgradient mirror-descent method for solving constrained convex minimization problems. In particular, a stochastic subgradient mirror-descent method with weighted iterate-averaging is investigated and its per-iterate convergence rate is analyzed. The novel part of the approach is in the choice of weights that are used to construct the averages. Through the use o...
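
In the Euclidean case, mirror descent reduces to the ordinary projected stochastic subgradient method, so the weighted-averaging idea can be sketched directly. The specific weights (proportional to k + 1), the 1/√(k+1) steps, and the noise model below are illustrative assumptions, not the paper's exact choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def weighted_averaged_sgd(stoch_subgrad, x0, n_iter=2000):
    # Euclidean-case stochastic subgradient (mirror descent with the squared
    # 2-norm as distance-generating function) with iterate averaging whose
    # weights grow with the iteration index, emphasizing later iterates.
    x = np.asarray(x0, dtype=float)
    avg, wsum = np.zeros_like(x), 0.0
    for k in range(n_iter):
        g = stoch_subgrad(x)
        x = x - (1.0 / np.sqrt(k + 1)) * g
        w = k + 1.0                      # weight of the k-th iterate
        wsum += w
        avg += (w / wsum) * (x - avg)    # running weighted average of iterates
    return avg

# Noisy subgradient oracle for f(x) = ||x||_1: true sign plus zero-mean noise.
noisy_sg = lambda x: np.sign(x) + 0.1 * rng.standard_normal(x.shape)
x_avg = weighted_averaged_sgd(noisy_sg, [2.0, -2.0])
```

Uniform averaging weights early, inaccurate iterates as heavily as late ones; increasing weights are one way to avoid that, which is the kind of choice the abstract highlights.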

2014
Masoud Ahookhosh

This study addresses some algorithms for solving structured unconstrained convex optimization problems using first-order information where the underlying function includes high-dimensional data. The primary aim is to develop an implementable algorithmic framework for solving problems with multiterm composite objective functions involving linear mappings using the optimal subgradient algorithm, ...

Journal: CoRR 2016
Yi Xu, Qihang Lin, Tianbao Yang

In this paper, we propose two accelerated stochastic subgradient methods for stochastic non-strongly convex optimization problems by leveraging a generic local error bound condition. The novelty of the proposed methods lies at smartly leveraging the recent historical solution to tackle the variance in the stochastic subgradient. The key idea of both methods is to iteratively solve the original ...

[Chart: number of search results per year]