Search results for: fuzzy subgradient

Number of results: 90857

Journal: CoRR 2017
Thomas Holding Ioannis Lestas

In part I we considered the problem of convergence to a saddle point of a concave-convex function via gradient dynamics and an exact characterization was given to their asymptotic behaviour. In part II we consider a general class of subgradient dynamics that provide a restriction in an arbitrary convex domain. We show that despite the nonlinear and nonsmooth character of these dynamics their ω-...
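The unconstrained saddle-point gradient dynamics of part I can be illustrated with a small Euler simulation. The quadratic function and step size below are illustrative choices, not from the paper, and the projection onto a convex domain studied in part II is omitted:

```python
import numpy as np

def saddle_gradient_flow(fx, fy, x0, y0, dt=0.01, steps=5000):
    """Euler simulation of saddle-point gradient dynamics
        x' = -df/dx,  y' = +df/dy
    for a convex-concave f(x, y): descent in the convex variable,
    ascent in the concave one."""
    x, y = float(x0), float(y0)
    for _ in range(steps):
        x, y = x - dt * fx(x, y), y + dt * fy(x, y)
    return x, y

# f(x, y) = x^2 + x*y - y^2 is strictly convex in x and strictly
# concave in y, with its unique saddle point at the origin.
fx = lambda x, y: 2 * x + y      # df/dx
fy = lambda x, y: x - 2 * y      # df/dy
x, y = saddle_gradient_flow(fx, fy, 1.0, 1.0)
```

For strictly convex-concave functions such as this one the flow converges to the saddle point; for merely bilinear couplings it can oscillate, which is the kind of asymptotic behaviour part I characterizes.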

Journal: Oper. Res. Lett. 2000
Hanif D. Sherali Gyunghyun Choi Cihan H. Tuncbilek

This paper presents a new variable target value method (VTVM) that can be used in conjunction with pure or deflected subgradient strategies. The proposed procedure assumes no a priori knowledge regarding bounds on the optimal value. The target values are updated iteratively whenever necessary, depending on the information obtained during the course of the algorithm. Moreover, convergence of the seq...
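The target-value step underlying such schemes can be sketched as follows. This is a simplified sketch with a fixed target (a Polyak-type step), whereas the VTVM of the paper updates the target iteratively; the test function is an illustrative choice:

```python
import numpy as np

def target_value_subgradient(f, subgrad, x0, target, steps=500):
    """Subgradient descent with a target-value step size.

    The step lambda_k = (f(x_k) - target) / ||g_k||^2 drives f(x_k)
    toward the target value; VTVM-style schemes additionally adjust
    the target during the run, which is omitted here."""
    x = np.asarray(x0, dtype=float)
    best = f(x)
    for _ in range(steps):
        g = subgrad(x)
        gap = f(x) - target
        if gap <= 0 or not np.any(g):
            break                      # target reached or zero subgradient
        x = x - (gap / np.dot(g, g)) * g
        best = min(best, f(x))
    return x, best

# Example: minimize f(x) = |x1| + 2|x2| (optimal value 0), target 0.
f = lambda x: abs(x[0]) + 2 * abs(x[1])
subgrad = lambda x: np.array([np.sign(x[0]), 2 * np.sign(x[1])])
x, best = target_value_subgradient(f, subgrad, [3.0, -2.0], target=0.0)
```

With the target set to the exact optimal value this is the classical Polyak step; the point of VTVM is precisely to avoid needing that a priori knowledge.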

2003
Júlíus Atlason Marina A. Epelman Shane G. Henderson

We study the problem of approximating a subgradient of a convex (or concave) discrete function that is evaluated via simulation. This problem arises, for instance, in optimization problems such as finding the minimal cost staff schedule in a call center subject to a service level constraint. There, subgradient information can be used to significantly reduce the search space. The problem of appr...

2006
Kevin Yuen Baochun Li Ben Liang

In this paper, we propose an effective distributed algorithm to solve the minimum energy data gathering (MEDG) problem in sensor networks with multiple sinks. The problem objective is to find a rate allocation on the sensor nodes and a transmission structure on the network graph, such that the data collected by the sink nodes can reproduce the field of observation, and the total energy consumed...

1996
R. Tyrrell Rockafellar

Much effort in recent years has gone into generalizing the classical Hamiltonian and Euler-Lagrange equations of the calculus of variations so as to encompass problems in optimal control and a greater variety of integrands and constraints. These generalizations, in which nonsmoothness abounds and gradients are systematically replaced by subgradients, have succeeded in furnishing necessary condi...

Journal: SIAM Journal on Optimization 2017
Hao Yu Michael J. Neely

This paper considers convex programs with a general (possibly non-differentiable) convex objective function and Lipschitz continuous convex inequality constraint functions. A simple algorithm is developed and achieves an O(1/t) convergence rate. Similar to the classical dual subgradient algorithm and the ADMM algorithm, the new algorithm has a parallel implementation when the objective and cons...

2017
Q-L Dong A Gibali D Jiang Y Tang

In this paper we study the bounded perturbation resilience of the extragradient and the subgradient extragradient methods for solving a variational inequality (VI) problem in real Hilbert spaces. This is an important property of algorithms which guarantees the convergence of the scheme under summable errors, meaning that an inexact version of the methods can also be considered. Moreover, once a...
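The two-projection extragradient method whose perturbation resilience the paper studies can be sketched as follows. The rotation operator and box constraint are illustrative choices; the subgradient extragradient variant, which replaces the second projection with a cheaper half-space projection, is not implemented here:

```python
import numpy as np

def extragradient(F, proj, x0, tau=0.1, steps=1000):
    """Extragradient sketch for a variational inequality VI(F, C):
    find x* in C with <F(x*), y - x*> >= 0 for all y in C.
    A predictor step evaluates F at the current point, a corrector
    step re-evaluates it at the predicted point."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        y = proj(x - tau * F(x))   # predictor: projected step from x
        x = proj(x - tau * F(y))   # corrector: step using F at the predictor
    return x

# Example: F is a rotation (monotone but not a gradient field), so a
# plain projected-gradient iteration cycles, while the extragradient
# iteration converges to the solution x* = 0.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
F = lambda x: A @ x
proj = lambda x: np.clip(x, -2.0, 2.0)   # C = [-2, 2]^2
x = extragradient(F, proj, [1.0, 1.0])
```

The second evaluation of F at the predicted point is what stabilizes the iteration on rotational (monotone, non-symmetric) operators.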

2006
B. Goldengorin J. Keane V. Kuzmenko Boris Goldengorin John Keane Viktor Kuzmenko

This paper investigates a model for pricing the demand for a set of goods when multiple suppliers operate discount schedules based on total business value. We formulate the buyer's decision problem as a mixed binary integer program (MIP) which is a generalization of the capacitated facility location problem (CFLP) and can be solved using Lagrangean heuristics. We have investigated commercially...

1998
M. V. Solodov S. K. Zavriev Z. Q. Luo

We present a unified framework for convergence analysis of generalized subgradient-type algorithms in the presence of perturbations. A principal novel feature of our analysis is that perturbations need not tend to zero in the limit. It is established that the iterates of the algorithms are attracted, in a certain sense, to an ε-stationary set of the problem, where ε depends on the magnitude of ...

Journal: SIAM Journal on Optimization 2016
James Renegar

A subgradient method is presented for solving general convex optimization problems, the main requirement being that a strictly-feasible point is known. A feasible sequence of iterates is generated, which converges to within user-specified error of optimality. Feasibility is maintained with a linesearch at each iteration, avoiding the need for orthogonal projections onto the feasible region (an ...
