Search results for: mollifier subgradient

Number of results: 1,200

2018
Damek Davis, Dmitriy Drusvyatskiy

In the recent paper [3], it was shown that the stochastic subgradient method applied to a weakly convex problem drives the gradient of the Moreau envelope to zero at the rate $O(k^{-1/4})$. In this supplementary note, we present a stochastic subgradient method for minimizing a convex function, with the improved rate $\widetilde{O}(k^{-1/2})$.
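A minimal sketch of a stochastic subgradient iteration with $1/\sqrt{k}$ step sizes and iterate averaging, the setting in which rates of the $\widetilde{O}(k^{-1/2})$ type are typically proved. The least-absolute-deviations objective, the sampling scheme, and all names are illustrative assumptions, not the note's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_subgradient(sub_grad, x0, steps, step_size):
    """Plain stochastic subgradient method with uniform iterate averaging."""
    x = np.asarray(x0, dtype=float).copy()
    avg = np.zeros_like(x)
    for k in range(1, steps + 1):
        g = sub_grad(x)
        x = x - step_size(k) * g      # subgradient step
        avg += (x - avg) / k          # running average of the iterates
    return avg

# Illustrative convex problem (an assumption): f(x) = (1/n) * sum_i |a_i.x - b_i|.
A = rng.standard_normal((200, 5))
x_true = rng.standard_normal(5)
b = A @ x_true

def sub_grad(x):
    i = rng.integers(len(A))                # sample one term of the sum
    return np.sign(A[i] @ x - b[i]) * A[i]  # its subgradient at x

x_hat = stochastic_subgradient(sub_grad, np.zeros(5), 20000,
                               step_size=lambda k: 1.0 / np.sqrt(k))
print(np.linalg.norm(x_hat - x_true))       # shrinks roughly like k^(-1/2)
```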

Journal: Journal of Approximation Theory, 1987

2014
Mikhail A. Bragin, Peter B. Luh, Joseph H. Yan, Nanpeng Yu, Gary A. Stern

Studies have shown that the surrogate subgradient method, used to optimize non-smooth dual functions within the Lagrangian relaxation framework, can lead to significant computational improvements as compared to the subgradient method. The key idea is to obtain surrogate subgradi...
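As a rough illustration of the surrogate idea, here is a hedged sketch on a toy separable problem: only one subproblem is re-solved per iteration, and the resulting constraint violation serves as the surrogate subgradient for the multiplier update. The toy objective, the step sizes, and the omission of the surrogate-optimality check on each step are all simplifications, not the authors' method.

```python
import numpy as np

# Toy separable problem (an assumed example):
#   minimize x1^2 + x2^2   subject to the coupling constraint x1 + x2 >= 1.
# Dualizing with a multiplier lam >= 0 gives the separable Lagrangian
#   L(x, lam) = x1^2 + x2^2 + lam * (1 - x1 - x2),
# whose i-th subproblem  min_x x^2 - lam*x  is solved by x = lam / 2.
x = np.zeros(2)
lam = 0.0
for k in range(1, 300):
    i = k % 2                        # surrogate idea: re-solve only ONE
    x[i] = lam / 2.0                 # subproblem; keep the other fixed
    g_surr = 1.0 - x.sum()           # constraint violation = surrogate subgradient
    lam = max(0.0, lam + g_surr / k) # multiplier (dual) update
print(lam, x)                        # expected near lam = 1, x = (0.5, 0.5)
```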

2015
Xian Qian, Yang Liu

Given a set of basic binary features, we propose a new L1-norm SVM-based feature selection method that explicitly selects features in their polynomial or tree kernel spaces. The efficiency comes from the anti-monotone property of the subgradients: the subgradient with respect to a combined feature can be bounded by the subgradient with respect to each of its component features, and a featur...
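A hedged sketch of how an anti-monotone bound enables pruning: the support of a conjunction of binary features is contained in the support of each component, so a subgradient-magnitude bound can only shrink as features are combined, and any candidate falling below the L1 threshold can be discarded together with all of its supersets. The data, the bound, and the threshold below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 500, 8
X = (rng.random((n, d)) < 0.3).astype(float)   # binary basic features
alpha = rng.standard_normal(n)                 # per-example subgradient weights

def bound(cols):
    """Upper bound on the subgradient magnitude w.r.t. a conjunction:
    sum of |alpha_i| over its support. The support of a conjunction is
    contained in the support of each component, so this bound can only
    shrink as features are combined (the anti-monotone property)."""
    support = X[:, list(cols)].prod(axis=1) > 0
    return np.abs(alpha[support]).sum()

lam = 8.0                                      # L1 threshold: a feature whose
frontier = [(j,) for j in range(d)             # bound is below it stays zero
            if bound((j,)) > lam]
selected = list(frontier)
while frontier:
    nxt = []
    for cols in frontier:
        for j in range(cols[-1] + 1, d):
            cand = cols + (j,)
            if bound(cand) > lam:              # anti-monotonicity: a failing
                nxt.append(cand)               # cand rules out all supersets
    selected += nxt
    frontier = nxt
print(len(selected), selected[:5])
```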

Journal: European Journal of Operational Research, 2015
Yaohua Hu, Xiaoqi Yang, Chee-Khian Sim

In this paper, we consider a generic inexact subgradient algorithm for solving a nondifferentiable quasi-convex constrained optimization problem. The inexactness stems from computation errors and noise arising in practical applications. Assuming that the computational errors and noise are deterministic and bounded, we study the effect of the inexactness on the subgradient ...
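A minimal sketch of an inexact normalized subgradient step of the kind analyzed here, on a quasi-convex (but nonconvex) toy function; the error model, the step sizes, and the objective are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
c = np.array([2.0, -1.0])                      # minimizer of the toy objective

# f(x) = sqrt(||x - c||) is quasi-convex but not convex; a normalized
# subgradient direction at x != c is (x - c) / ||x - c||.
def inexact_direction(x, eps):
    g = (x - c) / max(np.linalg.norm(x - c), 1e-12)
    e = eps * rng.uniform(-1.0, 1.0, size=2)   # bounded perturbation (the
    return g + e                               # "computation error and noise")

x = np.array([10.0, 10.0])
for k in range(1, 5000):
    dirn = inexact_direction(x, eps=0.1)
    dirn /= max(np.linalg.norm(dirn), 1e-12)   # quasi-convex methods step
    x = x - dirn / np.sqrt(k)                  # along NORMALIZED directions
print(x)   # lands near c, up to a tolerance governed by the error bound
```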

Journal: CoRR, 2017
Damek Davis, Benjamin Grimmer

In this paper, we introduce a stochastic projected subgradient method for weakly convex (i.e., uniformly prox-regular) nonsmooth, nonconvex functions, a wide class that includes the additive and convex composite classes. At a high level, the method is an inexact proximal point iteration in which the strongly convex proximal subproblems are quickly solved with a specialized stochast...
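The inexact proximal-point view can be sketched directly: each outer step approximately minimizes the strongly convex model $f(x) + \frac{\rho}{2}\|x - x_k\|^2$ with a few stochastic subgradient steps. The weakly convex test function $f(x) = |x^2 - 1|$, the choice of $\rho$, and the iteration counts below are assumptions, not the paper's specialized method.

```python
import numpy as np

rng = np.random.default_rng(2)

def stoch_subgrad_f(x):
    """Stochastic subgradient of the weakly convex f(x) = |x^2 - 1|."""
    return np.sign(x * x - 1.0) * 2.0 * x + 0.1 * rng.standard_normal()

rho = 4.0             # exceeds the weak-convexity modulus of f (which is 2)
x = 3.0
for k in range(100):  # outer inexact proximal-point iterations
    center, y = x, x
    for t in range(1, 50):        # inner loop: stochastic subgradient on the
        g = stoch_subgrad_f(y) + rho * (y - center)  # strongly convex model
        y -= g / (rho * t)        # f(y) + (rho/2)(y - center)^2
    x = y
print(x)              # approaches a stationary point (x = 1)
```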

2012
Ion Matei, John S. Baras

In this paper, we address the problem of multi-agent optimization for convex functions expressible as sums of convex functions. Each agent has access to only one function in the sum and can use only local information to update its current estimate of the optimal solution. We consider two consensus-based iterative algorithms, based on a combination of a consensus step and a subgradient decen...
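One common consensus-subgradient scheme of this flavor interleaves averaging with a doubly stochastic weight matrix and a local subgradient step. The sketch below uses a ring network and local objectives $f_i(x) = |x - c_i|$, all of which are illustrative assumptions rather than the paper's specific algorithms.

```python
import numpy as np

rng = np.random.default_rng(3)
n_agents = 5
c = rng.uniform(-5.0, 5.0, n_agents)   # agent i privately knows c_i

W = np.zeros((n_agents, n_agents))     # doubly stochastic ring weights:
for i in range(n_agents):              # average self and two neighbors
    for j in (i - 1, i, i + 1):
        W[i, j % n_agents] = 1.0 / 3.0

x = np.zeros(n_agents)                 # each agent's estimate of the optimum
for k in range(1, 3000):
    x = W @ x                          # consensus step: mix with neighbors
    x -= np.sign(x - c) / k            # local subgradient step on f_i = |x - c_i|
print(x)                               # estimates agree near the median of c,
                                       # a minimizer of sum_i |x - c_i|
```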

Journal: European Journal of Operational Research, 2020

2010
John Duchi, Elad Hazan

We present a new family of subgradient methods that dynamically incorporate knowledge of the geometry of the data observed in earlier iterations to perform more informative gradient-based learning. Metaphorically, the adaptation allows us to find needles in haystacks in the form of very predictive but rarely seen features. Our paradigm stems from recent advances in stochastic optimization and on...
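This abstract describes the adaptive scheme popularized as AdaGrad: per-coordinate step sizes scaled by the accumulated squared (sub)gradients, so rarely active but informative coordinates receive larger steps. The sketch below shows the diagonal update on an assumed least-squares problem with one rarely active feature; the problem, eta, and eps are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(4)

# Assumed least-squares problem with one rarely active ("needle") feature.
A = rng.standard_normal((400, 10))
A[:, 0] *= rng.random(400) < 0.05      # feature 0 fires on ~5% of examples
x_true = rng.standard_normal(10)
b = A @ x_true

x = np.zeros(10)
G = np.zeros(10)                       # running sum of squared (sub)gradients
eta, eps = 0.5, 1e-8
for k in range(20000):
    i = rng.integers(len(A))
    g = (A[i] @ x - b[i]) * A[i]       # stochastic gradient of 0.5*(a.x - b)^2
    G += g * g                         # per-coordinate accumulated geometry
    x -= eta * g / (np.sqrt(G) + eps)  # coordinate-wise adapted step size
print(np.abs(x - x_true))              # per-coordinate error, including the
                                       # rarely seen feature 0
```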
