Search results for: mollifier subgradient
Number of results: 1200
In the recent paper [3], it was shown that the stochastic subgradient method applied to a weakly convex problem drives the gradient of the Moreau envelope to zero at the rate O(k^{-1/4}). In this supplementary note, we present a stochastic subgradient method for minimizing a convex function, with the improved rate Õ(k^{-1/2}).
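For orientation, a minimal sketch of the baseline stochastic subgradient iteration that both rates refer to (not the note's improved scheme); the `subgrad_oracle` name, the step schedule, and the random-iterate output are illustrative assumptions:

```python
import numpy as np

def stochastic_subgradient(subgrad_oracle, x0, num_iters, step=0.01, rng=None):
    """Plain stochastic subgradient iteration x_{k+1} = x_k - alpha_k * g_k.

    subgrad_oracle(x) should return an unbiased stochastic subgradient of
    the (possibly nonsmooth, weakly convex) objective at x.
    """
    rng = rng or np.random.default_rng()
    x = np.asarray(x0, dtype=float)
    iterates = [x.copy()]
    for k in range(num_iters):
        g = subgrad_oracle(x)
        alpha = step / np.sqrt(k + 1)        # diminishing step size
        x = x - alpha * g
        iterates.append(x.copy())
    # Guarantees for weakly convex problems are usually stated for an
    # iterate drawn at random, not for the final iterate.
    return iterates[rng.integers(len(iterates))]
```

For example, passing `subgrad_oracle = lambda x: np.sign(x @ a - b) * a` for a random sample `(a, b)` would minimize an expected absolute loss.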
Mikhail A. Bragin • Peter B. Luh • Joseph H. Yan • Nanpeng Yu • Gary A. Stern Communicated by Fabián Flores-Bazàn Abstract Studies have shown that the surrogate subgradient method, used to optimize non-smooth dual functions within the Lagrangian relaxation framework, can lead to significant computational improvements compared to the subgradient method. The key idea is to obtain surrogate subgradi...
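The key step the abstract refers to, a dual update driven by a surrogate subgradient obtained from a only partially optimized relaxed problem, can be sketched as follows; `approx_argmin`, `constraint`, and the step schedule are assumed helpers, not the paper's API:

```python
import numpy as np

def surrogate_lagrangian_relaxation(approx_argmin, constraint, lmbda0,
                                    num_iters, step0=1.0):
    """Sketch of the surrogate subgradient idea: the relaxed problem
    min_x f(x) + lambda^T g(x) is only approximately optimized at each
    iteration (e.g., one subproblem at a time), and the constraint
    violation at that approximate point serves as a surrogate
    subgradient for the dual multiplier update.
    """
    lmbda = np.asarray(lmbda0, dtype=float)
    x = None
    for k in range(num_iters):
        x = approx_argmin(lmbda, x)          # partial optimization only
        g_tilde = constraint(x)              # surrogate subgradient g~_k
        step = step0 / (k + 1)               # diminishing step sizes
        lmbda = np.maximum(0.0, lmbda + step * g_tilde)  # keep lambda >= 0
    return lmbda, x
```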
Given a set of basic binary features, we propose a new L1-norm SVM based feature selection method that explicitly selects features in their polynomial or tree kernel spaces. The efficiency comes from the anti-monotone property of the subgradients: the subgradient with respect to a combined feature can be bounded by the subgradient with respect to each of its component features, and a featur...
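A hedged sketch of how such an anti-monotone bound could be used to prune combined features before they are evaluated; the data structures and names here are illustrative, not the paper's:

```python
def can_prune(combined_feature, component_bounds, threshold):
    """If the subgradient magnitude of a combined feature is bounded by
    that of each component feature, then any combination whose tightest
    component bound already falls below the selection threshold can be
    skipped without ever being evaluated (Apriori-style pruning).

    combined_feature: iterable of basic feature names.
    component_bounds: dict mapping each basic feature to an upper bound
        on its subgradient magnitude.
    """
    bound = min(component_bounds[f] for f in combined_feature)
    return bound < threshold
```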
In this paper, we consider a generic inexact subgradient algorithm to solve a nondifferentiable quasi-convex constrained optimization problem. The inexactness stems from computational errors and noise arising in practical applications. Assuming that the errors and noise are deterministic and bounded, we study the effect of the inexactness on the subgradient ...
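The generic iteration being analyzed can be sketched as below, with the bounded error folded into the oracle; `noisy_oracle` and `project` are assumed names, not the paper's notation:

```python
import numpy as np

def inexact_projected_subgradient(noisy_oracle, project, x0, num_iters,
                                  step=0.1):
    """Inexact projected subgradient iteration: the oracle returns
    g_k + e_k, a subgradient corrupted by a bounded deterministic error
    e_k, and each step is projected back onto the constraint set. With
    bounded errors one typically converges only to a neighborhood of the
    optimum whose radius scales with the error bound.
    """
    x = np.asarray(x0, dtype=float)
    for k in range(num_iters):
        g = noisy_oracle(x)                  # g_k + e_k, ||e_k|| <= eps
        x = project(x - (step / np.sqrt(k + 1)) * g)
    return x
```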
In this paper, we introduce a stochastic projected subgradient method for weakly convex (i.e., uniformly prox-regular) nonsmooth, nonconvex functions, a wide class that includes the additive and convex composite classes. At a high level, the method is an inexact proximal point iteration in which the strongly convex proximal subproblems are quickly solved with a specialized stochast...
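A minimal sketch of the high-level scheme the abstract describes: an outer inexact proximal point loop whose strongly convex subproblems are approximately solved by a few inner stochastic subgradient steps. Parameter names and step schedules are illustrative assumptions:

```python
import numpy as np

def proximally_guided_subgradient(subgrad_oracle, x0, outer_iters,
                                  inner_iters, rho=1.0, step=0.01):
    """Inexact proximal point iteration for a rho-weakly convex f: each
    subproblem  min_x f(x) + (rho/2) * ||x - center||^2  is strongly
    convex (the quadratic cancels the weak convexity), so a short run of
    stochastic subgradient steps solves it to modest accuracy.
    """
    center = np.asarray(x0, dtype=float)
    for _ in range(outer_iters):
        x = center.copy()
        for k in range(inner_iters):         # inner stochastic solve
            g = subgrad_oracle(x) + rho * (x - center)
            x = x - (step / (k + 1)) * g
        center = x                           # inexact prox update
    return center
```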
In this paper we address the problem of multi-agent optimization for convex functions expressible as sums of convex functions. Each agent has access to only one function in the sum and can use only local information to update its current estimate of the optimal solution. We consider two consensus-based iterative algorithms, based on a combination of a consensus step and a subgradient decen...
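A minimal sketch of one such consensus-plus-subgradient iteration, assuming a doubly stochastic mixing matrix `W` over the agents' communication graph; the API is illustrative:

```python
import numpy as np

def consensus_subgradient(local_subgrads, W, X0, num_iters, step=0.1):
    """Each agent i first averages its neighbors' estimates with weights
    W[i, :] (consensus step), then moves along the subgradient of its own
    private summand:  x_i <- sum_j W[i,j] x_j - alpha_k * g_i(x_i).

    local_subgrads: list of callables, one per agent, each returning a
        subgradient of that agent's local function.
    X0: array of shape (num_agents, dim), one row per agent.
    """
    X = np.asarray(X0, dtype=float)
    n = len(local_subgrads)
    for k in range(num_iters):
        alpha = step / np.sqrt(k + 1)
        mixed = W @ X                        # consensus (mixing) step
        grads = np.stack([local_subgrads[i](X[i]) for i in range(n)])
        X = mixed - alpha * grads            # local subgradient step
    return X.mean(axis=0)                    # estimates agree in the limit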
We present a new family of subgradient methods that dynamically incorporate knowledge of the geometry of the data observed in earlier iterations to perform more informative gradient-based learning. Metaphorically, the adaptation allows us to find needles in haystacks in the form of very predictive but rarely seen features. Our paradigm stems from recent advances in stochastic optimization and on...
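This family of adaptive subgradient methods is widely known as AdaGrad; a minimal sketch of its diagonal variant, with illustrative parameter names:

```python
import numpy as np

def adagrad(subgrad_oracle, x0, num_iters, eta=0.1, eps=1e-8):
    """Diagonal adaptive subgradient update: per-coordinate step sizes
    shrink with the accumulated squared (sub)gradients, so rarely active
    'needle' coordinates retain large effective steps.
        G += g**2;  x <- x - eta * g / sqrt(G + eps)
    """
    x = np.asarray(x0, dtype=float)
    G = np.zeros_like(x)                     # accumulated squared grads
    for _ in range(num_iters):
        g = subgrad_oracle(x)
        G += g * g
        x = x - eta * g / np.sqrt(G + eps)   # per-coordinate scaling
    return x
```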