Search results for: mollifier subgradient

Number of results: 1200

Journal: CoRR, 2015
Wenbo Hu, Jun Zhu, Bo Zhang

Many Bayesian models involve continuous but non-differentiable log-posteriors, including sparse Bayesian methods with a Laplace prior and regularized Bayesian methods with max-margin posterior regularization that acts like a likelihood term. In analogy to the popular stochastic subgradient methods for deterministic optimization, we present the stochastic subgradient MCMC for efficient po...
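To make the idea concrete, here is a minimal sketch of a stochastic subgradient MCMC step in the style of stochastic gradient Langevin dynamics, for a Bayesian lasso-type posterior with a Laplace prior. The function and parameter names are hypothetical, and this is an illustration of the general idea rather than the paper's exact sampler.

```python
import numpy as np

def sgld_subgradient_step(w, X_batch, y_batch, n_total, lam, sigma2, eps, rng):
    """One Langevin-style step targeting log p(w | data) proportional to
    -||y - Xw||^2 / (2 * sigma2) - lam * ||w||_1. The Laplace log-prior is
    non-differentiable at 0, so a subgradient (np.sign, which returns 0 at 0)
    stands in for the gradient there."""
    batch = len(y_batch)
    # Unbiased minibatch estimate of the log-likelihood gradient.
    grad_loglik = (n_total / batch) * X_batch.T @ (y_batch - X_batch @ w) / sigma2
    # A subgradient of the Laplace log-prior -lam * ||w||_1.
    subgrad_logprior = -lam * np.sign(w)
    # Langevin update: half-step along the (sub)gradient plus Gaussian noise.
    noise = rng.normal(scale=np.sqrt(eps), size=w.shape)
    return w + 0.5 * eps * (grad_loglik + subgrad_logprior) + noise
```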

Journal: Math. Meth. of OR, 2008
Adil M. Bagirov, Asef Nazari Ganjehlou

In this paper a new algorithm for minimizing locally Lipschitz functions is developed. Descent directions in this algorithm are computed by solving a system of linear inequalities. The convergence of the algorithm is proved for quasidifferentiable semismooth functions. We present the results of numerical experiments with both regular and nonregular objective functions. We also compare the propo...
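For comparison, a common generic recipe for a descent direction of a locally Lipschitz function takes the minimum-norm element of the convex hull of a small bundle of subgradients. The sketch below implements that generic recipe via a small quadratic program; it is not the paper's linear-inequality system, just a reference point for the idea.

```python
import numpy as np
from scipy.optimize import minimize

def min_norm_descent_direction(subgradients):
    """Return the negative of the minimum-norm convex combination of a bundle
    of subgradients; when that combination is nonzero, its negative is a
    descent direction for the underlying locally Lipschitz function."""
    G = np.asarray(subgradients, dtype=float)   # shape (k, n), one subgradient per row
    k = G.shape[0]
    obj = lambda lam: 0.5 * np.dot(lam @ G, lam @ G)
    cons = ({'type': 'eq', 'fun': lambda lam: lam.sum() - 1.0},)
    res = minimize(obj, np.full(k, 1.0 / k), bounds=[(0.0, 1.0)] * k,
                   constraints=cons, method='SLSQP')
    return -(res.x @ G)

# At a kink of f(x) = |x[0]| + x[1] with x[0] = 0, x[1] > 0, the two extreme
# subgradients (1, 1) and (-1, 1) yield the descent direction (0, -1).
print(min_norm_descent_direction([[1.0, 1.0], [-1.0, 1.0]]))
```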

2009
Le Yang

In this paper, we define the geometric median of a probability measure on a Riemannian manifold, give its characterization, and give a natural condition ensuring its uniqueness. In order to compute the median in practical cases, we also propose a subgradient algorithm and prove its convergence, as well as estimate the approximation error and the rate of convergence. The convergence property of...
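In the flat (Euclidean) special case, where the manifold exponential and logarithm maps reduce to vector addition and subtraction, a subgradient algorithm for the geometric median can be sketched as below. This is a hypothetical illustration under those simplifying assumptions, not the paper's manifold algorithm.

```python
import numpy as np

def geometric_median(points, steps=1000):
    """Subgradient method for the geometric median in R^d: minimizes
    f(x) = sum_i ||x - p_i|| with diminishing steps 1/sqrt(t)."""
    P = np.asarray(points, dtype=float)
    x = P.mean(axis=0)                         # start from the centroid
    for t in range(1, steps + 1):
        diffs = x - P
        norms = np.linalg.norm(diffs, axis=1)
        mask = norms > 1e-12                   # skip the non-smooth points
        g = (diffs[mask] / norms[mask, None]).sum(axis=0)   # a subgradient
        x = x - g / np.sqrt(t)                 # diminishing step size
    return x

# The median stays near the cluster despite the outlier at (100, 100),
# illustrating its robustness compared to the mean.
print(geometric_median([(0, 0), (1, 0), (0, 1), (100, 100)]))
```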

Journal: J. Optimization Theory and Applications, 2014
José Yunier Bello Cruz, R. Díaz Millán

We propose a direct splitting method for solving nonsmooth variational inequality problems in Hilbert spaces. Weak convergence is established when the operator is the sum of two point-to-set monotone operators. The proposed method is a natural extension of the incremental subgradient method for nondifferentiable optimization, which strongly exploits the structure of the operator using ...
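As a point of reference for the incremental idea, here is a minimal sketch of the incremental subgradient method for minimizing a finite sum of convex functions, with hypothetical names; the paper's splitting method generalizes this structure-exploiting pattern to variational inequalities.

```python
import numpy as np

def incremental_subgradient(subgrads, x0, passes=100, step0=1.0):
    """Incremental subgradient method for min_x sum_i f_i(x): each inner step
    moves along a subgradient of a single component f_i, so every update is
    cheap. `subgrads` is a list of callables, each returning one subgradient
    of the corresponding f_i."""
    x = np.asarray(x0, dtype=float)
    for k in range(1, passes + 1):
        step = step0 / k                   # diminishing step size
        for g_i in subgrads:               # one cheap update per component
            x = x - step * g_i(x)
    return x

# Example: minimize |x - 1| + |x + 1| + |x|, whose minimum is at x = 0.
subgrads = [lambda x: np.sign(x - 1), lambda x: np.sign(x + 1),
            lambda x: np.sign(x)]
print(incremental_subgradient(subgrads, x0=np.array([3.0])))
```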

Journal: CoRR, 2012
Simon Lacoste-Julien, Mark W. Schmidt, Francis R. Bach

In this note, we present a new averaging technique for the projected stochastic subgradient method. By using a weighted average with a weight of t + 1 for each iterate w_t at iteration t, we obtain the convergence rate of O(1/t) with both an easy proof and an easy implementation. The new scheme is compared empirically to existing techniques, with similar performance behavior.
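A minimal sketch of the scheme, assuming a mu-strongly convex objective, steps 1/(mu * (t + 1)), and hypothetical `subgrad` and `project` oracles:

```python
import numpy as np

def projected_ssg_weighted_avg(subgrad, project, x0, mu, T, rng):
    """Projected stochastic subgradient method returning the weighted average
    sum_t (t + 1) * w_t / sum_t (t + 1), which attains an O(1/T) rate for a
    mu-strongly convex objective with steps 1 / (mu * (t + 1))."""
    x = np.asarray(x0, dtype=float)
    avg = np.zeros_like(x)
    weight_sum = 0.0
    for t in range(T):
        g = subgrad(x, rng)                   # stochastic subgradient at w_t
        x = project(x - g / (mu * (t + 1)))   # projected subgradient step
        weight = t + 1.0                      # the note's weight for iterate w_t
        weight_sum += weight
        avg += (weight / weight_sum) * (x - avg)  # running weighted average
    return avg
```

The running-average update keeps the weighted average in O(d) memory, which is what makes the scheme as easy to implement as the note claims.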

2015
J. Frédéric Bonnans

1. Exam program; 2. Lesson 1 (Jan. 20, 2015): Convex functions; 3. Lesson 2 (Feb. 3, 2015): Convex duality; 4. Lesson 3 (Feb. 10, 2015): Subgradient of expectations I; 5. Lesson 4 (Feb. 17, 2015): Subgradient of expectations II; 6. Lesson 5 (March 3, 2015); 7. Lesson 6 (March 17, 2015); 8. Lesson 7 (March 24, 2015); 9. Lesson 8 (last lesson: March 31, 2015); 10. Additional material ...

2007
Nathan D. Ratliff, J. Andrew Bagnell, Martin A. Zinkevich

Promising approaches to structured learning problems have recently been developed in the maximum-margin framework. Unfortunately, algorithms that are computationally and memory efficient enough to solve large-scale problems have lagged behind. We propose using simple subgradient-based techniques for optimizing a regularized risk formulation of these problems in both online and batch settings, a...
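A single step of such a subgradient technique on the regularized structured hinge loss might look like the sketch below; `feat` (the joint feature map) and `loss_aug_argmax` (loss-augmented inference) are assumed problem-specific oracles, and all names are hypothetical rather than the paper's API.

```python
import numpy as np

def structured_subgradient_step(w, x, y, feat, loss_aug_argmax, lam, eta):
    """One subgradient step on the regularized structured hinge loss
    lam/2 * ||w||^2 + max_y' [ loss(y, y') + w . (feat(x, y') - feat(x, y)) ].
    A maximizer y_hat of the inner problem yields a subgradient of the max."""
    y_hat = loss_aug_argmax(w, x, y)              # most violating output
    g = lam * w + feat(x, y_hat) - feat(x, y)     # a subgradient of the objective
    return w - eta * g                            # one online/batch update
```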

Chart: number of search results per year