Search results for: fuzzy subgradient
Number of results: 90857
Abstract We consider the minimization of a sum $\sum_{i=1}^{m} f_i(x)$ consisting of a large number of convex component functions $f_i$. For this problem, incremental methods consisting of gradient or subgradient iterations applied to single components have proved very effective. We propose new incremental methods, consisting of proximal iterations applied to single components, as well as combinations of gra...
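As a concrete illustration of the incremental idea in this abstract, the sketch below cycles through the components and takes a subgradient step on one $f_i$ at a time. The function names, the cyclic order, and the diminishing step rule are illustrative choices, not the paper's exact scheme; the paper's proximal variant would replace the subgradient step with a proximal step on the selected component.

```python
import numpy as np

def incremental_subgradient(subgrads, x0, steps=1000, alpha0=1.0):
    """Minimize sum_i f_i(x) by cycling through the components and taking a
    subgradient step on a single f_i per iteration (illustrative sketch)."""
    x = np.asarray(x0, dtype=float)
    m = len(subgrads)
    for k in range(steps):
        alpha = alpha0 / (k + 1)          # diminishing step size
        i = k % m                         # cyclic pass over the components
        x = x - alpha * subgrads[i](x)    # step on one component only
        # (the proximal variant would instead apply prox_{alpha f_i} here)
    return x

# Example: f_i(x) = |x - a_i|, so sum_i f_i is minimized at the median of the a_i.
a = [1.0, 2.0, 10.0]
subs = [lambda x, ai=ai: np.sign(x - ai) for ai in a]
print(incremental_subgradient(subs, x0=np.array([0.0])))   # ~ [2.], the median
```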
Many Bayesian models involve continuous but non-differentiable log-posteriors, including sparse Bayesian methods with a Laplace prior and regularized Bayesian methods with max-margin posterior regularization that acts like a likelihood term. In analogy to the popular stochastic subgradient methods for deterministic optimization, we present the stochastic subgradient MCMC for efficient po...
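A minimal sketch of what such a sampler can look like, assuming a Gaussian likelihood with a Laplace prior and a Langevin-style proposal: the drift term uses a subgradient of the non-differentiable log-posterior in place of a gradient. The function names and all parameters here are illustrative assumptions, not the paper's exact sampler.

```python
import numpy as np

rng = np.random.default_rng(0)

def subgrad_log_posterior(theta, X, y, lam):
    """Subgradient of a non-differentiable log-posterior: Gaussian likelihood
    plus Laplace prior, log p = -0.5*||y - X theta||^2 - lam*||theta||_1 + const.
    np.sign picks a valid subgradient of the |.| terms at the kinks."""
    return X.T @ (y - X @ theta) - lam * np.sign(theta)

def subgradient_langevin(theta0, X, y, lam, eps=1e-3, iters=5000):
    """Langevin-style chain whose drift uses a subgradient instead of a
    gradient: theta <- theta + (eps/2)*g + sqrt(eps)*noise."""
    theta = np.asarray(theta0, dtype=float).copy()
    samples = []
    for _ in range(iters):
        g = subgrad_log_posterior(theta, X, y, lam)
        theta = theta + 0.5 * eps * g + np.sqrt(eps) * rng.standard_normal(theta.shape)
        samples.append(theta.copy())
    return np.array(samples)

# Toy sparse-regression posterior: the Laplace prior pulls coefficients to zero.
X = rng.standard_normal((50, 3))
y = X @ np.array([1.0, 0.0, -1.0]) + 0.1 * rng.standard_normal(50)
samples = subgradient_langevin(np.zeros(3), X, y, lam=5.0)
print(samples[-1000:].mean(axis=0))   # posterior-mean estimate from the tail
```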
In this paper a new algorithm for minimizing locally Lipschitz functions is developed. Descent directions in this algorithm are computed by solving a system of linear inequalities. The convergence of the algorithm is proved for quasidifferentiable semismooth functions. We present the results of numerical experiments with both regular and nonregular objective functions. We also compare the propo...
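One generic way to obtain a descent direction from a system of linear inequalities is the small LP below: find a direction d satisfying <g_i, d> <= t for every subgradient g_i in a bundle, and make t as negative as possible. This formulation is an assumption made for illustration; the paper's exact system of inequalities may differ.

```python
import numpy as np
from scipy.optimize import linprog

def descent_direction(G):
    """Given a bundle of subgradients (rows of G), solve the inequalities
    <g_i, d> <= t for all i, minimizing t subject to the box |d_j| <= 1.
    If the optimum t* < 0, d decreases f along every bundled subgradient;
    t* = 0 signals (approximate) stationarity."""
    k, n = G.shape
    c = np.zeros(n + 1)
    c[-1] = 1.0                                # objective: minimize t
    A_ub = np.hstack([G, -np.ones((k, 1))])    # G d - t <= 0
    b_ub = np.zeros(k)
    bounds = [(-1.0, 1.0)] * n + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    return res.x[:n], res.x[-1]

# Subgradients of f(x) = |x1| + |x2| collected around the origin:
G = np.array([[1.0, 1.0], [1.0, -1.0], [-1.0, 1.0], [-1.0, -1.0]])
d, t = descent_direction(G)
print(d, t)   # t* = 0: the origin is the minimizer, so no descent direction exists
```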
In this paper, we define the geometric median of a probability measure on a Riemannian manifold, give its characterization, and give a natural condition ensuring its uniqueness. To compute the median in practical cases, we also propose a subgradient algorithm and prove its convergence, as well as estimate the approximation error and the rate of convergence. The convergence property of...
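The sketch below runs the subgradient iteration for the geometric median in Euclidean space, the flat special case of the Riemannian setting; on a general manifold the update x - alpha*g would become an exponential-map step along a geodesic. The code is an illustration of the idea, not the paper's algorithm verbatim.

```python
import numpy as np

def geometric_median(points, steps=2000, alpha0=1.0):
    """Subgradient descent for argmin_x sum_i ||x - p_i|| in R^n."""
    x = points.mean(axis=0)                     # start at the (easy) mean
    for k in range(1, steps + 1):
        diffs = x - points
        norms = np.linalg.norm(diffs, axis=1)
        # unit vectors away from each data point; 0 is a valid subgradient
        # of ||x - p_i|| when x coincides with p_i, so such terms are skipped
        g = sum(d / n for d, n in zip(diffs, norms) if n > 1e-12)
        x = x - (alpha0 / k) * g                # diminishing step size
    return x

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [5.0, 5.0]])
print(geometric_median(pts))   # a robust center, barely pulled by the outlier
```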
We propose a direct splitting method for solving nonsmooth variational inequality problems in Hilbert spaces. Weak convergence is established when the operator is the sum of two point-to-set monotone operators. The proposed method is a natural extension of the incremental subgradient method for nondifferentiable optimization, which strongly exploits the structure of the operator using ...
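For intuition, the toy iteration below treats the variational inequality VI(T, C) as the monotone inclusion 0 in T(x) + N_C(x) and splits it into a forward step on T followed by a projection onto C (the resolvent of the normal cone N_C). This forward-backward style sketch is a simplified stand-in for the paper's direct splitting method, not the method itself.

```python
import numpy as np

def projected_vi_step(x0, T, project, lam=0.1, iters=500):
    """Iterate x <- P_C(x - lam * T(x)): a forward step on the monotone
    operator T, then the projection onto C as the backward (resolvent) step."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = project(x - lam * T(x))
    return x

# Example: T(x) = x - b over C = nonnegative orthant; the VI solution
# satisfies complementarity: x >= 0, T(x) >= 0, x . T(x) = 0.
b = np.array([1.0, -2.0])
sol = projected_vi_step(np.zeros(2), lambda x: x - b, lambda z: np.maximum(z, 0.0))
print(sol)   # -> approximately [1, 0]
```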
In this note, we present a new averaging technique for the projected stochastic subgradient method. By using a weighted average with weight t + 1 for each iterate w_t at iteration t, we obtain a convergence rate of O(1/t) with both an easy proof and an easy implementation. The new scheme is compared empirically to existing techniques, with similar performance behavior.
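The averaging rule itself is simple to implement. A sketch, assuming a strongly convex objective and a 1/(t+1)-type step size (both assumptions of this illustration, not statements from the note):

```python
import numpy as np

def averaged_subgradient(subgrad, project, w0, T, alpha0=1.0):
    """Projected subgradient method returning the (t+1)-weighted average
    sum_t (t+1) w_t / sum_t (t+1) instead of the last iterate."""
    w = np.asarray(w0, dtype=float)
    avg = np.zeros_like(w)
    weight_sum = 0.0
    for t in range(T):
        w = project(w - alpha0 / (t + 1) * subgrad(w))   # projected step
        avg += (t + 1) * w                               # weight t+1 on w_t
        weight_sum += t + 1
    return avg / weight_sum

# Example: f(w) = 0.5*||w||^2 + ||w||_1 (strongly convex), minimized at 0.
sol = averaged_subgradient(lambda w: w + np.sign(w), lambda z: z,
                           w0=np.array([3.0, -2.0]), T=2000)
print(sol)   # close to [0, 0]
```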
1. Exam program
2. Lesson 1 (Jan. 20, 2015): Convex functions
3. Lesson 2 (Feb. 3, 2015): Convex duality
4. Lesson 3 (Feb. 10, 2015): Subgradient of expectations I
5. Lesson 4 (Feb. 17, 2015): Subgradient of expectations II
6. Lesson 5 (March 3, 2015)
7. Lesson 6 (March 17, 2015)
8. Lesson 7 (March 24, 2015)
9. Lesson 8 (last lesson: March 31, 2015)
10. Additional material ...
Promising approaches to structured learning problems have recently been developed in the maximum-margin framework. Unfortunately, algorithms that are computationally and memory-efficient enough to solve large-scale problems have lagged behind. We propose using simple subgradient-based techniques for optimizing a regularized risk formulation of these problems in both online and batch settings, a...
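To make the subgradient-based approach concrete, the sketch below applies it to the simplest instance: a binary hinge loss with L2 regularization. In the structured setting the margin-violation test would be replaced by loss-augmented inference over the output space, but the update has the same shape. The Pegasos-style step size is an illustrative choice, not necessarily the paper's.

```python
import numpy as np

def subgrad_regularized_risk(X, y, lam=0.1, epochs=50):
    """Stochastic subgradient descent on the regularized hinge risk
    lam/2 ||w||^2 + (1/n) sum_i max(0, 1 - y_i <w, x_i>)."""
    n, d = X.shape
    w = np.zeros(d)
    t = 0
    rng = np.random.default_rng(0)
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            alpha = 1.0 / (lam * t)              # Pegasos-style step size
            g = lam * w                          # gradient of the regularizer
            if y[i] * (X[i] @ w) < 1.0:          # margin violated
                g = g - y[i] * X[i]              # subgradient of the hinge term
            w = w - alpha * g
    return w

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 2))
y = np.sign(X @ np.array([2.0, -1.0]))
print(subgrad_regularized_risk(X, y))   # roughly aligned with (2, -1)
```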
In optimal short-term resource scheduling, decomposition and coordination methods based on Lagrangean relaxation are known to be convenient for handling resource-specific constraints and for their ability to provide duality-gap estimates. They will be used here, in the context of subgradient optimization, to deal with this problem. One cause of the erratic behavior often encountered in subgradient optimization is associ...
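A typical dual subgradient loop for this setting looks like the sketch below: solve the Lagrangean subproblem, read off the constraint violations as a subgradient of the dual function, and update the multipliers. A Polyak-type step length using a known upper bound is one common remedy for the erratic behavior the abstract alludes to; all names and the toy subproblem here are illustrative.

```python
import numpy as np

def dual_subgradient_ascent(solve_lagrangian, lam0, steps=200, ub=None):
    """Maximize the Lagrangean dual. solve_lagrangian(lam) returns
    (dual value, constraint-violation vector g); g is a subgradient of the
    dual function. With an upper bound ub on the optimum, a Polyak-type
    step (ub - value)/||g||^2 damps oscillations; otherwise 1/k is used."""
    lam = np.asarray(lam0, dtype=float)
    best = -np.inf
    for k in range(1, steps + 1):
        val, g = solve_lagrangian(lam)
        best = max(best, val)                    # best dual bound so far
        gn = g @ g
        if gn == 0.0:
            break                                # no violation: dual optimum
        step = (ub - val) / gn if ub is not None else 1.0 / k
        lam = np.maximum(lam + step * g, 0.0)    # multipliers stay nonnegative
    return lam, best

# Toy knapsack-style subproblem: min c.x over x in {0,1}^3 with a.x <= b dualized.
c, a, b = np.array([-3.0, -1.0, 2.0]), np.array([2.0, 1.0, 1.0]), 2.0
def sub(lam_vec):
    lmb = lam_vec[0]
    x = ((c + lmb * a) < 0).astype(float)        # minimizes the Lagrangean
    return float((c + lmb * a) @ x - lmb * b), np.array([a @ x - b])
lam, bound = dual_subgradient_ascent(sub, np.zeros(1))
print(lam, bound)   # bound is a lower bound on the primal optimum (-3 here)
```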