A Markovian Incremental Stochastic Subgradient Algorithm

Authors

Abstract

In this article, a stochastic incremental subgradient algorithm for the minimization of a sum of convex functions is introduced. The method sequentially uses partial subgradient information, with the sequence of subgradients determined by a general Markov chain. This makes it suitable for use in networks where the path of information flow is stochastically selected. We prove convergence of the algorithm to a weighted objective function, where the weights are given by the Cesàro limiting probability distribution of the Markov chain. Unlike previous works in the literature, this distribution is general (not necessarily uniform), allowing for greater flexibility in the method.
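As a rough illustration of the scheme the abstract describes, the Python sketch below runs an incremental subgradient iteration in which the active component function is selected by a Markov chain. All names and parameters here (markovian_incremental_subgradient, subgrads, step_size, and the toy problem) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def markovian_incremental_subgradient(subgrads, P, x0, step_size, n_iters, rng=None):
    """Sketch: incremental subgradient steps driven by a Markov chain.

    subgrads  : list of callables; subgrads[i](x) returns a subgradient of f_i at x.
    P         : row-stochastic transition matrix over the component indices.
    x0        : initial iterate.
    step_size : callable k -> alpha_k (e.g., a diminishing step size).
    """
    rng = rng or np.random.default_rng()
    m = len(subgrads)
    x = np.asarray(x0, dtype=float)
    i = rng.integers(m)                 # initial state of the chain
    for k in range(n_iters):
        g = subgrads[i](x)              # partial information: one component only
        x = x - step_size(k) * g        # incremental subgradient step
        i = rng.choice(m, p=P[i])       # next component drawn by the Markov chain
    return x

# Toy usage: minimize a weighted sum of |x - c_i| with a two-state chain
# whose Cesaro limiting distribution is (5/6, 1/6), i.e., non-uniform.
centers = [np.array([0.0]), np.array([4.0])]
subgrads = [lambda x, c=c: np.sign(x - c) for c in centers]
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])
x_final = markovian_incremental_subgradient(
    subgrads, P, np.array([10.0]), step_size=lambda k: 1.0 / (k + 1), n_iters=5000)
```

Because the toy transition matrix has a non-uniform Cesàro limiting distribution, the iterates target a non-uniformly weighted sum of the components, which is exactly the flexibility the abstract emphasizes.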


Similar Articles

Incremental Stochastic Subgradient Algorithms for Convex Optimization

This paper studies the effect of stochastic errors on two constrained incremental subgradient algorithms. The incremental subgradient algorithms are viewed as decentralized network optimization algorithms as applied to minimize a sum of functions, when each component function is known only to a particular agent of a distributed network. First, the standard cyclic incremental subgradient algorit...
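As a hedged sketch of the setup this abstract describes, the Python fragment below implements the standard cyclic incremental subgradient method with additive stochastic errors; the names (cyclic_incremental_subgradient, noise_std) are hypothetical, not the paper's code.

```python
import numpy as np

def cyclic_incremental_subgradient(subgrads, x0, step_size, n_cycles,
                                   noise_std=0.0, rng=None):
    """Cyclic incremental subgradient method with stochastic subgradient errors.

    Each outer iteration sweeps the m components in a fixed cyclic order, as if
    passing the iterate from agent to agent around a ring; each partial step
    uses an erroneous subgradient g_i + noise.
    """
    rng = rng or np.random.default_rng()
    x = np.asarray(x0, dtype=float)
    m = len(subgrads)
    for k in range(n_cycles):
        for i in range(m):                                    # fixed agent order
            g = np.asarray(subgrads[i](x), dtype=float)
            g = g + noise_std * rng.standard_normal(g.shape)  # stochastic error
            x = x - step_size(k) * g                          # partial update
    return x
```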


On Stochastic Subgradient Mirror-Descent Algorithm with Weighted Averaging

This paper considers stochastic subgradient mirror-descent method for solving constrained convex minimization problems. In particular, a stochastic subgradient mirror-descent method with weighted iterate-averaging is investigated and its per-iterate convergence rate is analyzed. The novel part of the approach is in the choice of weights that are used to construct the averages. Through the use o...
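A minimal sketch of weighted iterate-averaging follows, assuming the Euclidean mirror map (so the mirror step reduces to a projected subgradient step) and assuming averaging weights equal to the step sizes; the paper's actual weight choice may differ.

```python
import numpy as np

def subgradient_mirror_descent_weighted_avg(stoch_subgrad, project, x0,
                                            step_size, n_iters, rng=None):
    """Projected stochastic subgradient method returning a weighted average.

    Returns x_bar = (sum_k w_k x_k) / (sum_k w_k) with assumed weights
    w_k = step_size(k); stoch_subgrad(x, rng) is a noisy subgradient oracle
    and project(.) maps back onto the constraint set.
    """
    rng = rng or np.random.default_rng()
    x = np.asarray(x0, dtype=float)
    x_bar = np.array(x)
    w_sum = 0.0
    for k in range(n_iters):
        g = stoch_subgrad(x, rng)                 # noisy subgradient
        x = project(x - step_size(k) * g)         # Euclidean mirror-descent step
        w = step_size(k)                          # averaging weight (assumed)
        w_sum += w
        x_bar += (w / w_sum) * (x - x_bar)        # running weighted average
    return x_bar
```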


Stochastic Subgradient Methods

Stochastic subgradient methods play an important role in machine learning. We introduced the concepts of subgradient methods and stochastic subgradient methods in this project, discussed their convergence conditions as well as their strengths and weaknesses relative to competing methods. We demonstrated the application of (stochastic) subgradient methods to machine learning with a running example of tr...


Stochastic Subgradient MCMC Methods

Many Bayesian models involve continuous but non-differentiable log-posteriors, including the sparse Bayesian methods with a Laplace prior and the regularized Bayesian methods with max-margin posterior regularization that acts like a likelihood term. In analogy to the popular stochastic subgradient methods for deterministic optimization, we present the stochastic subgradient MCMC for efficient po...
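One way to read "stochastic subgradient MCMC" is as a Langevin-style sampler in which the gradient of the log-posterior is replaced by a subgradient; the sketch below is that assumed analog (in the spirit of stochastic gradient Langevin dynamics), not the paper's exact algorithm.

```python
import numpy as np

def subgradient_langevin(subgrad_log_post, theta0, eps, n_samples, rng=None):
    """Assumed Langevin-type sampler using subgradients of the log-posterior.

    subgrad_log_post(theta) returns a subgradient, e.g. the likelihood gradient
    plus -lam * np.sign(theta) for a (non-differentiable) Laplace prior.
    """
    rng = rng or np.random.default_rng()
    theta = np.asarray(theta0, dtype=float)
    samples = []
    for _ in range(n_samples):
        g = subgrad_log_post(theta)
        noise = rng.standard_normal(theta.shape)
        theta = theta + 0.5 * eps * g + np.sqrt(eps) * noise  # Langevin step
        samples.append(theta.copy())
    return np.array(samples)
```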


Asymptotic Behavior of a Markovian Stochastic Algorithm with Constant Step

We first derive from abstract results on Feller transition kernels that, under some mild assumptions, a Markov stochastic algorithm with constant step size ε usually has a tight family of invariant distributions ν^ε, ε ∈ (0, ε₀], whose weak limiting distributions as ε ↓ 0 are all flow-invariant for its ODE. Then the main part of the paper deals with a kind of converse: what are the possible limiti...
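For orientation, the constant-step setting can be written in standard stochastic-approximation notation (assumed here; the paper's own notation may differ):

```latex
% Constant-step stochastic algorithm and its mean ODE (standard notation):
\[
  X^{\varepsilon}_{n+1} = X^{\varepsilon}_{n}
      + \varepsilon\, H\!\left(X^{\varepsilon}_{n}, \xi_{n+1}\right),
  \qquad
  \dot{x} = h(x) := \mathbb{E}\!\left[ H(x, \xi) \right],
\]
% so that each invariant distribution \nu^{\varepsilon} of the iterate chain has
% weak limit points, as \varepsilon \downarrow 0, that are invariant for the
% flow of the ODE above.
```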



Journal

Journal Title: IEEE Transactions on Automatic Control

Year: 2023

ISSN: 0018-9286, 1558-2523, 2334-3303

DOI: https://doi.org/10.1109/tac.2021.3137274