Fast-and-Light Stochastic ADMM Appendix
Abstract
In the second equality, we use (1). In the third equality, we use E‖xᵢ − Exᵢ‖² = E‖xᵢ‖² − ‖Exᵢ‖². In the second inequality, we use ‖a + b‖² ≤ 2‖a‖² + 2‖b‖². In the last inequality, we employ the following fact [Xiao and Zhang, 2014]:

(1/n) ∑ᵢ₌₁ⁿ ‖∇fᵢ(x) − ∇fᵢ(x*)‖² ≤ 2L_max ( f(x) − f(x*) − ∇f(x*)ᵀ(x − x*) ).

Proof of Theorem 1

First, we introduce the following lemma.

Lemma 1. u* = −(1/ρ)(Aᵀ)†∇f(x*).

Proof. Consider (4) as the linear system Aᵀu = −(1/ρ)∇f(x*) in the unknown u. By [James, 1978], the solutions are given by
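The two elementary facts used in this chain, the variance decomposition E‖x − Ex‖² = E‖x‖² − ‖Ex‖² and the norm inequality ‖a + b‖² ≤ 2‖a‖² + 2‖b‖², can be checked numerically. A minimal plain-Python sketch (the sample size, dimension, and vectors are arbitrary choices for illustration):

```python
import random

random.seed(0)
n = 1000
xs = [[random.gauss(0, 1) for _ in range(3)] for _ in range(n)]

def sq_norm(v):
    # Squared Euclidean norm ||v||^2
    return sum(c * c for c in v)

mean = [sum(x[j] for x in xs) / n for j in range(3)]

# Variance decomposition: E||x - Ex||^2 = E||x||^2 - ||Ex||^2
lhs = sum(sq_norm([x[j] - mean[j] for j in range(3)]) for x in xs) / n
rhs = sum(sq_norm(x) for x in xs) / n - sq_norm(mean)
assert abs(lhs - rhs) < 1e-9

# Norm inequality: ||a + b||^2 <= 2||a||^2 + 2||b||^2
a = [1.0, -2.0, 3.0]
b = [0.5, 4.0, -1.0]
assert sq_norm([a[j] + b[j] for j in range(3)]) <= 2 * sq_norm(a) + 2 * sq_norm(b)
```

The inequality follows from (a + b)² ≤ 2a² + 2b² applied coordinate-wise, i.e. from the convexity of the squared norm.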
Similar papers
Fast-and-Light Stochastic ADMM
The alternating direction method of multipliers (ADMM) is a powerful optimization solver in machine learning. Recently, stochastic ADMM has been integrated with variance reduction methods for stochastic gradient, leading to SAG-ADMM and SDCA-ADMM that have fast convergence rates and low iteration complexities. However, their space requirements can still be high. In this paper, we propose an int...
Stochastic Variance-Reduced ADMM
The alternating direction method of multipliers (ADMM) is a powerful optimization solver in machine learning. Recently, stochastic ADMM has been integrated with variance reduction methods for stochastic gradient, leading to SAG-ADMM and SDCA-ADMM that have fast convergence rates and low iteration complexities. However, their space requirements can still be high. In this paper, we propose an inte...
Fast Stochastic Alternating Direction Method of Multipliers
In this paper, we propose a new stochastic alternating direction method of multipliers (ADMM) algorithm, which incrementally approximates the full gradient in the linearized ADMM formulation. Besides having a low per-iteration complexity as existing stochastic ADMM algorithms, the proposed algorithm improves the convergence rate on convex problems from O(1/√T) to O(1/T), where T is the ...
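The linearized stochastic x-update described in this abstract can be illustrated on a toy lasso problem, min (1/2n)∑(dᵢᵀx − cᵢ)² + λ‖z‖₁ subject to x − z = 0: the x-step is a single gradient step on the augmented Lagrangian using one sampled data point, the z-step is soft-thresholding, and the dual variable accumulates the residual. This is a hedged sketch under those assumptions, not the paper's algorithm; all names (D, c, soft, obj) and the constants are illustrative:

```python
import random

random.seed(0)

# Toy data for min_x f(x) + lam*||z||_1  s.t.  x - z = 0,
# with f(x) = (1/2n) * sum_i (d_i^T x - c_i)^2.
n, p = 200, 5
D = [[random.gauss(0, 1) for _ in range(p)] for _ in range(n)]
x_true = [1.0, -2.0, 0.0, 0.0, 3.0]
c = [sum(D[i][j] * x_true[j] for j in range(p)) + 0.01 * random.gauss(0, 1)
     for i in range(n)]

lam, rho, eta = 0.1, 1.0, 0.01
x, z, u = [0.0] * p, [0.0] * p, [0.0] * p

def soft(v, t):
    # Soft-thresholding: prox of t*|.|
    return max(v - t, 0.0) - max(-v - t, 0.0)

def obj(w):
    # Full objective evaluated at w
    fit = sum((sum(D[i][j] * w[j] for j in range(p)) - c[i]) ** 2
              for i in range(n)) / (2 * n)
    return fit + lam * sum(abs(wj) for wj in w)

for k in range(20000):
    i = random.randrange(n)                       # sample one data point
    r = sum(D[i][j] * x[j] for j in range(p)) - c[i]
    g = [D[i][j] * r for j in range(p)]           # stochastic gradient of f
    # Linearized x-update: one gradient step on the augmented Lagrangian
    x = [x[j] - eta * (g[j] + rho * (x[j] - z[j] + u[j])) for j in range(p)]
    # z-update: proximal (soft-thresholding) step
    z = [soft(x[j] + u[j], lam / rho) for j in range(p)]
    # Dual update on the constraint residual x - z
    u = [u[j] + x[j] - z[j] for j in range(p)]
```

After running, obj(z) is well below obj at the zero vector; the linearization is what keeps each x-update at O(p) cost instead of solving a full subproblem.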
Accelerated Stochastic ADMM with Variance Reduction
Alternating Direction Method of Multipliers (ADMM) is a popular method for solving machine learning problems. Stochastic ADMM was first proposed to reduce the per-iteration computational complexity, making it more suitable for big-data problems. Recently, variance reduction techniques have been integrated with stochastic ADMM to obtain a fast convergence rate, such as SAG-ADMM an...
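The variance-reduction idea behind the SVRG-style estimators these methods build on is to correct a single-sample gradient with a full gradient computed at a periodic snapshot x̃: v = ∇fᵢ(x) − ∇fᵢ(x̃) + ∇f(x̃), which is unbiased and has small variance when x is close to x̃. A small sketch comparing the mean squared error of the plain stochastic gradient and the corrected estimator near a snapshot (data, dimensions, and iterates are illustrative assumptions):

```python
import random

random.seed(1)
n, p = 500, 4
A = [[random.gauss(0, 1) for _ in range(p)] for _ in range(n)]
b = [random.gauss(0, 1) for _ in range(n)]

def grad_i(w, i):
    # Gradient of the i-th least-squares term (a_i^T w - b_i)^2 / 2
    r = sum(A[i][j] * w[j] for j in range(p)) - b[i]
    return [A[i][j] * r for j in range(p)]

def full_grad(w):
    g = [0.0] * p
    for i in range(n):
        gi = grad_i(w, i)
        for j in range(p):
            g[j] += gi[j] / n
    return g

w = [0.5] * p          # current iterate
snap = [0.45] * p      # snapshot, close to w as in late iterations
mu = full_grad(snap)   # full gradient at the snapshot
g_full = full_grad(w)  # true gradient at w (reference)

def sq_dist(v, ref):
    return sum((v[j] - ref[j]) ** 2 for j in range(p))

# Exact mean squared deviation from the true gradient, over all samples
sgd_err = svrg_err = 0.0
for i in range(n):
    gi = grad_i(w, i)
    v = [gi[j] - grad_i(snap, i)[j] + mu[j] for j in range(p)]
    sgd_err += sq_dist(gi, g_full) / n
    svrg_err += sq_dist(v, g_full) / n

assert svrg_err < sgd_err  # variance reduction near the snapshot
```

Both estimators have the same mean (the true gradient), but the corrected one's error scales with ‖w − x̃‖, which shrinks as the algorithm converges; this is what lets these methods use a constant step size.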
Fast Stochastic Variance Reduced ADMM for Stochastic Composition Optimization
We consider the stochastic composition optimization problem proposed in [17], which has applications ranging from estimation to statistical and machine learning. We propose the first ADMM-based algorithm, named com-SVR-ADMM, and show that com-SVR-ADMM converges linearly for strongly convex and Lipschitz smooth objectives, and has a convergence rate of O(log S / S), which improves upon the O(S^(−4/9)) r...