Towards an optimal stochastic alternating direction method of multipliers: Supplementary material

Authors

  • Samaneh Azadi
  • Suvrit Sra
Abstract

1. The strongly convex case

1.1. Proof of Lemma 1

Lemma 1. Let $f$ be $\mu$-strongly convex, and let $x_{k+1}$, $y_{k+1}$ and $\lambda_{k+1}$ be computed as per Alg. 2. For all $x \in \mathcal{X}$, $y \in \mathcal{Y}$, and $w \in \Omega$, it holds for $k \ge 0$ that

\[
f(x_k) - f(x) + h(y_{k+1}) - h(y) + \langle w_{k+1} - w,\, F(w_{k+1})\rangle \;\le\; \frac{\eta_k}{2}\|g_k\|_2^2 - \frac{\mu}{2}\Delta_k + \frac{1}{2\eta_k}\bigl[\Delta_k - \Delta_{k+1}\bigr] + \frac{\beta}{2}\bigl[A_k - A_{k+1}\bigr] + \frac{1}{2\beta}\bigl[L_k - L_{k+1}\bigr] + \langle \delta_k,\, x_k - x\rangle.
\]

Proof. By the strong convexity of $f$, we have

\[
f(x_k) - f(x) \;\le\; \langle f'(x_k),\, x_k - x\rangle - \frac{\mu}{2}\|x_k - x\|_2^2. \tag{2}
\]

As before, we use $\delta_k = f'(x_k) - g_k$, but this time we split the $f'(x_k)$ term differently:

\[
\langle f'(x_k),\, x_k - x\rangle \;=\; \langle g_k,\, x_k - x\rangle + \langle \delta_k,\, x_k - x\rangle.
\]

For the first part we just follow the derivation of (Ouyang et al., 2013) up to the point where the critical difference arises, namely inequality (9); for the reader's convenience we include all the details below.

From the optimality condition of Line 2, it follows that

\[
\bigl\langle g_k + \beta A^T(Ax_{k+1} + By_k - b) - A^T\lambda_k + \eta_k^{-1}(x_{k+1} - x_k),\; x - x_{k+1}\bigr\rangle \;\ge\; 0, \qquad \forall x \in \mathcal{X}.
\]

Rearranging this inequality, we obtain

\[
\langle g_k,\, x_{k+1} - x\rangle \;\le\; \bigl\langle \beta A^T(Ax_{k+1} + By_k - b) - A^T\lambda_k,\; x - x_{k+1}\bigr\rangle + \frac{1}{\eta_k}\langle x_{k+1} - x_k,\; x - x_{k+1}\rangle,
\]

so that a rearrangement similar to (20), via the identity $2\langle x_{k+1} - x_k,\, x - x_{k+1}\rangle = \|x - x_k\|_2^2 - \|x - x_{k+1}\|_2^2 - \|x_{k+1} - x_k\|_2^2$, yields

\[
\langle g_k,\, x_{k+1} - x\rangle \;\le\; \bigl\langle \beta A^T(Ax_{k+1} + By_k - b) - A^T\lambda_k,\; x - x_{k+1}\bigr\rangle + \frac{1}{2\eta_k}\bigl[\|x - x_k\|_2^2 - \|x - x_{k+1}\|_2^2 - \|x_{k+1} - x_k\|_2^2\bigr].
\]
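To make the x-step concrete: Line 2 of Alg. 2 is not reproduced on this page, but the optimality condition quoted above is exactly that of the linearized update $x_{k+1} = \arg\min_{x \in \mathcal{X}} \langle g_k, x\rangle + \frac{\beta}{2}\|Ax + By_k - b\|_2^2 - \langle \lambda_k, Ax\rangle + \frac{1}{2\eta_k}\|x - x_k\|_2^2$. The following is a minimal sketch of that step for the unconstrained case $\mathcal{X} = \mathbb{R}^n$ only; the function name sadmm_x_update is ours, not the authors', and the stochastic gradient g_k is assumed to be supplied by the caller's oracle.

```python
import numpy as np

def sadmm_x_update(x_k, y_k, lam_k, g_k, A, B, b, beta, eta_k):
    """Sketch of the linearized SADMM x-step for X = R^n (hypothetical helper).

    Solves  argmin_x  <g_k, x> + (beta/2)||A x + B y_k - b||^2
                      - <lam_k, A x> + (1/(2 eta_k))||x - x_k||^2.

    Setting the gradient to zero gives the linear system
        (beta A^T A + I/eta_k) x = x_k/eta_k - g_k - beta A^T (B y_k - b) + A^T lam_k,
    i.e. the optimality condition in the proof holds with equality when X = R^n.
    """
    n = x_k.shape[0]
    H = beta * (A.T @ A) + np.eye(n) / eta_k               # positive definite
    rhs = x_k / eta_k - g_k - beta * A.T @ (B @ y_k - b) + A.T @ lam_k
    return np.linalg.solve(H, rhs)
```

For a general constraint set $\mathcal{X}$ the linear solve is replaced by a projected or proximal step, and stationarity relaxes to exactly the variational inequality used in the proof.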


Similar articles

Towards an optimal stochastic alternating direction method of multipliers

We study regularized stochastic convex optimization subject to linear equality constraints. This class of problems was recently also studied by Ouyang et al. (2013) and Suzuki (2013); both introduced similar stochastic alternating direction method of multipliers (SADMM) algorithms. However, the analysis of both papers led to suboptimal convergence rates. This paper presents two new SADMM method...
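Concretely, reading off the notation of the supplementary proof above (our paraphrase, not text from the abstract), the problem class has the template

\[
\min_{x \in \mathcal{X},\, y \in \mathcal{Y}} \; f(x) + h(y) \quad \text{s.t.} \quad Ax + By = b,
\]

where $f$ is accessed only through a stochastic gradient oracle (returning $g_k$ with error $\delta_k = f'(x_k) - g_k$) and $h$ is the regularizer handled by the $y$-step.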


Modified Convex Data Clustering Algorithm Based on Alternating Direction Method of Multipliers

Given that the main weakness of most standard methods, including k-means and hierarchical data clustering, is their sensitivity to initialization and their tendency to get trapped in local minima, this paper proposes a modification of convex data clustering in which there is no need to be particular about how initial values are selected. By properly converting the task of optimization to an equivalent...


Stochastic Dual Coordinate Ascent with Alternating Direction Method of Multipliers

We propose a new stochastic dual coordinate ascent technique that can be applied to a wide range of regularized learning problems. Our method is based on the Alternating Direction Method of Multipliers (ADMM), which allows it to handle complex regularization functions such as structured regularizations. Our method naturally affords mini-batch updates, which speed up convergence. We show that, under mi...


Supplementary Material: Proximal Deep Structured Models

In this supplementary material we first show the analogy between other proximal methods and our proposed deep structured model, including the proximal gradient method and the alternating direction method of multipliers. After that, we provide more quantitative results on the three experiments. 1. More Proximal Algorithm Examples. Let us consider the problem we defined in Eq. 1 of our main submission. ...


Fast Stochastic Alternating Direction Method of Multipliers

In this paper, we propose a new stochastic alternating direction method of multipliers (ADMM) algorithm, which incrementally approximates the full gradient in the linearized ADMM formulation. Besides having a per-iteration complexity as low as that of existing stochastic ADMM algorithms, the proposed algorithm improves the convergence rate on convex problems from $O(1/\sqrt{T})$ to $O(1/T)$, where $T$ is the ...
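The "incremental approximation of the full gradient" mentioned here is, in spirit, a SAG-style running average of per-sample gradients. The sketch below shows only that building block, under our own naming (make_incremental_gradient and the per-sample oracle grad_i are hypothetical, and the ADMM coupling is omitted):

```python
import numpy as np

def make_incremental_gradient(grad_i, n_samples, dim):
    """Hypothetical helper: maintain a table of last-seen per-sample
    gradients and return their running average as a full-gradient estimate."""
    table = np.zeros((n_samples, dim))   # last gradient computed for each sample
    avg = np.zeros(dim)                  # running average of the table rows

    def approx_full_gradient(x, i):
        nonlocal avg
        g_new = grad_i(x, i)                          # one fresh stochastic gradient
        avg = avg + (g_new - table[i]) / n_samples    # O(dim) correction of the mean
        table[i] = g_new
        return avg          # estimate of (1/n) * sum_j grad_j(x)

    return approx_full_gradient
```

Each call costs a single stochastic gradient plus an O(dim) update, yet the returned estimate tracks the full gradient increasingly well as the table fills; this variance reduction is the mechanism behind the improved rate quoted above.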


Adaptive Stochastic Alternating Direction Method of Multipliers

The Alternating Direction Method of Multipliers (ADMM) has been studied for years. Traditional ADMM algorithms need to compute, at each iteration, an (empirical) expected loss function on all training examples, resulting in a computational complexity proportional to the number of training examples. To reduce the complexity, stochastic ADMM algorithms were proposed to replace the expected loss f...




Publication date: 2014