Stochastic alternating direction method of multipliers for Byzantine-robust distributed learning

Abstract

This paper aims to solve a distributed learning problem under Byzantine attacks. In the underlying master-worker architecture, there exist a number of unknown but malicious workers that can send arbitrary messages to the master and deviate the learning process; these are called Byzantine workers. In the literature, a total variation (TV) norm-penalized approximation formulation has been investigated to alleviate the effect of Byzantine attacks. To be specific, the TV norm penalty not only forces the local variables at the regular workers to be close, but is robust to the outliers sent by the Byzantine workers as well. To handle this separable formulation, we propose a Byzantine-robust stochastic alternating direction method of multipliers (ADMM). Theoretically, we prove that the proposed method converges to a bounded neighborhood of the optimal solution at a rate of O(1/k) under mild assumptions, where k is the number of iterations and the size of the neighborhood is determined by the number of Byzantine workers. Numerical experiments on the MNIST and COVERTYPE datasets further demonstrate the effectiveness of the proposed method under various Byzantine attacks.
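One common way to write a TV-penalized consensus formulation of the kind the abstract describes is the following; the notation here is assumed for illustration and is not taken from the paper:

```latex
\min_{x_0,\,\{x_i\}} \;
\sum_{i \in \mathcal{R}} \mathbb{E}\, F(x_i;\xi_i)
\;+\; \lambda \sum_{i \in \mathcal{R}} \| x_i - x_0 \|_1 ,
```

where $x_0$ is the master's variable, $x_i$ the local variable at regular worker $i \in \mathcal{R}$, and $\lambda > 0$ the penalty weight. The $\ell_1$ coupling terms (a TV-type penalty) pull the regular workers' variables toward the master's while remaining robust to outlier messages, since the $\ell_1$ norm grows only linearly in the size of a deviation.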

Similar articles

Adaptive Stochastic Alternating Direction Method of Multipliers

The Alternating Direction Method of Multipliers (ADMM) has been studied for years. Traditional ADMM algorithms need to compute, at each iteration, an (empirical) expected loss function on all training examples, resulting in a computational complexity proportional to the number of training examples. To reduce the complexity, stochastic ADMM algorithms were proposed to replace the expected loss f...

Fast Stochastic Alternating Direction Method of Multipliers

In this paper, we propose a new stochastic alternating direction method of multipliers (ADMM) algorithm, which incrementally approximates the full gradient in the linearized ADMM formulation. Besides having a low per-iteration complexity as existing stochastic ADMM algorithms, the proposed algorithm improves the convergence rate on convex problems from O(1/√T) to O(1/T), where T is the ...
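The incremental full-gradient idea mentioned above can be shown in isolation with a SAG-style estimator: keep one stored gradient per sample, refresh a single entry each iteration, and step along the running average. This is a generic sketch applied to plain least squares, not the algorithm of the paper, and all names and parameters are illustrative:

```python
import random

def sag_least_squares(A, b, lr=0.1, iters=500, seed=0):
    """SAG-style incremental full-gradient descent on
    (1/n) * sum_i 0.5 * (a_i . x - b_i)^2."""
    rng = random.Random(seed)
    n, d = len(A), len(A[0])
    x = [0.0] * d
    table = [[0.0] * d for _ in range(n)]  # last stored gradient per sample
    grad_sum = [0.0] * d                   # running sum of the stored gradients
    for _ in range(iters):
        i = rng.randrange(n)               # refresh only one sample's gradient
        r = sum(aj * xj for aj, xj in zip(A[i], x)) - b[i]
        new_g = [r * aj for aj in A[i]]
        grad_sum = [s + ng - og for s, ng, og in zip(grad_sum, new_g, table[i])]
        table[i] = new_g
        # step along the (incrementally maintained) average gradient
        x = [xj - lr * s / n for xj, s in zip(x, grad_sum)]
    return x
```

Because the descent direction averages n stored gradients rather than one fresh sample, its variance shrinks as the iterates stabilize, which is what enables the faster rates such methods claim.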

Stochastic Alternating Direction Method of Multipliers

The Alternating Direction Method of Multipliers (ADMM) has received lots of attention recently due to the tremendous demand from large-scale and data-distributed machine learning applications. In this paper, we present a stochastic setting for optimization problems with non-smooth composite objective functions. To solve this problem, we propose a stochastic ADMM algorithm. Our algorithm applies...
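The one-sample-per-iteration idea described above can be sketched on a lasso problem with the consensus constraint x = y. This is a generic linearized stochastic ADMM, hedged as an illustration rather than the algorithm of any paper listed here; the parameter values are arbitrary:

```python
import random

def soft_threshold(v, t):
    # Elementwise soft-thresholding: the proximal operator of t * ||.||_1.
    return [max(abs(vi) - t, 0.0) * (1.0 if vi >= 0 else -1.0) for vi in v]

def stochastic_admm_lasso(A, b, lam=0.01, rho=1.0, eta0=0.1, iters=4000, seed=0):
    """Linearized stochastic ADMM for
        min (1/n) sum_i 0.5*(a_i . x - b_i)^2 + lam * ||y||_1   s.t.  x = y,
    using one sampled gradient per iteration and a diminishing step size."""
    rng = random.Random(seed)
    n, d = len(A), len(A[0])
    x, y, u = [0.0] * d, [0.0] * d, [0.0] * d  # u is the scaled dual variable
    for k in range(iters):
        i = rng.randrange(n)                    # sample one data point
        r = sum(aj * xj for aj, xj in zip(A[i], x)) - b[i]
        g = [r * aj for aj in A[i]]             # stochastic gradient of the loss
        eta = eta0 / (k + 1) ** 0.5             # diminishing step size
        # x-update: argmin_x g.x + (rho/2)||x - y + u||^2 + (1/(2*eta))||x - x_k||^2
        x = [(xj / eta + rho * (yj - uj) - gj) / (1.0 / eta + rho)
             for xj, yj, uj, gj in zip(x, y, u, g)]
        # y-update: prox of the l1 term evaluated at x + u
        y = soft_threshold([xj + uj for xj, uj in zip(x, u)], lam / rho)
        # dual ascent on the consensus residual x - y
        u = [uj + xj - yj for uj, xj, yj in zip(u, x, y)]
    return y
```

The key point is that the x-update touches only one training example per iteration; the quadratic proximal term (1/(2η))||x − x_k||² keeps the step stable despite the noisy gradient.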

Scalable Stochastic Alternating Direction Method of Multipliers

Alternating direction method of multipliers (ADMM) has been widely used in many applications due to its promising performance to solve complex regularization problems and large-scale distributed optimization problems. Stochastic ADMM, which visits only one sample or a mini-batch of samples each time, has recently been proved to achieve better performance than batch ADMM. However, most stochasti...

Towards optimal stochastic alternating direction method of multipliers: Supplementary material

1. The strongly convex case. 1.1. Proof of Lemma 1. Lemma 1. Let $f$ be $\mu$-strongly convex, and let $x_{k+1}$, $y_{k+1}$ and $\lambda_{k+1}$ be computed as per Alg. 2. For all $x \in \mathcal{X}$, $y \in \mathcal{Y}$, and $w \in \Omega$, it holds for $k \ge 0$ that
$$f(x_k) - f(x) + h(y_{k+1}) - h(y) + \langle w_{k+1} - w, F(w_{k+1}) \rangle \le \frac{\eta_k}{2} \|g_k\|_2^2 - \frac{\mu}{2} \Delta_k + \frac{1}{2\eta_k}\left[\Delta_k - \Delta_{k+1}\right] + \frac{\beta}{2}\left[A_k - A_{k+1}\right] + \frac{1}{2\beta}\left[L_k - L_{k+1}\right] + \langle \delta_k, x_k - x \rangle.$$
By the strong con...

Journal

Journal title: Signal Processing

Year: 2022

ISSN: 0165-1684, 1872-7557

DOI: https://doi.org/10.1016/j.sigpro.2022.108501