Robust Distributed Optimization With Randomly Corrupted Gradients

Authors

Abstract

In this paper, we propose a first-order distributed optimization algorithm that is provably robust to Byzantine failures, i.e., arbitrary and potentially adversarial behavior, in a setting where all the participating agents are prone to failure. We model each agent's state over time as a two-state Markov chain that indicates Byzantine or trustworthy behavior at different time instants. We set no restrictions on the maximum number of Byzantine agents at any given time. We design our method based on three layers of defense: 1) temporal aggregation, 2) spatial aggregation, and 3) gradient normalization. We study two settings for stochastic optimization, namely Sample Average Approximation and Stochastic Approximation, and provide convergence guarantees for strongly convex and smooth non-convex cost functions.
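The three defense layers named in the abstract can be illustrated with a minimal sketch. This is not the paper's algorithm; the function names, the window size, the choice of a coordinate-wise median as the spatial aggregator, and the toy data are all illustrative assumptions.

```python
import numpy as np

def temporal_aggregate(history, window=5):
    """Layer 1 (temporal aggregation): average each agent's recent
    gradients over a sliding window, diluting transiently corrupted ones."""
    return np.mean(history[-window:], axis=0)

def spatial_aggregate(grads):
    """Layer 2 (spatial aggregation): combine gradients across agents with
    a robust statistic; the coordinate-wise median is one common choice."""
    return np.median(grads, axis=0)

def normalize(g, eps=1e-12):
    """Layer 3 (gradient normalization): bound the step's magnitude so no
    single corrupted aggregate can dominate the update."""
    return g / (np.linalg.norm(g) + eps)

# Toy usage: 4 agents, 3-dimensional gradients, 6 time steps of history.
rng = np.random.default_rng(0)
histories = [rng.normal(size=(6, 3)) for _ in range(4)]
temporal = np.stack([temporal_aggregate(h) for h in histories])  # (4, 3)
direction = normalize(spatial_aggregate(temporal))               # unit vector
print(direction.shape)
```

A server would then take a small step along `direction`; the point of the layering is that an agent must behave consistently badly in both time and space to bias the normalized update.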


Similar Articles

Robust Matrix Completion with Corrupted Columns

This paper considers the problem of matrix completion when some number of the columns are arbitrarily corrupted, potentially by a malicious adversary. It is well known that standard algorithms for matrix completion can return arbitrarily poor results if even a single column is corrupted. What can be done if a large number, or even a constant fraction, of columns are corrupted? In this paper, w...


Distributed robust estimation over randomly switching networks using H∞ consensus

The paper considers a distributed robust estimation problem over a network with Markovian randomly varying topology. The objective is to deal with network variations locally, by switching observer gains at affected nodes only. We propose sufficient conditions which guarantee a suboptimal H∞ level of relative disagreement of estimates in such observer networks. When the status of the network is ...


Bayesian Optimization with Gradients

Bayesian optimization has been successful at global optimization of expensive-to-evaluate multimodal objective functions. However, unlike most optimization methods, Bayesian optimization typically does not use derivative information. In this paper we show how Bayesian optimization can exploit derivative information to find good solutions with fewer objective function evaluations. In particular, ...


Robust optimization of distributed parameter systems

In this paper, we will discuss the use of new methods from robust control, and especially H∞ theory, for the explicit construction of optimal feedback compensators for several practical distributed parameter systems. Indeed, based on operator and interpolation theoretic methods, one can now solve the standard H∞ control problem for a broad class of systems modelled by PDEs. In our approach, the compl...


Population growth with randomly distributed jumps

The growth of populations with continuous deterministic and random jump components is treated. Three special models, in which random jumps occur at the event times of a Poisson process and which admit formal explicit solutions, are considered: A) logistic growth with random disasters having exponentially distributed amplitudes; B) logistic growth with random disasters causing the removal of a unifor...



Journal

Journal title: IEEE Transactions on Signal Processing

Year: 2022

ISSN: 1053-587X, 1941-0476

DOI: https://doi.org/10.1109/tsp.2022.3185885