Decentralized nonconvex optimization with guaranteed privacy and accuracy

Authors

Yongqiang Wang, Tamer Başar

Abstract

Privacy protection and nonconvexity are two challenging problems in decentralized optimization and learning involving sensitive data. Despite some recent advances addressing each of the two problems separately, no results have been reported that provide theoretical guarantees on both privacy protection and saddle/maximum avoidance in nonconvex optimization. We propose a new algorithm for decentralized nonconvex optimization that can enable both rigorous differential privacy and saddle/maximum avoiding performance. The new algorithm allows the incorporation of persistent additive noise to data samples, gradients, and intermediate optimization variables without losing provable convergence, thus circumventing the dilemma of trading accuracy for privacy in differential-privacy design. More interestingly, the algorithm is theoretically proven to be able to efficiently guarantee accuracy by avoiding convergence to local maxima and saddle points, which has not been reported before in the literature. The algorithm is efficient in both communication (it only shares one variable in each iteration) and computation (it is encryption-free), and hence is promising for large-scale optimization problems with high-dimensional parameters. Numerical experiments on a decentralized estimation problem and an Independent Component Analysis (ICA) problem confirm the effectiveness of the proposed approach.
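
The abstract describes a differentially private decentralized gradient method in which each agent shares a single noise-perturbed variable per iteration. Below is a minimal sketch of that pattern, assuming a ring network, placeholder nonconvex losses, and simple weight/noise schedules; the paper's actual mechanism and its privacy/convergence guarantees are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup: n agents on a ring, each with a private local loss.
n, d = 5, 3
W = np.zeros((n, n))                        # doubly stochastic ring mixing matrix
for i in range(n):
    W[i, i] = 0.5
    W[i, (i + 1) % n] = W[i, (i - 1) % n] = 0.25

targets = rng.standard_normal((n, d))       # stand-in for each agent's private data

def grad_f(i, x):
    # Placeholder nonconvex local gradient: quadratic pull toward the
    # agent's data plus a small quartic term.
    return (x - targets[i]) + 0.05 * x * (x @ x)

sigma = 0.3                                 # persistent additive (DP) noise level
x = rng.standard_normal((n, d))             # row i holds agent i's iterate
for k in range(500):
    gamma = 1.0 / (k + 10)                  # decaying weight tames the persistent noise
    noisy = x + sigma * rng.standard_normal(x.shape)  # the ONE shared variable per agent
    grads = np.stack([grad_f(i, x[i]) for i in range(n)])
    x = x + gamma * (W @ noisy - x) - gamma * grads
```

With a constant weight instead of the decaying gamma, the persistent noise would keep the iterates from settling; that tension is the accuracy-privacy dilemma the abstract says the algorithm circumvents.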


Similar articles

Enigma: Decentralized Computation Platform with Guaranteed Privacy

A peer-to-peer network, enabling different parties to jointly store and run computations on data while keeping the data completely private. Enigma’s computational model is based on a highly optimized version of secure multi-party computation, guaranteed by a verifiable secret-sharing scheme. For storage, we use a modified distributed hashtable for holding secret-shared data. An external blockch...
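
Enigma's guarantees rest on secret sharing: each party holds a share that reveals nothing on its own, yet linear operations can be carried out directly on the shares. Below is a minimal additive-secret-sharing sketch over a prime field; Enigma itself uses a verifiable secret-sharing scheme inside a full MPC protocol, and the modulus and party count here are illustrative only.

```python
import secrets

P = 2**61 - 1  # a Mersenne prime; the field modulus (illustrative)

def share(secret: int, n_parties: int) -> list[int]:
    """Split `secret` into n additive shares that sum to it mod P."""
    shares = [secrets.randbelow(P) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def reconstruct(shares: list[int]) -> int:
    return sum(shares) % P

# Individual shares are uniformly random, but sums can be computed share-wise:
a, b = 1234, 5678
sa, sb = share(a, 3), share(b, 3)
sum_shares = [(x + y) % P for x, y in zip(sa, sb)]
assert reconstruct(sum_shares) == (a + b) % P
```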

Privacy-preserving Decentralized Optimization Based on ADMM

In this paper, we address the problem of privacy preservation in decentralized optimization, where N agents cooperatively minimize an objective function that is the sum of N strongly convex functions private to these individual agents. In most existing decentralized optimization approaches, participating agents exchange and disclose estimates explicitly, which may not be desirable when the estim...
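
As a concrete reference point, consensus ADMM for N agents minimizing a sum of strongly convex functions looks as follows. This is a minimal global-consensus sketch with quadratic losses f_i(x) = 0.5*||x - c_i||^2; the paper's decentralized, privacy-preserving variant exchanges estimates only between graph neighbors and perturbs them, which is not shown here.

```python
import numpy as np

rng = np.random.default_rng(1)
N, d, rho = 4, 2, 1.0
c = rng.standard_normal((N, d))           # each agent's private data

x = np.zeros((N, d)); z = np.zeros(d); u = np.zeros((N, d))
for _ in range(100):
    x = (c + rho * (z - u)) / (1 + rho)   # local proximal steps (closed form)
    z = (x + u).mean(axis=0)              # consensus/averaging step
    u = u + x - z                         # dual (scaled multiplier) updates

# The consensus variable converges to the joint minimizer, mean(c_i).
assert np.allclose(z, c.mean(axis=0), atol=1e-6)
```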

On Nonconvex Decentralized Gradient Descent

Consensus optimization has received considerable attention in recent years. A number of decentralized algorithms have been proposed for convex consensus optimization. However, on consensus optimization with nonconvex objective functions, our understanding of the behavior of these algorithms is limited. When we lose convexity, we cannot hope for obtaining globally optimal solutions (though we st...
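
For reference, the decentralized gradient descent (DGD) iteration studied in this setting is commonly written as

    x_i^{k+1} = \sum_{j=1}^{N} w_{ij} x_j^k - \alpha \nabla f_i(x_i^k),

where w_{ij} are the entries of a doubly stochastic mixing matrix and \alpha is the step size (notation assumed here, not taken from the truncated abstract). Each agent averages its neighbors' iterates and then takes a local gradient step; with nonconvex f_i, the question the abstract raises is to which points such iterates can be shown to converge.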

Convex Optimization with Nonconvex Oracles

In machine learning and optimization, one often wants to minimize a convex objective function F but can only evaluate a noisy approximation F̂ to it. Even though F is convex, the noise may render F̂ nonconvex, making the task of minimizing F intractable in general. As a consequence, several works in theoretical computer science, machine learning and optimization have focused on coming up with pol...
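
One standard remedy in this setting is randomized smoothing: average the noisy oracle over random perturbations so that high-frequency nonconvex ripple is washed out, then descend on the smoothed function. The sketch below is only illustrative of that idea, with a toy quadratic F and an artificial sinusoidal ripple standing in for the oracle noise; it is not the paper's specific construction.

```python
import numpy as np

rng = np.random.default_rng(2)

def F(x):
    """The underlying convex objective (a simple quadratic)."""
    return float(x @ x)

def F_hat(x):
    """Noisy, nonconvex approximation of F, as in the abstract's setting."""
    return F(x) + 0.1 * np.sin(50.0 * x.sum())

def smoothed_grad(x, mu=0.5, m=100):
    # Two-point Gaussian-smoothing gradient estimator: averages
    # (F_hat(x + mu*u) - F_hat(x)) * u / mu over random directions u,
    # estimating the gradient of a smoothed F_hat with the ripple washed out.
    u = rng.standard_normal((m, x.size))
    diffs = np.array([F_hat(x + mu * ui) - F_hat(x) for ui in u])
    return (diffs[:, None] * u).mean(axis=0) / mu

x = np.full(3, 2.0)
for _ in range(200):
    x = x - 0.1 * smoothed_grad(x)
# x drifts toward the minimizer of F (the origin) even though F_hat has
# many spurious local minima from the sine ripple.
```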

On Guaranteed Accuracy Computation

The concept of guaranteed accuracy computation is a natural one: the user can specify any a priori relative or absolute precision bound on the numerical values which are to be computed in an algorithm. It is a generalization of guaranteed sign computation, a concept originally proposed to solve the ubiquitous problem of non-robustness in geometric algorithms. In this paper, we investigate som...
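
The canonical instance of guaranteed sign computation is the geometric orientation predicate, where floating-point rounding can report the wrong turn direction for nearly collinear points. A minimal sketch using exact rational arithmetic, which trivially guarantees the sign (production systems use adaptive-precision filters for speed; this example is only illustrative):

```python
from fractions import Fraction

def orient2d(ax, ay, bx, by, cx, cy):
    """Exact sign of the orientation determinant for points A, B, C:
    +1 = counter-clockwise, -1 = clockwise, 0 = collinear."""
    det = (Fraction(bx) - Fraction(ax)) * (Fraction(cy) - Fraction(ay)) \
        - (Fraction(by) - Fraction(ay)) * (Fraction(cx) - Fraction(ax))
    return (det > 0) - (det < 0)

# Exact arithmetic never mis-signs degenerate or near-degenerate inputs:
assert orient2d(0, 0, 1, 1, 2, 2) == 0   # exactly collinear
assert orient2d(0, 0, 1, 0, 1, 1) == 1   # counter-clockwise turn
```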

Journal

Journal title: Automatica

Year: 2023

ISSN: 1873-2836, 0005-1098

DOI: https://doi.org/10.1016/j.automatica.2023.110858