Quasi-Newton updating for large-scale distributed learning

Authors

Abstract

Distributed computing is critically important for modern statistical analysis. Herein, we develop a distributed quasi-Newton (DQN) framework with excellent statistical, computation, and communication efficiency. In the DQN method, no Hessian matrix inversion or communication is needed. This considerably reduces the computation and communication complexity of the proposed method. Notably, related existing methods only analyse numerical convergence and require a diverging number of iterations to converge. However, we investigate the statistical properties of the DQN method and theoretically demonstrate that the resulting estimator is statistically efficient after a small number of iterations under mild conditions. Extensive numerical analyses demonstrate the finite sample performance.
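
The abstract gives no implementation details, so the following is only a generic sketch of how a quasi-Newton scheme avoids Hessian inversion: simulated workers return local gradients, and a driver maintains an inverse-Hessian approximation via the standard BFGS recursion. The data layout, the averaging scheme, and all names are illustrative assumptions, not the authors' DQN algorithm.

import numpy as np

# Hypothetical setup: a least-squares loss sharded across `n_workers`
# machines (simulated here as a list of arrays). This is NOT the paper's
# DQN algorithm -- the abstract gives no details -- only a generic BFGS
# inverse-Hessian update, which avoids explicit matrix inversion.

rng = np.random.default_rng(0)
p, n_workers, n_local = 5, 4, 200
X = [rng.standard_normal((n_local, p)) for _ in range(n_workers)]
beta_true = rng.standard_normal(p)
y = [Xk @ beta_true + 0.1 * rng.standard_normal(n_local) for Xk in X]

def grad(beta):
    # each "worker" returns its local gradient; the driver averages them
    gs = [Xk.T @ (Xk @ beta - yk) / n_local for Xk, yk in zip(X, y)]
    return np.mean(gs, axis=0)

beta = np.zeros(p)
H = np.eye(p)                       # inverse-Hessian approximation
g = grad(beta)
for _ in range(50):
    beta_new = beta - H @ g         # quasi-Newton step: no inversion
    g_new = grad(beta_new)
    s, yv = beta_new - beta, g_new - g
    denom = yv @ s
    if denom > 1e-12:               # curvature condition
        rho = 1.0 / denom
        I = np.eye(p)
        H = (I - rho * np.outer(s, yv)) @ H @ (I - rho * np.outer(yv, s)) \
            + rho * np.outer(s, s)
    beta, g = beta_new, g_new

print(np.linalg.norm(beta - beta_true))   # should be close to zero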


Similar articles

A Stochastic Quasi-Newton Method for Large-Scale Optimization

The question of how to incorporate curvature information in stochastic approximation methods is challenging. The direct application of classical quasi-Newton updating techniques for deterministic optimization leads to noisy curvature estimates that have harmful effects on the robustness of the iteration. In this paper, we propose a stochastic quasi-Newton method that is efficient, robust and sca...
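
A common remedy in this line of work is to decouple curvature estimation from the noisy gradient steps, for example by forming curvature pairs from a subsampled Hessian-vector product at averaged iterates rather than by differencing stochastic gradients. The sketch below illustrates that idea on synthetic least squares; the batch sizes, update interval L, and step size are illustrative assumptions, not the paper's exact specification.

import numpy as np

# Minimal sketch (synthetic least-squares data) of the key idea: build
# curvature pairs (s, y) from a Hessian-vector product on a fresh
# subsample, instead of differencing noisy stochastic gradients.

rng = np.random.default_rng(1)
n, p = 5000, 10
X = rng.standard_normal((n, p))
w_true = rng.standard_normal(p)
y = X @ w_true + 0.1 * rng.standard_normal(n)

def sgrad(w, idx):                      # stochastic gradient on a minibatch
    return X[idx].T @ (X[idx] @ w - y[idx]) / idx.size

def hess_vec(v, idx):                   # subsampled Hessian-vector product
    return X[idx].T @ (X[idx] @ v) / idx.size

w, w_prev_avg, H = np.zeros(p), None, np.eye(p)
w_sum, L = np.zeros(p), 20              # refresh curvature every L steps
for t in range(1, 2001):
    batch = rng.choice(n, 64, replace=False)
    w -= 0.05 * (H @ sgrad(w, batch))   # quasi-Newton-scaled SGD step
    w_sum += w
    if t % L == 0:
        w_avg = w_sum / L
        w_sum[:] = 0.0
        if w_prev_avg is not None:
            s = w_avg - w_prev_avg
            yv = hess_vec(s, rng.choice(n, 256, replace=False))
            denom = yv @ s
            if denom > 1e-12:           # curvature condition
                rho = 1.0 / denom
                I = np.eye(p)
                H = (I - rho * np.outer(s, yv)) @ H @ (I - rho * np.outer(yv, s)) \
                    + rho * np.outer(s, s)
        w_prev_avg = w_avg

print(np.linalg.norm(w - w_true))       # should be near the noise floor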


A quasi-Newton algorithm for large-scale nonlinear equations

In this paper, the algorithm for large-scale nonlinear equations is designed by the following steps: (i) a conjugate gradient (CG) algorithm is designed as a sub-algorithm to obtain the initial points of the main algorithm, where the sub-algorithm's initial point does not have any restrictions; (ii) a quasi-Newton algorithm with the initial points given by sub-algorithm is defined as main algor...
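
The two-stage pattern described above can be sketched as follows; this is a hedged illustration under assumed details, not the paper's exact algorithm. Stage (i) runs a few nonlinear conjugate-gradient steps on the merit function 0.5*||F(x)||^2 to produce a starting point with no restrictions on where it begins; stage (ii) runs Broyden's quasi-Newton method for F(x) = 0 from that point. The toy system F and all tolerances are assumptions.

import numpy as np

def F(x):   # toy nonlinear system; (1, 2) is one of its roots
    return np.array([x[0]**2 + x[1] - 3.0, x[0] + x[1]**2 - 5.0])

def merit(x):
    return 0.5 * F(x) @ F(x)

def merit_grad(x, h=1e-6):   # central-difference gradient, for brevity
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (merit(x + e) - merit(x - e)) / (2 * h)
    return g

# Stage (i): nonlinear CG (Polak-Ribiere+) with Armijo backtracking.
x = np.zeros(2)
g = merit_grad(x)
d = -g
for _ in range(20):
    t = 1.0
    while merit(x + t * d) > merit(x) + 1e-4 * t * (g @ d) and t > 1e-10:
        t *= 0.5                     # backtrack until merit decreases
    x = x + t * d
    g_new = merit_grad(x)
    beta = max(0.0, (g_new @ (g_new - g)) / (g @ g))
    d = -g_new + beta * d
    if g_new @ d >= 0:               # safeguard: restart along steepest descent
        d = -g_new
    g = g_new

# Stage (ii): Broyden's method -- rank-one Jacobian-approximation updates.
B = np.eye(2)
for _ in range(30):
    Fx = F(x)
    if np.linalg.norm(Fx) < 1e-10:
        break
    s = np.linalg.solve(B, -Fx)      # quasi-Newton step
    x = x + s
    yv = F(x) - Fx
    B += np.outer(yv - B @ s, s) / (s @ s)   # Broyden update

print(x, F(x))                       # F(x) should be near zero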


A Distributed Newton Method for Large Scale Consensus Optimization

In this paper, we propose a distributed Newton method for consensus optimization. Our approach outperforms state-of-the-art methods, including ADMM. The key idea is to exploit the sparsity of the dual Hessian and recast the computation of the Newton step as one of efficiently solving symmetric diagonally dominant linear equations. We validate our algorithm both theoretically and empirically. On...
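
As a rough illustration of the key idea, the Newton step d solves H d = -g; when H is symmetric diagonally dominant (SDD), that system admits fast solvers. The sketch below uses a plain conjugate-gradient routine as a stand-in for the specialised SDD solvers such methods rely on; the Laplacian-plus-identity matrix and the stand-in gradient are illustrative assumptions.

import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=200):
    # textbook CG for symmetric positive definite A
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Example: graph-Laplacian-plus-identity matrix (SDD by construction).
rng = np.random.default_rng(2)
n = 50
W = rng.random((n, n))
W = (W + W.T) / 2
np.fill_diagonal(W, 0)
Lap = np.diag(W.sum(axis=1)) - W         # graph Laplacian
H = Lap + np.eye(n)                      # SDD and positive definite
g = rng.standard_normal(n)               # stand-in gradient

d = conjugate_gradient(H, -g)            # Newton step: H d = -g
print(np.linalg.norm(H @ d + g))         # residual should be ~0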


On Solving Large-scale Limited-memory Quasi-Newton Equations

We consider the problem of solving linear systems of equations with limited-memory members of the restricted Broyden class and symmetric rank-one matrices. In this paper, we present various methods for solving these linear systems, and propose a new approach based on a practical implementation of the compact representation for the inverse of these limited-memory matrices. Using the proposed app...
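
For background on what "solving" such systems means, the classical L-BFGS two-loop recursion applies the inverse of a limited-memory BFGS matrix B to a vector, i.e., returns x = B^{-1} b, without ever forming B. This is a standard textbook device, not the compact-representation approach the paper proposes; the curvature pairs below are synthetic assumptions for the sake of a runnable check.

import numpy as np

def lbfgs_solve(b, S, Y, gamma=1.0):
    # Two-loop recursion: x = B^{-1} b for the L-BFGS matrix defined by
    # curvature pairs (S, Y), with initial matrix B0 = (1/gamma) * I.
    q = b.copy()
    rhos = [1.0 / (y @ s) for s, y in zip(S, Y)]
    alphas = []
    for s, y, rho in zip(reversed(S), reversed(Y), reversed(rhos)):
        a = rho * (s @ q)
        alphas.append(a)
        q -= a * y
    x = gamma * q                        # apply H0 = B0^{-1} = gamma * I
    for (s, y, rho), a in zip(zip(S, Y, rhos), reversed(alphas)):
        beta = rho * (y @ x)
        x += (a - beta) * s
    return x

# Synthetic curvature pairs with positive curvature: y_i = A s_i, A PD.
rng = np.random.default_rng(3)
p, m = 6, 3
S = [rng.standard_normal(p) for _ in range(m)]
A = np.eye(p) + 0.1 * np.ones((p, p))
Y = [A @ s for s in S]

# Build B explicitly via direct BFGS updates from B0 = I, to verify.
B = np.eye(p)
for s, y in zip(S, Y):
    Bs = B @ s
    B = B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / (y @ s)

b = rng.standard_normal(p)
x = lbfgs_solve(b, S, Y, gamma=1.0)
print(np.linalg.norm(B @ x - b))         # should be ~0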


Block Splitting for Large-Scale Distributed Learning

Machine learning and statistics with very large datasets is now a topic of widespread interest, both in academia and industry. Many such tasks can be posed as convex optimization problems, so algorithms for distributed convex optimization serve as a powerful, general-purpose mechanism for training a wide class of models on datasets too large to process on a single machine. In previous work, it ...
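
The snippet is truncated, but the consensus-ADMM pattern from the earlier work it refers to can be sketched as follows for distributed least squares. This is not the block-splitting method itself; the shard sizes, penalty parameter rho, and iteration count are illustrative assumptions.

import numpy as np

# Consensus ADMM sketch: each "worker" holds a data shard and solves a
# small local subproblem; the driver averages to enforce consensus.

rng = np.random.default_rng(4)
p, n_workers, n_local, rho = 8, 5, 100, 1.0
beta_true = rng.standard_normal(p)
shards = []
for _ in range(n_workers):
    Xk = rng.standard_normal((n_local, p))
    yk = Xk @ beta_true + 0.1 * rng.standard_normal(n_local)
    shards.append((Xk, yk))

x = [np.zeros(p) for _ in range(n_workers)]   # local variables
u = [np.zeros(p) for _ in range(n_workers)]   # scaled dual variables
z = np.zeros(p)                                # consensus variable

for _ in range(100):
    for k, (Xk, yk) in enumerate(shards):
        # local subproblem:
        #   argmin_x 0.5*||Xk x - yk||^2 + (rho/2)*||x - z + u_k||^2
        A = Xk.T @ Xk + rho * np.eye(p)
        x[k] = np.linalg.solve(A, Xk.T @ yk + rho * (z - u[k]))
    z = np.mean([xk + uk for xk, uk in zip(x, u)], axis=0)
    for k in range(n_workers):
        u[k] += x[k] - z                       # dual ascent step

print(np.linalg.norm(z - beta_true))           # consensus fit ≈ truth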



Journal

Journal: Journal of the Royal Statistical Society Series B: Statistical Methodology

Year: 2023

ISSN: 1467-9868, 1369-7412

DOI: https://doi.org/10.1093/jrsssb/qkad059