Distributed forward-backward methods for ring networks

Authors

Abstract

In this work, we propose and analyse forward-backward-type algorithms for finding a zero of the sum of finitely many monotone operators, which are not based on a reduction to a two-operator inclusion in the product space. Each iteration of the studied methods requires one resolvent evaluation per set-valued operator and one forward evaluation per cocoercive operator. Unlike existing methods, the structure of the proposed algorithms makes them suitable for distributed, decentralised implementation in ring networks without needing a global summation to enforce consensus between nodes.
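
The abstract does not spell out the update rule, but the ingredients it names (one resolvent step per set-valued operator, one forward step per cocoercive operator) are exactly those of classical two-operator forward-backward splitting. The following is a minimal Python sketch of that classical iteration, for orientation only; the concrete instance (A the subdifferential of lam*||.||_1, whose resolvent is soft-thresholding, and B the gradient of 0.5*||x - b||^2, which is 1-cocoercive) and all names are illustrative assumptions, not the paper's ring-network method.

import numpy as np

def soft_threshold(x, t):
    # resolvent J_{tA} of A = subdifferential of ||.||_1, i.e. prox of t*||.||_1
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def forward_backward(b, lam=0.5, gamma=1.0, iters=200):
    # classical iteration x_{k+1} = J_{gamma A}(x_k - gamma * B(x_k)),
    # with B(x) = x - b; gamma must lie in (0, 2/L) for an L-Lipschitz B
    x = np.zeros_like(b)
    for _ in range(iters):
        forward = x - gamma * (x - b)          # forward (cocoercive) step
        x = soft_threshold(forward, gamma * lam)  # backward (resolvent) step
    return x

print(forward_backward(np.array([2.0, -0.3, 1.0])))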


Similar references

Synthetic Gradient Methods with Virtual Forward-backward Networks

The concept of synthetic gradient introduced by Jaderberg et al. (2016) provides an avant-garde framework for asynchronous learning of neural networks. Their model, however, has a weakness in its construction, because the structure of their synthetic gradient has little relation to the objective function of the target task. In this paper we introduce virtual forward-backward networks (VFBN). VFB...
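
As a point of orientation for the synthetic-gradient idea (not the VFBN construction, which the snippet cuts off): a small auxiliary module predicts the gradient of the loss with respect to a layer's activations, so the layer can update without waiting for the true backward pass. A toy numpy sketch, with a made-up linear module and a made-up "true" gradient as its regression target:

import numpy as np

rng = np.random.default_rng(0)
h = rng.standard_normal(8)     # a layer's activations
M = np.zeros((8, 8))           # toy linear synthetic-gradient module

for _ in range(50):
    predicted = M @ h          # synthetic gradient: available immediately
    true_grad = 2.0 * h        # the delayed true gradient (toy stand-in)
    # regress the module toward the true gradient once it arrives
    M += 0.05 * np.outer(true_grad - predicted, h)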


Hierarchical Neural Networks with Forward-Backward Training

A forward-backward training algorithm for parallel, self-organizing hierarchical neural networks (PSHNN's) is described. Using linear algebra, it is shown that the forward-backward training of an n-stage PSHNN until convergence is equivalent to the pseudo-inverse solution for a single, total network designed in the least-squares sense with the total input vector consisting of the actual input vec...
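
The snippet's key claim is that converged forward-backward training coincides with the pseudo-inverse least-squares solution. For reference, here is a minimal illustration of that pseudo-inverse solution itself (a generic numpy sketch with made-up data, not the PSHNN algorithm):

import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((20, 5))   # rows: total input vectors
Y = X @ np.arange(1.0, 6.0)        # target outputs
W = np.linalg.pinv(X) @ Y          # pseudo-inverse least-squares weights
# W minimises ||X @ W - Y||_2, matching the "designed in the
# least-squares sense" characterisation in the snippet.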


Forward Backward Similarity Search in Knowledge Networks

Similarity search is a fundamental problem in social and knowledge networks like GitHub, DBLP, Wikipedia, etc. Existing network similarity measures are limited because they only consider similarity from the perspective of the query node. However, due to the complicated topology of real-world networks, ignoring the preferences of target nodes often results in odd or unintuitive performance. In t...


Forward-backward retraining of recurrent neural networks

This paper describes the training of a recurrent neural network as the letter posterior probability estimator for a hidden-Markov-model-based, off-line handwriting recognition system. The network estimates posterior distributions for each of a series of frames representing sections of a handwritten word. The supervised training algorithm, backpropagation through time, requires target outputs to be pr...


Forward-backward Truncated Newton Methods for Convex Composite Optimization

This paper proposes two proximal Newton-CG methods for convex nonsmooth optimization problems in composite form. The algorithms are based on a reformulation of the original nonsmooth problem as the unconstrained minimization of a continuously differentiable function, namely the forward-backward envelope (FBE). The first algorithm is based on a standard line search strategy, whereas the second...
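
The snippet is cut off just as the forward-backward envelope is introduced. For reference, the FBE of a composite function f + g (f smooth, g nonsmooth and proximable), as standardly defined in the literature, is, in LaTeX:

\varphi_\gamma(x) = f(x) - \frac{\gamma}{2}\,\|\nabla f(x)\|^2 + g^\gamma\!\big(x - \gamma \nabla f(x)\big),
\qquad
g^\gamma(z) = \min_{u}\Big\{\, g(u) + \tfrac{1}{2\gamma}\,\|u - z\|^2 \Big\},

where g^\gamma is the Moreau envelope of g. The FBE is continuously differentiable when f is twice differentiable, which is what makes Newton-type methods applicable to the nonsmooth problem.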



Journal

Journal title: Computational Optimization and Applications

Year: 2022

ISSN: 0926-6003, 1573-2894

DOI: https://doi.org/10.1007/s10589-022-00400-z