Triggered Gradient Tracking for asynchronous distributed optimization
Authors
Abstract
This paper proposes Asynchronous Triggered Gradient Tracking, i.e., a distributed optimization algorithm to solve consensus optimization over networks with asynchronous communication. As a building block, we devise the continuous-time counterpart of the recently proposed (discrete-time) gradient tracking, called Continuous Gradient Tracking. By using a Lyapunov approach, we prove exponential stability of the equilibrium corresponding to the agents' estimates being consensual to the optimal solution, for arbitrary initialization of the local estimates. Then, we propose two triggered versions of the algorithm. In the first one, the agents continuously integrate their dynamics and exchange their current variables with neighbors in a synchronous way. In the second, totally asynchronous scheme, each agent sends its current variables based on a triggering condition that is locally verifiable. The protocol preserves the linear convergence of the algorithm and avoids the Zeno behavior, i.e., an infinite number of triggering events in a finite interval of time is excluded. Using the analysis of the first algorithm as a preparatory result, we show that convergence to the optimal point holds for both algorithms for any estimate initialization. Finally, numerical simulations validate the effectiveness of the proposed methods on a data analytics problem, also showing improved performance in terms of inter-agent communication.
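Since the abstract is compact, a minimal numerical sketch may help fix ideas. The code below runs the discrete-time gradient tracking building block over a ring graph with a naive event-triggered broadcast rule: agents mix their own fresh state with neighbors' last broadcast copies and re-broadcast only when their estimate has drifted past a threshold. The threshold rule, the step size `alpha`, and the tolerance `eps` are illustrative assumptions, not the paper's exact triggering condition or continuous-time dynamics.

```python
import numpy as np

# Hedged sketch (not the paper's exact scheme): gradient tracking on a ring
# graph with a simple drift-threshold trigger for re-broadcasting states.

rng = np.random.default_rng(0)
n, d = 5, 3                                   # number of agents, dimension

# Strongly convex local costs f_i(x) = 0.5 x^T A_i x + b_i^T x
A = [np.diag(rng.uniform(1.0, 2.0, d)) for _ in range(n)]
b = [rng.normal(size=d) for _ in range(n)]
grad = lambda i, z: A[i] @ z + b[i]

# Doubly stochastic weights on a ring (each agent talks to two neighbors)
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 0.25

alpha, eps = 0.05, 1e-4                       # step size, trigger threshold
x = rng.normal(size=(n, d))                   # arbitrary initialization
s = np.array([grad(i, x[i]) for i in range(n)])   # gradient trackers
x_hat, s_hat = x.copy(), s.copy()             # last broadcast values

for _ in range(3000):
    # Triggering rule (illustrative): broadcast only if the state drifted
    for i in range(n):
        if np.linalg.norm(x[i] - x_hat[i]) > eps:
            x_hat[i], s_hat[i] = x[i].copy(), s[i].copy()
    x_new, s_new = np.zeros_like(x), np.zeros_like(s)
    for i in range(n):
        mix_x = W[i, i] * x[i] + sum(W[i, j] * x_hat[j] for j in range(n) if j != i)
        mix_s = W[i, i] * s[i] + sum(W[i, j] * s_hat[j] for j in range(n) if j != i)
        x_new[i] = mix_x - alpha * s[i]
        # Tracker update: consensus on s plus the local gradient innovation
        s_new[i] = mix_s + grad(i, x_new[i]) - grad(i, x[i])
    x, s = x_new, s_new

x_star = -np.linalg.solve(sum(A), sum(b))     # centralized optimum
print("max distance to optimum:", np.linalg.norm(x - x_star, axis=1).max())
```

Shrinking `eps` recovers ever more frequent communication; the trade-off exposed here is between the number of broadcast events and the size of the final neighborhood of the optimum, whereas the paper's triggered protocols are designed so that linear convergence is preserved exactly.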
Similar resources
Asynchronous Distributed Semi-Stochastic Gradient Optimization
With the recent proliferation of large-scale learning problems, there has been a lot of interest in distributed machine learning algorithms, particularly those based on stochastic gradient descent (SGD) and its variants. However, existing algorithms either suffer from slow convergence due to the inherent variance of stochastic gradients, or have a fast linear convergence rate but at t...
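For context, the "semi-stochastic" gradient in question is, in the SVRG sense, a stochastic gradient corrected by a periodically recomputed full gradient, which removes the variance floor that slows plain SGD. The sketch below is single-machine and uses a least-squares loss for brevity; variable names and hyperparameters are assumptions, not the cited paper's.

```python
import numpy as np

# Illustrative single-machine sketch of a semi-stochastic (SVRG-style,
# variance-reduced) gradient; the cited paper studies an asynchronous
# distributed variant of this idea.

rng = np.random.default_rng(1)
N, d = 500, 20
X = rng.normal(size=(N, d))
y = X @ rng.normal(size=d)

grad_i = lambda w, i: X[i] * (X[i] @ w - y[i])   # per-sample gradient
full_grad = lambda w: X.T @ (X @ w - y) / N      # full-batch gradient

w, lr = np.zeros(d), 2e-3
for epoch in range(20):
    w_snap = w.copy()                 # snapshot of the iterate
    mu = full_grad(w_snap)            # full gradient at the snapshot
    for _ in range(N):
        i = rng.integers(N)
        # Semi-stochastic gradient: unbiased, variance vanishes at the optimum
        g = grad_i(w, i) - grad_i(w_snap, i) + mu
        w -= lr * g
print("loss:", 0.5 * np.mean((X @ w - y) ** 2))
```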
An Asynchronous Distributed Proximal Gradient Method for Composite Convex Optimization
We propose a distributed first-order augmented Lagrangian (DFAL) algorithm to minimize the sum of composite convex functions, where each term in the sum is a private cost function belonging to a node, and only nodes connected by an edge can directly communicate with each other. This optimization model abstracts a number of applications in distributed sensing and machine learning. We show that a...
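For reference, the composite model described here can be written generically as below; the symbols ($f_i$, $g_i$, $N$, $\lambda_i$) are placeholder notation rather than the paper's own.

```latex
% Generic composite consensus model (placeholder notation):
% node i privately holds a smooth convex f_i and a nonsmooth g_i.
\min_{x \in \mathbb{R}^d} \; \sum_{i=1}^{N} \big( f_i(x) + g_i(x) \big),
\qquad \text{e.g. } g_i(x) = \lambda_i \, \|x\|_1 .
```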
An Asynchronous Distributed Proximal Gradient Method for Composite Convex Optimization
Since $\bar{x}^*_i = \bar{x}_i$ when $\|\nabla_{x_i} f(\bar{x})\|_2 \le \lambda B_i$, it follows that $\bar{x}^*_i = \bar{x}_i$ if and only if $\|\nabla_{x_i} f(\bar{x})\|_2 \le \lambda B_i$. Hence, $h_i(\bar{x}^*_i) = 0$. Case 2: Suppose that $i \in \mathcal{I}^c := \mathcal{N} \setminus \mathcal{I}$, i.e., $\|\nabla_{x_i} f(\bar{x})\|_2 > \lambda B_i$. In this case, $\bar{x}^*_i \ne \bar{x}_i$. From the first-order optimality condition, we have $\nabla_{x_i} f(\bar{x}) + L_i(\bar{x}^*_i - \bar{x}_i) + \lambda B_i \frac{\bar{x}^*_i - \bar{x}_i}{\|\bar{x}^*_i - \bar{x}_i\|_2} = 0$. Let $s_i := \frac{\bar{x}^*_i - \bar{x}_i}{\|\bar{x}^*_i - \bar{x}_i\|_2}$ and $t_i := \|\bar{x}^*_i - \bar{x}_i\|_2$; then $s_i = \frac{-\nabla_{x_i} f(\bar{x})}{L_i t_i + \lambda B_i}$. Since $\|s_i\|_2 = 1$, i...
Asynchronous Parallel Stochastic Gradient for Nonconvex Optimization
Asynchronous parallel implementations of stochastic gradient (SG) have been broadly used in training deep neural networks and have achieved many successes in practice recently. However, existing theories cannot explain their convergence and speedup properties, mainly due to the nonconvexity of most deep learning formulations and the asynchronous parallel mechanism. To fill the gaps in theory and provi...
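The asynchronous parallel mechanism mentioned above can be mimicked serially by applying each gradient at a stale copy of the parameters, as in the sketch below. The bounded-delay model with staleness `tau` and the convex least-squares loss are simplifying assumptions (the cited paper addresses nonconvex objectives).

```python
import numpy as np

# Hedged serial simulation of asynchronous parallel SGD: each update uses a
# gradient computed at a stale iterate, mimicking unsynchronized workers.

rng = np.random.default_rng(2)
N, d = 200, 10
X = rng.normal(size=(N, d))
y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=N)

def minibatch_grad(w, idx):
    xb, yb = X[idx], y[idx]
    return xb.T @ (xb @ w - yb) / len(idx)

tau, lr = 4, 0.01                     # maximum staleness, step size
w = np.zeros(d)
history = [w.copy()]                  # past iterates; stale reads come from here

for _ in range(3000):
    delay = int(rng.integers(0, tau + 1))
    stale = history[max(0, len(history) - 1 - delay)]
    idx = rng.integers(0, N, size=8)
    w = w - lr * minibatch_grad(stale, idx)   # gradient at a stale iterate
    history.append(w.copy())

print("final loss:", 0.5 * np.mean((X @ w - y) ** 2))
```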
Asynchronous Forward-Bounding for Distributed Constraints Optimization
A new search algorithm for solving distributed constraint optimization problems (DisCOPs) is presented. Agents assign variables sequentially and propagate their assignments asynchronously. The asynchronous forward-bounding algorithm (AFB) is a distributed optimization search algorithm that keeps one consistent partial assignment at all times. Forward bounding propagates the bounds on the cost o...
Journal
Journal title: Automatica
Year: 2023
ISSN: 0005-1098, 1873-2836
DOI: https://doi.org/10.1016/j.automatica.2022.110726