A Simple Parallel Algorithm with an O(1/t) Convergence Rate for General Convex Programs

Authors

  • Hao Yu
  • Michael J. Neely
Abstract

This paper considers convex programs with a general (possibly non-differentiable) convex objective function and Lipschitz continuous convex inequality constraint functions. A simple algorithm is developed that achieves an O(1/t) convergence rate. Like the classical dual subgradient algorithm and the ADMM algorithm, the new algorithm admits a parallel implementation when the objective and constraint functions are separable. However, the new algorithm has a faster O(1/t) convergence rate than the best known O(1/√t) rate of the dual subgradient algorithm with primal averaging. Further, it can solve convex programs with nonlinear constraints, which the ADMM algorithm cannot handle. The new algorithm is applied to a multipath network utility maximization problem and yields a decentralized flow control algorithm with the fast O(1/t) convergence rate.
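For context, here is a minimal sketch of the baseline the abstract compares against: the classical dual subgradient method with primal averaging, shown on a toy two-dimensional problem. The problem data, step sizes, and iteration count are illustrative assumptions, and this is not the paper's new algorithm.

```python
import numpy as np

# Toy instance (illustrative assumptions): minimize f(x) = ||x - c||^2
# subject to g(x) = a^T x - b <= 0. The loop below is the classical dual
# subgradient method with primal averaging, i.e. the O(1/sqrt(t)) baseline
# the paper improves on -- not the paper's algorithm.
c = np.array([2.0, 1.0])
a = np.array([1.0, 1.0])
b = 1.0

def primal_min(lam):
    # argmin_x f(x) + lam * g(x) has a closed form for this quadratic f:
    # 2(x - c) + lam * a = 0  =>  x = c - lam * a / 2
    return c - lam * a / 2.0

lam = 0.0                      # dual variable, kept nonnegative
x_avg = np.zeros_like(c)       # running average of primal iterates
T = 5000
for t in range(1, T + 1):
    x = primal_min(lam)
    lam = max(0.0, lam + (1.0 / np.sqrt(t)) * (a @ x - b))  # dual subgradient step
    x_avg += (x - x_avg) / t   # primal averaging

print("averaged primal point:", x_avg, "constraint value:", a @ x_avg - b)
```

According to the abstract, the new algorithm retains this kind of simple, parallelizable structure while improving the guarantee from O(1/√t) to O(1/t); its exact update rule is given in the full text.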

Related articles

Global convergence of an inexact interior-point method for convex quadratic symmetric cone programming

In this paper, we propose a feasible interior-point method for convex quadratic programming over symmetric cones. The proposed algorithm relaxes the accuracy requirements in the solution of the Newton equation system by using an inexact Newton direction. Furthermore, we obtain an acceptable level of error in the inexact algorithm on convex quadratic symmetric cone programmin...
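Although the snippet above is truncated, the inexactness idea is generic: solve the Newton system H d = −g only approximately, for example by stopping a conjugate gradient solver at a loose tolerance. The sketch below illustrates that pattern on an unconstrained convex quadratic; it is not the paper's interior-point method, and all problem data are assumptions.

```python
import numpy as np

def cg(H, rhs, rtol=1e-1, max_iter=50):
    """Conjugate gradient, stopped early so the returned direction is inexact."""
    d = np.zeros_like(rhs)
    r = rhs - H @ d
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        if np.sqrt(rs) <= rtol * np.linalg.norm(rhs):
            break
        Hp = H @ p
        alpha = rs / (p @ Hp)
        d += alpha * p
        r -= alpha * Hp
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return d

# Toy smooth convex objective (an assumption, not the paper's cone setting):
# f(x) = 0.5 x^T A x - b^T x with A positive definite.
rng = np.random.default_rng(0)
M = rng.standard_normal((20, 20))
A = M @ M.T + 20 * np.eye(20)
b = rng.standard_normal(20)

x = np.zeros(20)
for k in range(20):
    grad = A @ x - b
    d = cg(A, -grad, rtol=1e-1)   # inexact Newton direction
    x += d                        # unit step suffices for a quadratic model
print("residual norm:", np.linalg.norm(A @ x - b))
```

Each Newton step here shrinks the gradient norm by roughly the CG tolerance factor, which is the sense in which "an acceptable level of error" in the inner solve still yields global progress.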

Networked Parallel Algorithms for Robust Convex Optimization via the Scenario Approach

This paper proposes a parallel computing framework to solve robust convex optimization (RCO) problems in a distributed fashion when the constraints are affected by nonlinear uncertainty. To this end, we adopt a scenario approach by randomly sampling the uncertainty set. To facilitate the computational task, instead of using a single centralized processor to obtain a “global solution” of the scenario problem (SP), ...
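The sampling step of the scenario approach is easy to illustrate: draw i.i.d. samples of the uncertain parameter and impose one convex constraint per sample. A minimal single-processor sketch on a toy robust linear program follows; the data and uncertainty model are assumptions, and the paper's networked, distributed solution of the resulting scenario problem is not shown.

```python
import numpy as np
from scipy.optimize import linprog

# Robust toy LP (illustrative assumptions): minimize c^T x subject to
# a(delta)^T x <= 1 for all delta in the uncertainty set, where
# a(delta) = a0 + delta. The scenario approach replaces the robust
# constraint with N sampled constraints, one per drawn delta.
rng = np.random.default_rng(1)
c = np.array([-1.0, -1.0])                   # i.e. maximize x1 + x2
a0 = np.array([1.0, 2.0])
N = 200                                      # number of scenarios

deltas = 0.3 * rng.standard_normal((N, 2))   # sampled uncertainty
A_ub = a0 + deltas                           # one sampled constraint row per scenario
b_ub = np.ones(N)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print("scenario solution:", res.x)
```

The cited paper's contribution lies in splitting these sampled constraints across networked processors and coordinating them toward a common solution, rather than in the sampling step shown here.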

Parallel Multi-Block ADMM with o(1/k) Convergence

This paper introduces a parallel and distributed extension to the alternating direction method of multipliers (ADMM) for solving the convex problem: minimize f_1(x_1) + ··· + f_N(x_N) subject to A_1 x_1 + ··· + A_N x_N = c, with x_i ∈ X_i for i = 1, …, N. The algorithm decomposes the original problem into N smaller subproblems and solves them in parallel at each iteration. This Jacobian-type algorithm is we...
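A minimal sketch of a Jacobi-type (fully parallel) ADMM iteration for this model problem is below, with quadratic f_i so each block subproblem has a closed form. Plain Jacobi ADMM can diverge for N ≥ 3, so the sketch adds a simple proximal term and a damped dual step as stand-ins for the correction terms the paper analyzes; all problem data and parameters are assumptions.

```python
import numpy as np

# Toy separable problem (illustrative assumptions):
#   minimize sum_i 0.5*||x_i - d_i||^2   subject to   sum_i A_i x_i = c.
# Jacobi-type ADMM: every block update uses only the previous iterates
# x_j^k, so all N updates can run in parallel. The proximal weight tau
# and damped dual step gamma stand in for the paper's correction terms;
# plain Jacobi ADMM (tau = 0, gamma = 1) can diverge for N >= 3.
rng = np.random.default_rng(2)
N, m, n = 3, 4, 5
A = [rng.standard_normal((m, n)) for _ in range(N)]
A = [Ai / np.linalg.norm(Ai, 2) for Ai in A]   # normalize to unit spectral norm
d = [rng.standard_normal(n) for _ in range(N)]
c = rng.standard_normal(m)

rho, tau, gamma = 1.0, 2.0, 0.5
x = [np.zeros(n) for _ in range(N)]
lam = np.zeros(m)

for k in range(2000):
    Ax = [A[i] @ x[i] for i in range(N)]
    x_new = []
    for i in range(N):  # each update depends only on x^k -> parallelizable
        r = sum(Ax[j] for j in range(N) if j != i) - c + lam / rho
        lhs = (1.0 + tau) * np.eye(n) + rho * A[i].T @ A[i]
        rhs = d[i] + tau * x[i] - rho * A[i].T @ r
        x_new.append(np.linalg.solve(lhs, rhs))
    x = x_new
    lam += gamma * rho * (sum(A[i] @ x[i] for i in range(N)) - c)

print("feasibility residual:",
      np.linalg.norm(sum(A[i] @ x[i] for i in range(N)) - c))
```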

Accelerated Variance Reduced Stochastic ADMM

Recently, many variance reduced stochastic alternating direction method of multipliers (ADMM) methods (e.g. SAG-ADMM, SDCA-ADMM and SVRG-ADMM) have made exciting progress such as linear convergence rates for strongly convex problems. However, the best known convergence rate for general convex problems is O(1/T) as opposed to O(1/T²) of accelerated batch algorithms, where T is the number of iter...
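The variance-reduction ingredient these methods share can be shown in isolation. The sketch below implements an SVRG-style gradient estimator on a toy least-squares problem: a stochastic gradient is corrected by the same component's gradient at a snapshot plus the full gradient at that snapshot, so the estimator stays unbiased while its variance shrinks near the snapshot. The ADMM splitting and the acceleration discussed in the paper are omitted, and the data are assumptions.

```python
import numpy as np

# Toy least-squares objective f(x) = (1/n) sum_i 0.5*(a_i^T x - b_i)^2.
# SVRG-style estimator: grad_i(x) - grad_i(x_snap) + full_grad(x_snap)
# is unbiased, and its variance shrinks as x approaches x_snap.
rng = np.random.default_rng(3)
n, dim = 500, 10
A = rng.standard_normal((n, dim))
b = rng.standard_normal(n)

def grad_i(x, i):
    return (A[i] @ x - b[i]) * A[i]

x = np.zeros(dim)
step = 0.05
for epoch in range(30):
    x_snap = x.copy()
    full_grad = A.T @ (A @ x_snap - b) / n   # full gradient at the snapshot
    for _ in range(n):
        i = rng.integers(n)
        g = grad_i(x, i) - grad_i(x_snap, i) + full_grad
        x -= step * g

print("objective:", 0.5 * np.mean((A @ x - b) ** 2))
```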

Stochastic gradient descent algorithms for strongly convex functions at O(1/T) convergence rates

With a weighting scheme proportional to t, a traditional stochastic gradient descent (SGD) algorithm achieves a high-probability convergence rate of O(κ/T) for strongly convex functions, instead of O(κ ln(T)/T). We also prove that an accelerated SGD algorithm achieves a rate of O(κ/T).
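The weighting scheme is simple to implement: keep a running average in which iterate x_t receives weight proportional to t. A minimal sketch on a toy strongly convex problem follows; the 1/(μt) step size and the noise model are standard choices assumed here, not details taken from the paper.

```python
import numpy as np

# Strongly convex toy objective f(x) = 0.5*||x - c||^2, observed through
# noisy gradients. Averaging with weights proportional to t is the scheme
# the abstract credits with the O(kappa/T) rate, versus O(kappa*ln(T)/T)
# for plain uniform averaging.
rng = np.random.default_rng(4)
c = np.array([3.0, -1.0])
mu = 1.0                       # strong convexity parameter

x = np.zeros(2)
x_wavg = np.zeros(2)           # t-weighted running average
wsum = 0.0
T = 100000
for t in range(1, T + 1):
    noisy_grad = mu * (x - c) + 0.5 * rng.standard_normal(2)
    x -= (1.0 / (mu * t)) * noisy_grad   # standard 1/(mu*t) step (assumed)
    wsum += t
    x_wavg += (t / wsum) * (x - x_wavg)  # incremental t-weighted average

print("t-weighted average:", x_wavg, "error:", np.linalg.norm(x_wavg - c))
```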

Journal:
  • SIAM Journal on Optimization

Volume 27, Issue —

Pages —

Publication date: 2017