Mini-Batch Stochastic ADMMs for Nonconvex Nonsmooth Optimization

Authors

  • Feihu Huang
  • Songcan Chen
Abstract

In this paper, we study mini-batch stochastic ADMMs (alternating direction methods of multipliers) for nonconvex nonsmooth optimization. We prove that, given an appropriate mini-batch size, the mini-batch stochastic ADMM without a variance reduction (VR) technique is convergent and reaches a convergence rate of O(1/T) to a stationary point of the nonconvex problem, where T denotes the number of iterations. Moreover, we extend the mini-batch stochastic gradient method to both the nonconvex SVRG-ADMM and SAGA-ADMM proposed in our initial paper (Huang et al., 2016), and prove that these mini-batch stochastic ADMMs also reach the O(1/T) convergence rate without any condition on the mini-batch size. In particular, we provide a specific parameter selection for the step size η of the stochastic gradients and the penalty parameter ρ of the augmented Lagrangian function. Finally, experimental results demonstrate the effectiveness of our algorithms.
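
As a rough illustration of the kind of scheme being analyzed, the sketch below instantiates a mini-batch stochastic ADMM (without variance reduction) on the simple consensus problem min_x (1/n) Σ_i f_i(x) + λ‖y‖₁ subject to x − y = 0. The helper names, the update order, and the linearized x-step are assumptions made for illustration, not the paper's exact algorithm.

    import numpy as np

    def soft_threshold(v, tau):
        # Proximal operator of tau * ||.||_1 (soft-thresholding).
        return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

    def minibatch_stochastic_admm(grad_fi, n, dim, lam=0.1, eta=0.01, rho=1.0,
                                  batch_size=64, T=1000, seed=0):
        # Sketch of a mini-batch stochastic ADMM for
        #     min_x (1/n) sum_i f_i(x) + lam * ||y||_1   s.t.  x - y = 0,
        # where each f_i is smooth but possibly nonconvex.  `grad_fi(x, idx)` is
        # assumed to return the average gradient of f_i over the index set `idx`.
        rng = np.random.default_rng(seed)
        x = np.zeros(dim); y = np.zeros(dim); z = np.zeros(dim)  # primal, primal, dual
        for _ in range(T):
            idx = rng.choice(n, size=batch_size, replace=False)
            g = grad_fi(x, idx)                       # mini-batch stochastic gradient
            # x-step: minimize the linearized augmented Lagrangian
            #   g^T(x - x_t) + (1/(2*eta))||x - x_t||^2 + z^T(x - y) + (rho/2)||x - y||^2
            x = (x / eta - g - z + rho * y) / (1.0 / eta + rho)
            # y-step: exact proximal update for the nonsmooth convex term
            y = soft_threshold(x + z / rho, lam / rho)
            # dual ascent on the constraint x - y = 0
            z = z + rho * (x - y)
        return x, y

The x-step minimizes the linearized augmented Lagrangian plus the proximal term (1/(2η))‖x − x_t‖², which is why both the step size η and the penalty parameter ρ appear in its closed-form update.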

Similar Papers

Fast Stochastic Methods for Nonsmooth Nonconvex Optimization

We analyze stochastic algorithms for optimizing nonconvex, nonsmooth finite-sum problems, where the nonconvex part is smooth and the nonsmooth part is convex. Surprisingly, unlike the smooth case, our knowledge of this fundamental problem is very limited. For example, it is not known whether the proximal stochastic gradient method with constant minibatch converges to a stationary point. To tack...
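
For reference, the proximal stochastic gradient iteration these abstracts refer to has the following generic form; a minimal sketch, assuming `grad_fi(x, idx)` returns a mini-batch gradient of the smooth (possibly nonconvex) part and `prox_h(v, eta)` evaluates prox_{η·h}(v) for the convex nonsmooth part:

    import numpy as np

    def prox_sgd_step(x, grad_fi, prox_h, batch_idx, eta):
        # One mini-batch proximal stochastic gradient step for
        #     min_x (1/n) sum_i f_i(x) + h(x),
        # with f_i smooth (possibly nonconvex) and h convex but nonsmooth.
        g = grad_fi(x, batch_idx)          # stochastic gradient on the mini-batch
        return prox_h(x - eta * g, eta)    # forward step, then proximal (backward) step

Whether this iteration with a constant mini-batch size reaches a stationary point is precisely the open question the abstract mentions.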

Proximal Stochastic Methods for Nonsmooth Nonconvex Finite-Sum Optimization

We analyze stochastic algorithms for optimizing nonconvex, nonsmooth finite-sum problems, where the nonsmooth part is convex. Surprisingly, unlike the smooth case, our knowledge of this fundamental problem is very limited. For example, it is not known whether the proximal stochastic gradient method with constant minibatch converges to a stationary point. To tackle this issue, we develop fast st...

Stochastic Variance Reduction for Nonconvex Optimization

We study nonconvex finite-sum problems and analyze stochastic variance reduced gradient (SVRG) methods for them. SVRG and related methods have recently surged into prominence for convex optimization given their edge over stochastic gradient descent (SGD); but their theoretical analysis almost exclusively assumes convexity. In contrast, we prove non-asymptotic rates of convergence (to stationary...
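
A minimal sketch of the SVRG gradient estimator in the nonconvex finite-sum setting; the names `grad_fi` and `full_grad` are assumptions made for illustration:

    import numpy as np

    def svrg_epoch(x, grad_fi, full_grad, n, eta=0.01, m=100, batch_size=16, seed=0):
        # One outer epoch of SVRG for min_x (1/n) sum_i f_i(x) with nonconvex f_i.
        # `grad_fi(x, idx)` averages grad f_i over `idx`; `full_grad(x)` is exact.
        rng = np.random.default_rng(seed)
        x_snap = x.copy()
        mu = full_grad(x_snap)              # full gradient at the snapshot point
        for _ in range(m):
            idx = rng.choice(n, size=batch_size, replace=False)
            # variance-reduced estimator: unbiased, variance vanishes as x -> x_snap
            v = grad_fi(x, idx) - grad_fi(x_snap, idx) + mu
            x = x - eta * v
        return x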

Benson's algorithm for nonconvex multiobjective problems via nonsmooth Wolfe duality

In this paper, we propose an algorithm to obtain an approximation set of the (weakly) nondominated points of nonsmooth multiobjective optimization problems with equality and inequality constraints. We use an extension of the Wolfe duality to construct the separating hyperplane in Benson's outer algorithm for multiobjective programming problems with subdifferentiable functions. We also fo...
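
For orientation only: the standard scalar nonsmooth Wolfe dual of min f(x) subject to g_j(x) ≤ 0 and h_k(x) = 0, with subdifferentiable data, is usually written as below; the paper's multiobjective extension and its use inside Benson's outer algorithm differ in detail.

    % Standard scalar nonsmooth Wolfe dual (assumed textbook form; not the
    % paper's multiobjective construction):
    \begin{aligned}
    \max_{y,\,u,\,v}\quad & f(y) + \sum_j u_j\, g_j(y) + \sum_k v_k\, h_k(y)\\
    \text{s.t.}\quad & 0 \in \partial f(y) + \sum_j u_j\, \partial g_j(y) + \sum_k v_k\, \partial h_k(y),\quad u \ge 0.
    \end{aligned}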

Stochastic Recursive Gradient Algorithm for Nonconvex Optimization

In this paper, we study and analyze the mini-batch version of StochAstic Recursive grAdient algoritHm (SARAH), a method employing the stochastic recursive gradient, for solving empirical loss minimization for the case of nonconvex losses. We provide a sublinear convergence rate (to stationary points) for general nonconvex functions and a linear convergence rate for gradient dominated functions,...
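
A minimal sketch of the SARAH inner loop with mini-batches (helper names are assumptions); in contrast to SVRG, the estimator is updated recursively from the previous iterate rather than from a fixed snapshot:

    import numpy as np

    def sarah_inner_loop(x, grad_fi, full_grad, n, eta=0.01, m=100, batch_size=16, seed=0):
        # One inner loop of mini-batch SARAH for min_x (1/n) sum_i f_i(x).
        # `grad_fi(x, idx)` averages grad f_i over `idx`; `full_grad(x)` is exact.
        rng = np.random.default_rng(seed)
        v = full_grad(x)                    # v_0: exact gradient at the starting point
        x_prev = x.copy()
        x = x - eta * v
        for _ in range(1, m):
            idx = rng.choice(n, size=batch_size, replace=False)
            # recursive estimator: biased, but with low variance along the trajectory
            v = grad_fi(x, idx) - grad_fi(x_prev, idx) + v
            x_prev = x.copy()
            x = x - eta * v
        return x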

Journal:
  • CoRR

Volume: abs/1802.03284

Pages: -

Publication date: 2018