Supplementary Material: Geometric Descent Method for Convex Composite Minimization

Authors

  • Shixiang Chen
  • Shiqian Ma
  • Wei Liu
Abstract

We argue that the geometric intuition of GeoPG remains clear. As in GeoD, we construct two balls that contain x∗ and shrink by the same absolute amount. In GeoPG, since the smooth function f is assumed to be strongly convex, one ball containing x∗ arises naturally; because of the nonsmooth function h, this ball is built from the proximal gradient G_t rather than the gradient. To construct the other ball, GeoD performs an exact line search, whereas GeoPG finds the root of a newly constructed function φ̄, again owing to the presence of h. In short, GeoPG differs from GeoD in two ways: the gradient is replaced by the proximal gradient, and the exact line search is replaced by finding the root of φ̄; both changes result from the presence of the nonsmooth function h.
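To make these two changes concrete, the following Python sketch shows a proximal gradient mapping for the common case h(x) = λ‖x‖₁, together with a generic bisection root-finder of the kind one could use to locate the root of φ̄. The soft-thresholding choice of h, the bracketing interval, and the use of bisection are illustrative assumptions; the exact definition of φ̄ is given in the paper.

    import numpy as np

    def prox_l1(z, t, lam):
        # Proximal operator of h(x) = lam * ||x||_1 (soft-thresholding).
        return np.sign(z) * np.maximum(np.abs(z) - t * lam, 0.0)

    def prox_grad_mapping(x, grad_f, t, lam):
        # Proximal gradient mapping G_t(x) = (x - x_plus) / t, where
        # x_plus = prox_{t h}(x - t * grad_f(x)). G_t replaces the plain
        # gradient in the ball construction when h is present.
        x_plus = prox_l1(x - t * grad_f(x), t, lam)
        return (x - x_plus) / t, x_plus

    def find_root(phi, lo, hi, tol=1e-10, max_iter=200):
        # Bisection on a scalar function with a sign change on [lo, hi].
        # GeoPG's counterpart of GeoD's exact line search is finding the
        # root of phi_bar (defined in the paper); bisection is one
        # simple way to do so.
        f_lo = phi(lo)
        for _ in range(max_iter):
            mid = 0.5 * (lo + hi)
            if f_lo * phi(mid) <= 0.0:
                hi = mid
            else:
                lo, f_lo = mid, phi(mid)
            if hi - lo < tol:
                break
        return 0.5 * (lo + hi)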


Similar Articles

Geometric Descent Method for Convex Composite Minimization

In this paper, we extend the geometric descent method recently proposed by Bubeck, Lee and Singh [5] to solving nonsmooth and strongly convex composite problems. We prove that the resulting algorithm, GeoPG, converges with a linear rate (1 − 1/√κ) and thus achieves the optimal rate among first-order methods, where κ is the condition number of the problem. Numerical results on linear regression and ...
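As a quick sanity check on what the rate (1 − 1/√κ) means in practice, the snippet below computes how many iterations shrink the error by a factor ε; the values of κ and ε are arbitrary examples.

    import math

    def iters_to_accuracy(kappa, eps):
        # Smallest k with (1 - 1/sqrt(kappa))**k <= eps; roughly
        # sqrt(kappa) * log(1/eps), the optimal first-order complexity.
        rate = 1.0 - 1.0 / math.sqrt(kappa)
        return math.ceil(math.log(eps) / math.log(rate))

    print(iters_to_accuracy(kappa=100.0, eps=1e-6))  # 132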


UniVR: A Universal Variance Reduction Framework for Proximal Stochastic Gradient Method

We revisit an important class of composite stochastic minimization problems that often arise from empirical risk minimization settings, such as Lasso, Ridge Regression, and Logistic Regression. We present a new algorithm, UniVR, based on stochastic gradient descent with variance reduction. Our algorithm supports non-strongly convex objectives directly, and outperforms all of the state-of-the-art...
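The variance-reduction idea that UniVR builds on can be sketched as follows. This is a generic SVRG-style proximal loop under assumed oracles (component gradients grad_i and a prox operator), not the UniVR algorithm itself.

    import numpy as np

    def prox_svrg(x0, grad_i, prox, n, eta, n_epochs, m, seed=0):
        # Generic variance-reduced proximal stochastic gradient loop.
        # grad_i(x, i): gradient of the i-th loss component at x.
        # prox(z, eta): proximal operator of the regularizer.
        x = x0.copy()
        rng = np.random.default_rng(seed)
        for _ in range(n_epochs):
            x_snap = x.copy()  # snapshot point for this epoch
            full_grad = sum(grad_i(x_snap, i) for i in range(n)) / n
            for _ in range(m):
                i = int(rng.integers(n))
                # Unbiased gradient estimate whose variance shrinks as
                # x and x_snap both approach the optimum.
                g = grad_i(x, i) - grad_i(x_snap, i) + full_grad
                x = prox(x - eta * g, eta)
        return x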


An Accelerated Proximal Coordinate Gradient Method

We develop an accelerated randomized proximal coordinate gradient (APCG) method for solving a broad class of composite convex optimization problems. In particular, our method achieves faster linear convergence rates for minimizing strongly convex functions than existing randomized proximal coordinate gradient methods. We show how to apply the APCG method to solve the dual of the regularized em...
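For intuition, the building block that APCG accelerates is the randomized proximal coordinate update below; the accelerated method adds Nesterov-style extrapolation sequences on top of it (see the paper for the exact scheme). The oracle names here are illustrative assumptions.

    import numpy as np

    def rpcg_step(x, j, partial_grad, L_j, prox_coord):
        # One (non-accelerated) randomized proximal coordinate step:
        # move along coordinate j with step 1/L_j, then apply the prox
        # of the separable regularizer restricted to that coordinate.
        # partial_grad(x, j): j-th partial derivative of the smooth part;
        # L_j: its coordinate-wise Lipschitz constant.
        x = x.copy()
        x[j] = prox_coord(x[j] - partial_grad(x, j) / L_j, 1.0 / L_j)
        return x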


Randomized Block Coordinate Descent for Online and Stochastic Optimization

Two types of low cost-per-iteration gradient descent methods have been extensively studied in parallel. One is online or stochastic gradient descent (OGD/SGD), and the other is randomized block coordinate descent (RBCD). In this paper, we combine the two types of methods and propose online randomized block coordinate descent (ORBCD). At each iteration, ORBCD only computes the partial gradie...
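A minimal sketch of the ORBCD iteration just described: each step draws one randomly chosen coordinate block and updates only that block with the partial gradient of the sample arriving at that time. The oracle signature is an assumption for illustration.

    import numpy as np

    def orbcd(x0, partial_grad, blocks, eta, T, seed=0):
        # partial_grad(x, t, block): gradient of the loss of the sample
        # arriving at time t, restricted to the coordinates in `block`.
        # blocks: list of index arrays partitioning the coordinates.
        x = x0.copy()
        rng = np.random.default_rng(seed)
        for t in range(T):
            block = blocks[int(rng.integers(len(blocks)))]
            x[block] = x[block] - eta * partial_grad(x, t, block)
        return x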


Randomized block proximal damped Newton method for composite self-concordant minimization

In this paper we consider the composite self-concordant (CSC) minimization problem, which minimizes the sum of a self-concordant function f and a (possibly nonsmooth) proper closed convex function g. The CSC minimization is the cornerstone of the path-following interior point methods for solving a broad class of convex optimization problems. It has also found numerous applications in machine le...
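For reference, the classical damped Newton step for a self-concordant f is sketched below. The paper's method applies this idea per randomized block and adds a proximal term for the nonsmooth g; this sketch covers only the smooth, full-dimensional case.

    import numpy as np

    def damped_newton_step(x, grad, hess):
        # Damped Newton step: x+ = x - d / (1 + lam), where d is the
        # Newton direction and lam = sqrt(g' H^{-1} g) is the Newton
        # decrement; the damping guarantees progress when f is
        # self-concordant.
        g, H = grad(x), hess(x)
        d = np.linalg.solve(H, g)
        lam = float(np.sqrt(g @ d))
        return x - d / (1.0 + lam)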



Publication date: 2017