Hedge Algorithm and Subgradient Methods

Authors

  • Michel Baes
  • Michael Buergisser
Abstract

We show that the Hedge Algorithm, a method widely used in Machine Learning, can be interpreted as a particular subgradient algorithm for minimizing a well-chosen convex function, namely a Mirror Descent Scheme. Using this reformulation, we can slightly improve the worst-case convergence guarantees of the Hedge Algorithm. Recently, Nesterov has introduced the class of Primal-Dual Subgradient Algorithms for convex optimization, which generalizes Mirror Descent Schemes. Using Nesterov's insights, we derive new update rules for the Hedge Algorithm. Our numerical experiments show that these new update rules perform consistently better than the standard Hedge Algorithm.
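
To make the reformulation concrete, here is a minimal sketch of the standard Hedge update; the multiplicative reweighting is precisely an entropic mirror descent step on the probability simplex, which is the interpretation the paper builds on. The loss range [0, 1] and the fixed learning rate eta are illustrative assumptions, not the improved update rules derived in the paper.

```python
import numpy as np

def hedge(losses, eta):
    """Standard Hedge / multiplicative-weights method over T rounds.

    losses: (T, n) array with losses[t, i] in [0, 1] for expert i at round t.
    eta:    learning rate; a common textbook choice is sqrt(2 * log(n) / T).
    Returns the sequence of weight vectors played at each round.
    """
    T, n = losses.shape
    w = np.ones(n) / n                        # uniform prior over the n experts
    history = []
    for t in range(T):
        history.append(w.copy())
        w = w * np.exp(-eta * losses[t])      # entropic mirror descent step
        w /= w.sum()                          # renormalize onto the simplex
    return np.array(history)
```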

Similar resources

A new Levenberg-Marquardt approach based on Conjugate gradient structure for solving absolute value equations

In this paper, we present a new approach for solving the absolute value equation (AVE) which uses the Levenberg-Marquardt method with a conjugate subgradient structure. In conjugate subgradient methods, the new direction is obtained by combining the steepest descent direction and the previous direction, which may not lead to good numerical results. Therefore, we replace the steepest descent dir...
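
Since the abstract above is truncated, the following is not the authors' method but only a hedged sketch of a plain Levenberg-Marquardt iteration for the AVE Ax - |x| = b, using the standard generalized Jacobian A - diag(sign(x)); the damping parameter mu and the stopping rule are illustrative assumptions.

```python
import numpy as np

def lm_ave(A, b, x0, mu=1e-3, tol=1e-10, max_iter=100):
    """Plain Levenberg-Marquardt iteration for the absolute value equation
    F(x) = A x - |x| - b = 0 (a sketch, not the paper's modified method).

    Uses the generalized Jacobian J(x) = A - diag(sign(x)), where sign(0) = 0
    is a valid choice of generalized derivative for |.| at zero.
    """
    x = x0.astype(float)
    for _ in range(max_iter):
        F = A @ x - np.abs(x) - b
        if np.linalg.norm(F) < tol:
            break
        J = A - np.diag(np.sign(x))
        # Damped Gauss-Newton step: solve (J^T J + mu I) d = -J^T F
        d = np.linalg.solve(J.T @ J + mu * np.eye(len(x)), -J.T @ F)
        x = x + d
    return x
```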

Lecture 2: Subgradient Methods

In this lecture, we discuss first-order methods for the minimization of convex functions. We focus almost exclusively on subgradient-based methods, which are essentially universally applicable for convex optimization problems, because they rely very little on the structure of the problem being solved. This leads to effective but slow algorithms in classical optimization problems; however, in la...
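
As a generic illustration of the methods discussed in the lecture, here is a minimal projected subgradient sketch; the diminishing step size 1/sqrt(k+1) and the toy objective in the usage lines are illustrative assumptions.

```python
import numpy as np

def projected_subgradient(f, subgrad, project, x0, num_steps):
    """Generic projected subgradient method for a convex function f.

    subgrad(x) returns any subgradient of f at x; project(x) is the Euclidean
    projection onto the feasible set. Returns the best iterate seen, since
    subgradient steps do not decrease f monotonically.
    """
    x, best = x0, x0
    for k in range(num_steps):
        g = subgrad(x)
        x = project(x - g / np.sqrt(k + 1.0))  # diminishing step size
        if f(x) < f(best):
            best = x
    return best

# Usage: minimize f(x) = |x1| + |x2 - 1| over the unit Euclidean ball.
f = lambda x: abs(x[0]) + abs(x[1] - 1.0)
subgrad = lambda x: np.array([np.sign(x[0]), np.sign(x[1] - 1.0)])
project = lambda x: x / max(1.0, np.linalg.norm(x))
x_best = projected_subgradient(f, subgrad, project, np.array([1.0, -1.0]), 500)
```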

Subgradient Methods for Saddle-Point Problems

We consider computing the saddle points of a convex-concave function using subgradient methods. The existing literature on finding saddle points has mainly focused on establishing convergence properties of the generated iterates under some restrictive assumptions. In this paper, we propose a subgradient algorithm for generating approximate saddle points and provide per-iteration convergence rat...
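
One common scheme of this kind, sketched below under an assumed constant step size, performs simultaneous subgradient descent in x and ascent in y and returns the averaged iterates, which is the usual way per-iteration rate guarantees for approximate saddle points are stated.

```python
import numpy as np

def saddle_subgradient(gx, gy, proj_x, proj_y, x0, y0, num_steps, a=0.05):
    """Simultaneous subgradient descent/ascent for a convex-concave L(x, y).

    gx(x, y): a subgradient of L(., y) at x; gy(x, y): a supergradient of
    L(x, .) at y. A constant step size a is assumed for simplicity; the
    ergodic averages are what the rate analysis typically applies to.
    """
    x, y = x0, y0
    x_sum = np.zeros_like(x0, dtype=float)
    y_sum = np.zeros_like(y0, dtype=float)
    for _ in range(num_steps):
        x_new = proj_x(x - a * gx(x, y))   # descend in the convex variable
        y_new = proj_y(y + a * gy(x, y))   # ascend in the concave variable
        x, y = x_new, y_new
        x_sum += x
        y_sum += y
    return x_sum / num_steps, y_sum / num_steps  # averaged iterates
```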

Radial Subgradient Descent

We present a subgradient method for minimizing non-smooth, non-Lipschitz convex optimization problems. The only structure assumed is that a strictly feasible point is known. We extend the work of Renegar [1] by taking a different perspective, leading to an algorithm which is conceptually more natural, has notably improved convergence rates, and for which the analysis is surprisingly simple. At ...

Approximate Primal Solutions and Rate Analysis for Dual Subgradient Methods

We study primal solutions obtained as a by-product of subgradient methods when solving the Lagrangian dual of a primal convex constrained optimization problem (possibly nonsmooth). The existing literature on the use of subgradient methods for generating primal optimal solutions is limited to the methods producing such solutions only asymptotically (i.e., in the limit as the number of subgradien...
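
A hedged sketch of the general idea: run subgradient ascent on the dual and average the per-iteration Lagrangian minimizers to obtain an approximate primal solution. The constant step size and the oracle interface below are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def dual_subgradient_primal_avg(argmin_lagrangian, g, lam0, num_steps, a=0.1):
    """Dual subgradient ascent for max_{lam >= 0} q(lam), where
    q(lam) = min_x f(x) + lam . g(x), with primal recovery by averaging.

    argmin_lagrangian(lam): returns x(lam) minimizing f(x) + lam . g(x).
    g(x): constraint values; g(x(lam)) is a subgradient of q at lam.
    """
    lam = lam0
    x_sum = None
    for _ in range(num_steps):
        x = argmin_lagrangian(lam)
        x_sum = x.copy() if x_sum is None else x_sum + x
        lam = np.maximum(lam + a * g(x), 0.0)  # projected dual ascent step
    return x_sum / num_steps, lam              # averaged primal, final dual
```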

Journal:

Volume   Issue

Pages  -

Publication date: 2010