Multi-class classification via proximal mirror descent

Author

  • Daria Reshetova
Abstract

We consider the problem of multi-class classification and a stochastic optimization approach to it. The idea is, instead of weighting classes, to use the total sum of margins as a regularizer. Since the problem is hard to solve in general, we use a Bregman divergence as the regularizer and arrive at a proximal mirror descent with a specific distance-generating function. The approach is designed for problems with highly unbalanced classes, as it uses a different margin between each class and the rest, thereby emulating the one-vs-all approach. This should weaken the dependence of the error on the number of classes.
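To make the mechanics concrete, below is a minimal sketch of proximal mirror descent with the negative-entropy distance-generating function, which keeps iterates on the probability simplex. The penalty gradient `grad_r` is a generic stand-in for the margin-sum regularizer described above, not a reproduction of the paper's actual term; folding it into the mirror step is exact when the penalty is linear and a first-order approximation otherwise.

```python
import numpy as np

def entropic_md_step(x, grad, eta):
    """One mirror descent step with the negative-entropy
    distance-generating function; the update stays on the simplex."""
    logits = np.log(x) - eta * grad
    logits -= logits.max()              # numerical stability
    y = np.exp(logits)
    return y / y.sum()

def proximal_mirror_descent(grad_f, grad_r, x0, eta=0.1, lam=0.01, steps=200):
    """Minimize f(x) + lam * r(x) over the probability simplex by
    folding the penalty gradient into each entropic mirror step."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        g = grad_f(x) + lam * grad_r(x)
        x = entropic_md_step(x, g, eta)
    return x
```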


Similar resources

Multi-class classification: mirror descent approach

We consider the problem of multi-class classification and a stochastic optimization approach to it. We derive risk bounds for the stochastic mirror descent algorithm and provide examples of set geometries that make the algorithm efficient in terms of the error's dependence on the number of classes k.
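For reference, the stochastic mirror descent update referred to here can be written as

```latex
x_{t+1} = \arg\min_{x \in X}\Big\{ \eta\,\langle \hat g_t, x\rangle + D_\psi(x, x_t) \Big\},
\qquad
D_\psi(x,y) = \psi(x) - \psi(y) - \langle \nabla\psi(y),\, x - y\rangle,
```

where \hat g_t is a stochastic subgradient and \psi is the distance-generating function. The classical example of a favorable set geometry is the probability simplex in R^k paired with the negative-entropy \psi, for which the relevant constants scale with \sqrt{\log k} rather than the \sqrt{k} incurred by the Euclidean setup.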


Penalized Bregman Divergence Estimation via Coordinate Descent

Variable selection via penalized estimation is appealing for dimension reduction. For penalized linear regression, Efron et al. (2004) introduced the LARS algorithm. More recently, the coordinate descent (CD) algorithm was developed by Friedman et al. (2007) for penalized linear regression and penalized logistic regression and was shown to be computationally superior. This paper explores...
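Squared-error loss is the simplest Bregman divergence, so the lasso is the canonical special case of penalized Bregman divergence estimation. A minimal coordinate descent sketch for that special case (not the paper's full algorithm) is:

```python
import numpy as np

def soft_threshold(z, t):
    """Soft-thresholding, the closed-form prox of the L1 penalty."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_iter=100):
    """Coordinate descent for (1/2n) * ||y - X b||^2 + lam * ||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n   # per-column curvature
    r = y - X @ b                       # running residual
    for _ in range(n_iter):
        for j in range(p):
            r += X[:, j] * b[j]         # drop coordinate j from the fit
            rho = X[:, j] @ r / n
            b[j] = soft_threshold(rho, lam) / col_sq[j]
            r -= X[:, j] * b[j]         # restore coordinate j
    return b
```

Each coordinate update is a one-dimensional problem with a closed-form solution, which is what gives coordinate descent its computational edge on sparse problems.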


Sparse Q-learning with Mirror Descent

This paper explores a new framework for reinforcement learning based on online convex optimization, in particular mirror descent and related algorithms. Mirror descent can be viewed as an enhanced gradient method, particularly suited to the minimization of convex functions in high-dimensional spaces. Unlike traditional gradient methods, mirror descent undertakes gradient updates of weights in both t...
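The dual/primal two-step structure described here is short in code. Below is a minimal sketch using the p-norm link function psi(w) = 0.5 * ||w||_q^2, one link choice used in this sparse mirror-descent line of work; the Q-learning machinery itself is omitted:

```python
import numpy as np

def pnorm_link(v, r):
    """Gradient of psi(v) = 0.5 * ||v||_r^2, the p-norm mirror map."""
    nrm = np.linalg.norm(v, ord=r)
    if nrm == 0.0:
        return np.zeros_like(v)
    return np.sign(v) * np.abs(v) ** (r - 1) / nrm ** (r - 2)

def pnorm_md_step(w, grad, eta, p):
    """Mirror descent step: map w to the dual space via grad(psi),
    take an ordinary gradient step there, and map back via grad(psi*),
    where 1/p + 1/q = 1 so the two links are inverses of each other."""
    q = p / (p - 1.0)
    theta = pnorm_link(w, q) - eta * grad   # gradient step in the dual
    return pnorm_link(theta, p)             # back to the primal space
```

With p = q = 2 both links are the identity and the step reduces to plain gradient descent, which is the sense in which mirror descent generalizes the gradient method.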


On Stochastic Subgradient Mirror-Descent Algorithm with Weighted Averaging

This paper considers the stochastic subgradient mirror-descent method for solving constrained convex minimization problems. In particular, a stochastic subgradient mirror-descent method with weighted iterate-averaging is investigated and its per-iterate convergence rate is analyzed. The novel part of the approach is in the choice of weights that are used to construct the averages. Through the use o...
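The averaging scheme itself is a one-line recursion; written incrementally, the averaged point is available at every iteration, which is what makes a per-iterate rate meaningful. The linearly growing weights in the usage line are only an illustrative choice, not necessarily the weights analyzed in the paper:

```python
import numpy as np

def running_weighted_average(iterates, gammas):
    """Incremental weighted average of iterates:
    xbar_t = xbar_{t-1} + (gamma_t / Gamma_t) * (x_t - xbar_{t-1}),
    where Gamma_t = gamma_1 + ... + gamma_t."""
    xbar, total = None, 0.0
    for x, g in zip(iterates, gammas):
        total += g
        x = np.array(x, dtype=float)
        xbar = x if xbar is None else xbar + (g / total) * (x - xbar)
    return xbar

# Illustrative usage: average 100 iterates with weights gamma_t = t.
xs = [np.random.randn(5) for _ in range(100)]
xbar = running_weighted_average(xs, range(1, 101))
```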


A Survey of Algorithms and Analysis for Adaptive Online Learning

We present tools for the analysis of Follow-The-Regularized-Leader (FTRL), Dual Averaging, and Mirror Descent algorithms when the regularizer (equivalently, prox-function or learning-rate schedule) is chosen adaptively based on the data. Adaptivity can be used to prove regret bounds that hold on every round, and also allows for data-dependent regret bounds, as in AdaGrad-style algorithms (e.g., O...
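As one concrete instance, here is a hedged sketch of FTRL with an AdaGrad-style adaptive quadratic regularizer: the per-coordinate regularization strength grows like the square root of the accumulated squared gradients, so frequently updated coordinates get more stability. The constants alpha and beta are illustrative knobs, not values from the survey:

```python
import numpy as np

class AdaptiveFTRL:
    """Unconstrained FTRL / dual averaging with the round-t objective
    x_{t+1} = argmin_x <g_{1:t}, x>
              + (1 / (2 * alpha)) * sum_i (beta + sqrt(n_i)) * x_i ** 2,
    where n_i is the running sum of squared gradients for coordinate i."""

    def __init__(self, dim, alpha=1.0, beta=1.0):
        self.g_sum = np.zeros(dim)   # g_{1:t}, the accumulated gradients
        self.g_sq = np.zeros(dim)    # n_i, accumulated squared gradients
        self.alpha, self.beta = alpha, beta

    def update(self, grad):
        """Fold in one gradient and return the closed-form minimizer."""
        self.g_sum += grad
        self.g_sq += grad ** 2
        return -self.alpha * self.g_sum / (self.beta + np.sqrt(self.g_sq))
```

Because the regularizer only ever grows, the regret analysis can be carried out round by round, which is the "bounds that hold on every round" property mentioned above.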



Journal title:

Volume   Issue

Pages  -

Publication date: 2017