Online Learning Via Regularized Frequent Directions

Authors

  • Luo Luo
  • Cheng Chen
  • Zhihua Zhang
  • Wu-Jun Li
Abstract

Online Newton step algorithms usually achieve good performance with fewer training samples than first-order methods, but require higher space and time complexity in each iteration. In this paper, we develop a new sketching strategy called regularized frequent directions (RFD) to improve the performance of online Newton algorithms. Unlike standard frequent directions (FD), which maintains only a sketch matrix, RFD additionally introduces a regularization term. The regularization provides an adaptive step size for the update, which makes the algorithm more stable. RFD also reduces the approximation error of FD at almost the same cost and makes online learning more robust to hyperparameters. Empirical studies demonstrate that our approach outperforms state-of-the-art second-order online learning algorithms.
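The sketching idea in the abstract can be illustrated in a few lines of NumPy. This is a minimal sketch, not the authors' reference implementation: the function name `rfd_sketch`, the buffer layout, and the choice to grow the regularizer by half of each shrinkage value are assumptions based on the standard frequent-directions algorithm and the abstract's description of RFD.

```python
import numpy as np

def rfd_sketch(A, ell):
    """Regularized frequent-directions sketch of the rows of A (a sketch,
    assuming the standard FD shrinkage step plus an accumulated regularizer).

    Returns B (ell x d) and a scalar alpha such that A.T @ A is
    approximated by B.T @ B + alpha * I.
    """
    _, d = A.shape
    B = np.zeros((ell, d))
    alpha = 0.0
    filled = 0
    for a in A:
        if filled == ell:  # sketch buffer is full: shrink its singular values
            _, s, Vt = np.linalg.svd(B, full_matrices=False)
            delta = s[-1] ** 2                 # smallest squared singular value
            B = np.sqrt(np.maximum(s ** 2 - delta, 0.0))[:, None] * Vt
            alpha += delta / 2.0               # RFD: accumulate half the shrinkage
            filled = ell - 1                   # last sketch row is now zero
        B[filled] = a
        filled += 1
    return B, alpha

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 30))
B, alpha = rfd_sketch(A, ell=10)
# FD analysis gives 0 <= A^T A - B^T B <= 2*alpha*I, so after centering by
# alpha*I the spectral-norm error of the regularized approximation is <= alpha.
err = np.linalg.norm(A.T @ A - B.T @ B - alpha * np.eye(30), 2)
```

Subtracting `alpha * I` centers the (positive semidefinite) FD approximation error around zero, which is the sense in which the regularizer roughly halves FD's error bound at essentially no extra cost.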


Similar Articles

Manifold Identification in Dual Averaging for Regularized Stochastic Online Learning

Iterative methods that calculate their steps from approximate subgradient directions have proved to be useful for stochastic learning problems over large and streaming data sets. When the objective consists of a loss function plus a nonsmooth regularization term whose purpose is to induce structure in the solution, the solution often lies on a low-dimensional manifold of parameter space along w...


Cycles in Adversarial Regularized Learning

Regularized learning is a fundamental technique in online optimization, machine learning and many other fields of computer science. A natural question that arises in these settings is how regularized learning algorithms behave when faced against each other. We study a natural formulation of this problem by coupling regularized learning dynamics in zero-sum games. We show that the system’s behav...


Regularized Distance Metric Learning: Theory and Algorithm

In this paper, we examine the generalization error of regularized distance metric learning. We show that with appropriate constraints, the generalization error of regularized distance metric learning can be independent of the dimensionality, making it suitable for handling high-dimensional data. In addition, we present an efficient online learning algorithm for regularized distance metric l...


Scaling Limit: Exact and Tractable Analysis of Online Learning Algorithms with Applications to Regularized Regression and PCA

We present a framework for analyzing the exact dynamics of a class of online learning algorithms in the high-dimensional scaling limit. Our results are applied to two concrete examples: online regularized linear regression and principal component analysis. As the ambient dimension tends to infinity, and with proper time scaling, we show that the time-varying joint empirical measures of the targe...


Online Learning with Regularized Kernel for One-class Classification

This paper presents an online learning algorithm with a regularized kernel based one-class extreme learning machine (ELM) classifier, referred to as "online RK-OC-ELM". The baseline kernel hyperplane model considers the whole data in a single chunk with the regularized ELM approach for offline learning in the case of one-class classification (OCC). Further, the basic hyperplane model is adapted in an online fashio...



Journal:
  • CoRR

Volume: abs/1705.05067  Issue: 

Pages: -

Publication date: 2017