L1 regularized projection pursuit for additive model learning
Abstract
In this paper, we present an L1 regularized projection pursuit algorithm for additive model learning. Two new algorithms are developed, one for regression and one for classification: sparse projection pursuit regression and sparse Jensen-Shannon Boosting. The L1 regularized projection pursuit encourages sparse solutions, so the new algorithms are robust to overfitting and generalize better, especially in settings with many irrelevant input features and noisy data. To make optimization under the L1 penalty more efficient, we develop an "informative feature first" sequential optimization algorithm. Extensive experiments demonstrate the effectiveness of the proposed approach.
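As a concrete illustration of the idea in the abstract, the sketch below fits a single projection pursuit regression stage with an L1 penalty on the projection direction: a polynomial ridge function g and a sparse direction w are updated alternately, with a proximal (soft-thresholding) step enforcing sparsity. This is a minimal reconstruction under stated assumptions, not the authors' implementation; the "informative feature first" ordering and the Jensen-Shannon boosting variant are not reproduced, and all names (fit_stage, lam, degree) are illustrative. A full additive model would fit such stages sequentially on the residuals, as in standard projection pursuit regression.

    import numpy as np

    def soft_threshold(v, t):
        # Proximal operator of t * ||.||_1 (element-wise soft-thresholding).
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def fit_stage(X, r, lam=0.1, degree=3, n_iter=200):
        # Fit one additive stage g(w^T x) to residuals r, with an L1 penalty
        # on the projection direction w to encourage sparse feature usage.
        n, d = X.shape
        w = np.random.default_rng(0).normal(size=d) / np.sqrt(d)
        step = n / (2.0 * np.linalg.norm(X, 2) ** 2)  # crude 1/Lipschitz guess
        coefs = np.zeros(degree + 1)
        for _ in range(n_iter):
            if not np.any(w):
                break                         # lam shrank every feature away
            z = X @ w
            coefs = np.polyfit(z, r, degree)              # (a) ridge function g
            g = np.polyval(coefs, z)
            gprime = np.polyval(np.polyder(coefs), z)
            # (b) proximal gradient step on (1/n)||r - g(Xw)||^2 + lam*||w||_1
            grad = -2.0 / n * (X.T @ ((r - g) * gprime))
            w = soft_threshold(w - step * grad, step * lam)
        return w, np.poly1d(coefs)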
Similar papers
Efficient Parametric Projection Pursuit Density Estimation
Product models of low-dimensional experts are a powerful way to avoid the curse of dimensionality. We present the "under-complete product of experts" (UPoE), where each expert models a one-dimensional projection of the data. The UPoE may be interpreted as a parametric probabilistic model for projection pursuit. Its ML learning rules are identical to the approximate learning rules proposed ...
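For concreteness, here is a minimal sketch of the model shape this abstract describes: each expert scores a one-dimensional projection of the data, and the unnormalized log-density is the sum of the experts' log-scores. The Student-t expert shape and every name below are illustrative assumptions, not the paper's parameterization or learning rules.

    import numpy as np

    def upoe_unnormalized_logpdf(X, W, alphas):
        # X: (n, d) data; W: (k, d) projection directions with k < d;
        # alphas: (k,) Student-t shape parameters, one per expert.
        Z = X @ W.T                        # (n, k) one-dimensional projections
        # log of an unnormalized Student-t expert: -alpha * log(1 + z^2)
        return -(np.log1p(Z ** 2) * alphas).sum(axis=1)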
Iterative Nearest Neighbors
Representing data as a linear combination of a set of selected known samples is of interest for various machine learning applications such as dimensionality reduction or classification. k-Nearest Neighbors (kNN) and its variants are still among the best-known and most often used techniques. Some popular richer representations are Sparse Representation (SR) based on solving an l1-regularized lea...
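The common thread here, representing a query as a weighted combination of selected training samples, can be made concrete with plain kNN and inverse-distance weights; the paper's Iterative Nearest Neighbors scheme and the l1-based Sparse Representation it compares against are more involved. A hedged sketch, with all names ours:

    import numpy as np

    def knn_weights(X, q, k=5, eps=1e-8):
        # Return indices and convex weights of q's k nearest rows of X.
        dists = np.linalg.norm(X - q, axis=1)
        idx = np.argsort(dists)[:k]
        w = 1.0 / (dists[idx] + eps)       # closer samples get larger weight
        return idx, w / w.sum()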
Efficient L1/Lq Norm Regularization
Sparse learning has recently received increasing attention in many areas including machine learning, statistics, and applied mathematics. The mixed-norm regularization based on the l1/lq norm with q > 1 is attractive in many applications of regression and classification in that it facilitates group sparsity in the model. The resulting optimization problem is, however, challenging to solve due t...
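A minimal sketch of why the mixed norm induces group sparsity, for the common q = 2 case: the proximal operator of the l1/l2 norm shrinks each group's l2 norm and can zero out whole groups at once. The group layout and names below are illustrative assumptions, not the paper's solver.

    import numpy as np

    def prox_group_l1_l2(v, groups, lam):
        # Prox of lam * sum_g ||v_g||_2; `groups` is a list of index arrays.
        out = v.copy()
        for g in groups:
            norm = np.linalg.norm(v[g])
            scale = max(0.0, 1.0 - lam / norm) if norm > 0 else 0.0
            out[g] = scale * v[g]          # whole group vanishes if norm <= lam
        return out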
Greedy Algorithms for Sparse Reinforcement Learning
Feature selection and regularization are becoming increasingly prominent tools in the efforts of the reinforcement learning (RL) community to expand the reach and applicability of RL. One approach to the problem of feature selection is to impose a sparsity-inducing form of regularization on the learning method. Recent work on L1 regularization has adapted techniques from the supervised learning...
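As a concrete instance of the greedy, sparsity-inducing alternative this abstract contrasts with L1 regularization, below is generic orthogonal matching pursuit for least squares; it is not the paper's RL-specific algorithm, and all names are ours.

    import numpy as np

    def omp(X, y, n_features):
        # Greedily pick the columns of X most correlated with the residual,
        # re-fitting least squares on the selected set after each pick.
        residual, selected = y.copy(), []
        for _ in range(n_features):
            corr = np.abs(X.T @ residual)
            corr[selected] = -np.inf       # never re-pick a selected feature
            selected.append(int(np.argmax(corr)))
            beta, *_ = np.linalg.lstsq(X[:, selected], y, rcond=None)
            residual = y - X[:, selected] @ beta
        return selected, beta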
Non-parametric Group Orthogonal Matching Pursuit for Sparse Learning with Multiple Kernels
We consider regularized risk minimization over a large dictionary of reproducing kernel Hilbert spaces (RKHSs) in which the target function has a sparse representation. This setting, commonly referred to as Sparse Multiple Kernel Learning (MKL), may be viewed as the non-parametric extension of group sparsity in linear models. While the two dominant algorithmic strands of sparse learning, namely...
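A minimal sketch of the group-wise greedy selection step behind group orthogonal matching pursuit: score each candidate group (e.g., one kernel's feature block) by the norm of its correlation with the residual, and add the best-scoring group whole. Purely illustrative, not the paper's non-parametric MKL algorithm.

    import numpy as np

    def next_group(blocks, residual):
        # blocks: list of (n, d_g) feature matrices, one per kernel/group.
        scores = [np.linalg.norm(B.T @ residual) for B in blocks]
        return int(np.argmax(scores))      # index of the group to add next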