Learning Compact Neural Networks with Regularization
Author
Abstract
We study the impact of regularization for learning neural networks. Our goals are to speed up training, improve generalization performance, and train compact models that are cost-efficient. Our results apply to weight-sharing (e.g., convolutional), sparsity (i.e., pruning), and low-rank constraints, among others. We first introduce the covering dimension of the constraint set and provide a Rademacher complexity bound that yields insight into generalization properties. We then propose and analyze regularized gradient descent algorithms for learning shallow networks. We show that the problem becomes well-conditioned and local linear convergence occurs once the amount of data exceeds the covering dimension (e.g., the number of nonzero weights). Finally, we provide insights on layerwise training of deep models by studying a random activation model. Our results show how regularization can help overcome overparametrization.
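The central algorithmic claim, that regularized gradient descent converges linearly once the sample size exceeds the covering dimension of the constraint set, can be illustrated with a sparsity constraint. Below is a minimal NumPy sketch, not the paper's exact algorithm: it fits a sparse linear model (a shallow network with identity activation) by gradient steps followed by a hard-thresholding projection. The names `hard_threshold` and `projected_gd` and all hyperparameters are illustrative.

```python
import numpy as np

def hard_threshold(w, s):
    """Project onto the set of s-sparse vectors: keep the s largest-magnitude entries."""
    out = np.zeros_like(w)
    idx = np.argsort(np.abs(w))[-s:]
    out[idx] = w[idx]
    return out

def projected_gd(X, y, s, lr=0.1, iters=500):
    """Least-squares fit under an s-sparsity constraint via projected gradient descent."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(iters):
        grad = X.T @ (X @ w - y) / n   # gradient of (1/2n) * ||Xw - y||^2
        w = hard_threshold(w - lr * grad, s)
    return w

# Toy usage: recover a planted 5-sparse weight vector.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 50))
w_true = np.zeros(50)
w_true[:5] = rng.standard_normal(5)
y = X @ w_true
w_hat = projected_gd(X, y, s=5)
print(np.linalg.norm(w_hat - w_true))
```

With n = 200 samples, s = 5 nonzeros, and d = 50 dimensions, the data comfortably exceeds the effective dimension of the constraint set (on the order of s log d), which is the regime where the abstract predicts a well-conditioned problem and local linear convergence.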
Similar Resources
Combined Group and Exclusive Sparsity for Deep Neural Networks
The number of parameters in a deep neural network is usually very large, which helps with its learning capacity but also hinders its scalability and practicality due to memory/time inefficiency and overfitting. To resolve this issue, we propose a sparsity regularization method that exploits both positive and negative correlations among the features to enforce sparsity in the network, and at th...
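Group sparsity (a (2,1)-norm over weight groups) prunes redundant groups wholesale, while exclusive sparsity (a (1,2)-norm) makes features within a group compete, so combining them targets positive and negative correlations respectively. The following NumPy sketch shows one plausible form of such a combined penalty on a weight matrix, with rows as groups; the function name, hyperparameters, and the exact convex combination are illustrative assumptions, not the paper's notation.

```python
import numpy as np

def combined_sparsity_penalty(W, mu=1e-4, alpha=0.5):
    """Combined group ((2,1)-norm) and exclusive ((1,2)-norm) sparsity penalty
    on weight matrix W, treating each row as a group. All names/values are
    illustrative, not the paper's notation."""
    group = np.sum(np.linalg.norm(W, axis=1))                 # sum_g ||w_g||_2: prunes whole rows
    exclusive = 0.5 * np.sum(np.sum(np.abs(W), axis=1) ** 2)  # sum_g ||w_g||_1^2: sparsifies within rows
    return mu * ((1 - alpha) * group + alpha * exclusive)
```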
Domain-invariant Representation Learning
The learning of domain-invariant representations in the context of domain adaptation with neural networks is considered. We propose a new regularization method that minimizes the discrepancy between domain-specific latent feature representations directly in the hidden activation space. Although some standard distribution matching approaches exist that can be interpreted as the matching of weighted sums of moments,...
Predictive Abilities of Bayesian Regularization and Levenberg–Marquardt Algorithms in Artificial Neural Networks: A Comparative Empirical Study on Social Data
The objective of this study is to compare the predictive ability of Bayesian regularization with Levenberg–Marquardt artificial neural networks. To determine the best architecture, the model was tested with one-, two-, three-, four-, and five-neuron architectures. MATLAB (2011a) was used for analyzing the Bayesian regularization and Levenberg–Marquardt learning al...
Central Moment Discrepancy (CMD) for Domain-Invariant Representation Learning
The learning of domain-invariant representations in the context of domain adaptation with neural networks is considered. We propose a new regularization method that minimizes the discrepancy between domain-specific latent feature representations directly in the hidden activation space. Although some standard distribution matching approaches exist that can be interpreted as the matching of weigh...
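Both domain-invariant entries above describe the same moment-matching idea: penalize the distance between order-wise central moments of source- and target-domain hidden activations. A rough NumPy sketch of such a CMD-style distance follows; the published method is a differentiable loss inside the training graph, and the parameter names and default bounds here are assumptions for illustration.

```python
import numpy as np

def cmd(x, y, k=5, a=0.0, b=1.0):
    """Central-moment discrepancy between activation batches x and y
    (shape: samples x hidden units), assuming activations bounded in [a, b].
    Illustrative NumPy sketch, not the authors' implementation."""
    scale = b - a
    mx, my = x.mean(axis=0), y.mean(axis=0)
    d = np.linalg.norm(mx - my) / scale              # first-moment (mean) term
    cx, cy = x - mx, y - my
    for order in range(2, k + 1):                    # higher-order central moments
        d += np.linalg.norm((cx ** order).mean(axis=0)
                            - (cy ** order).mean(axis=0)) / scale ** order
    return d
```

Matching moments directly in the hidden activation space avoids the kernel computations that distribution-matching losses such as MMD require, which is the design motivation both blurbs hint at.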
Reinforcement Learning in Neural Networks: A Survey
In recent years, research on reinforcement learning (RL) has focused on bridging the gap between adaptive optimal control and bio-inspired learning techniques. Neural network reinforcement learning (NNRL) is among the most popular algorithms in the RL framework. Using neural networks enables RL to search for optimal policies more efficiently in several real-life applicat...
Journal: CoRR
Volume: abs/1802.01223
Pages: -
Published: 2018