Learning with Regularization Networks
Author
Abstract
In this work we study and develop learning algorithms for networks based on regularization theory. In particular, we focus on learning possibilities for a family of regularization networks and radial basis function (RBF) networks. A framework built on top of the basic algorithm derived from the theory is designed; it includes the estimation of the regularization parameter and the kernel function by minimization of the cross-validation error. Two composite types of kernel functions, a sum kernel and a product kernel, are proposed in order to deal with heterogeneous or large data. Three learning approaches for the RBF networks are discussed: gradient learning, three-step learning, and genetic learning. Based on these, two hybrid approaches are proposed: four-step learning and hybrid genetic learning. All learning algorithms for the regularization networks and the RBF networks are studied experimentally and thoroughly compared. We claim that the regularization networks and the RBF networks are comparable in terms of generalization error, but they differ with respect to their model complexity. The regularization network approach usually leads to solutions with a higher number of base units; thus, the RBF networks can be used as a 'cheaper' alternative in terms of model size and learning time.
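The pipeline outlined in the abstract (a regularization-network solution, a composite sum kernel, and selection of the regularization parameter and kernel by minimizing cross-validation error) can be sketched roughly as follows. This is only an illustrative Python sketch under simplifying assumptions, not the thesis's actual algorithms: it assumes Gaussian kernels and a small grid of candidate settings, and all function names (gaussian_kernel, sum_kernel, cv_error) and parameter values are made up for the example.

```python
import numpy as np

def gaussian_kernel(X, Z, width):
    """Gaussian (RBF) kernel matrix between the rows of X and Z (illustrative)."""
    sq_dists = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq_dists / (2.0 * width ** 2))

def sum_kernel(X, Z, widths):
    """Composite 'sum kernel': a sum of Gaussian kernels with different widths."""
    return sum(gaussian_kernel(X, Z, w) for w in widths)

def fit_regularization_network(K, y, lam):
    """Solve (K + lam * I) c = y for the coefficients of f(x) = sum_i c_i K(x, x_i)."""
    return np.linalg.solve(K + lam * np.eye(K.shape[0]), y)

def cv_error(X, y, widths, lam, folds=5, seed=0):
    """Mean squared k-fold cross-validation error for one kernel/regularization setting."""
    rng = np.random.default_rng(seed)
    parts = np.array_split(rng.permutation(len(X)), folds)
    errors = []
    for k in range(folds):
        test = parts[k]
        train = np.concatenate([parts[j] for j in range(folds) if j != k])
        K_train = sum_kernel(X[train], X[train], widths)
        c = fit_regularization_network(K_train, y[train], lam)
        pred = sum_kernel(X[test], X[train], widths) @ c
        errors.append(np.mean((pred - y[test]) ** 2))
    return float(np.mean(errors))

# Toy usage: choose the regularization parameter and the kernel (a single Gaussian
# or a sum of two Gaussians of different widths) by minimizing cross-validation error.
rng = np.random.default_rng(1)
X = rng.uniform(-3.0, 3.0, size=(80, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(80)

candidates = [(lam, ws)
              for lam in (1e-3, 1e-2, 1e-1)
              for ws in ((0.3,), (1.0,), (3.0,), (0.3, 3.0))]
best_lam, best_widths = min(candidates, key=lambda p: cv_error(X, y, p[1], p[0]))
print("selected regularization parameter:", best_lam, "kernel widths:", best_widths)
```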
Similar works
Predictive Abilities of Bayesian Regularization and Levenberg–Marquardt Algorithms in Artificial Neural Networks: A Comparative Empirical Study on Social Data
The objective of this study is to compare the predictive ability of Bayesian regularization with Levenberg–Marquardt Artificial Neural Networks. To examine the best architecture of neural networks, the model was tested with one-, two-, three-, four-, and five-neuron architectures, respectively. MATLAB (2011a) was used for analyzing the Bayesian regularization and Levenberg–Marquardt learning al...
Regularization by Intrinsic Plasticity and Its Synergies with Recurrence for Random Projection Methods
Neural networks based on high-dimensional random feature generation have become popular under the notions of extreme learning machine (ELM) and reservoir computing (RC). We provide an in-depth analysis of such networks with respect to feature selection, model complexity, and regularization. Starting from an ELM, we show how recurrent connections increase the effective complexity leading to reservo...
Learning Scale Free Networks by Reweighted L1 regularization
Methods for ℓ1-type regularization have been widely used in Gaussian graphical model selection tasks to encourage sparse structures. However, often we would like to include more structural information than mere sparsity. In this work, we focus on learning so-called "scale-free" models, a common feature that appears in many real-world networks. We replace the ℓ1 regularization with a power law re...
Learning Compact Neural Networks with Regularization
We study the impact of regularization for learning neural networks. Our goal is speeding up training, improving generalization performance, and training compact models that are cost efficient. Our results apply to weight-sharing (e.g. convolutional), sparsity (i.e. pruning), and low-rank constraints among others. We first introduce covering dimension of the constraint set and provide a Rademach...