Search results for: bootstrap aggregating

Number of results: 18325

2002
Marina Skurichina Ludmila I. Kuncheva Robert P. W. Duin

In combining classifiers, it is believed that diverse ensembles perform better than non-diverse ones. In order to test this hypothesis, we study the accuracy and diversity of ensembles obtained in bagging and boosting applied to the nearest mean classifier. In our simulation study we consider two diversity measures: the Q statistic and the disagreement measure. The experiments, carried out on f...
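The two diversity measures named in this abstract are standard pairwise statistics over the correctness of a classifier pair. As an illustration only (not the paper's code), a minimal NumPy sketch, assuming boolean correctness vectors over a common test set:

```python
import numpy as np

def pairwise_diversity(correct_i, correct_j):
    """Q statistic and disagreement measure for a pair of classifiers.

    correct_i, correct_j: boolean arrays, True where the respective
    classifier labels the test example correctly.
    """
    n11 = np.sum(correct_i & correct_j)    # both correct
    n00 = np.sum(~correct_i & ~correct_j)  # both wrong
    n10 = np.sum(correct_i & ~correct_j)   # only classifier i correct
    n01 = np.sum(~correct_i & correct_j)   # only classifier j correct
    denom = n11 * n00 + n01 * n10
    q = (n11 * n00 - n01 * n10) / denom if denom else 0.0
    disagreement = (n01 + n10) / len(correct_i)
    return q, disagreement
```

Q ranges over [-1, 1] and is 0 for statistically independent classifiers; the disagreement measure is simply the fraction of examples on which the pair disagrees.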

1995
Kamal M. Ali

Most previous work on multiple models has been done on a few domains. We present a comparison of three ways of learning multiple models on 29 data sets from the UCI repository. The methods are bagging, k-fold partition learning and stochastic search. By using 29 data sets of various kinds (artificial data sets, artificial data sets with noise, molecular-biology and real-world noisy data sets) we a...
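For context, a minimal sketch of the bagging procedure compared here, assuming NumPy arrays and a scikit-learn base learner (the paper's own learners, and its k-fold partition and stochastic-search variants, are not reproduced):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def bagging_fit(X, y, n_estimators=25, seed=0):
    """Train an ensemble on bootstrap replicates of the training set."""
    rng = np.random.default_rng(seed)
    models = []
    for _ in range(n_estimators):
        idx = rng.integers(0, len(X), size=len(X))  # sample with replacement
        models.append(DecisionTreeClassifier().fit(X[idx], y[idx]))
    return models

def bagging_predict(models, X):
    """Plurality vote per example; assumes non-negative integer class
    labels (encode with sklearn's LabelEncoder otherwise)."""
    votes = np.stack([m.predict(X) for m in models]).astype(int)
    return np.apply_along_axis(lambda col: np.bincount(col).argmax(), 0, votes)
```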

2009
Jerzy Blaszczynski Jerzy Stefanowski Magdalena Zajac

The role of abstaining from prediction by component classifiers in rule ensembles is discussed. We consider bagging and Ivotes approaches to construct such ensembles. In our proposal, component classifiers are based on unordered sets of rules with a classification strategy that solves ambiguous matching of the object’s description to the rules. We propose to induce rule sets by a sequential cov...
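A minimal sketch of the combination step with abstaining components, assuming each component is a callable that returns a label, or `None` when its rules match the example ambiguously; the rule induction itself (sequential covering) is not shown:

```python
from collections import Counter

def ensemble_predict(components, x):
    """Majority vote over component rule classifiers that may abstain."""
    votes = [label for label in (c(x) for c in components) if label is not None]
    if not votes:
        return None  # every component abstained; caller must fall back,
                     # e.g. to the majority class of the training set
    return Counter(votes).most_common(1)[0][0]
```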

2008
Dimitris N. Politis

The problem of large-scale simultaneous hypothesis testing is revisited. Bagging and subagging procedures are put forth with the purpose of improving the discovery power of the tests. The procedures are implemented in both simulated and real data. It is shown that bagging and subagging significantly improve power at the cost of a small increase in false discovery rate with the proposed ‘maximum...
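Subagging replaces bagging's bootstrap resampling with subsamples drawn without replacement. A generic sketch under that reading (the paper's 'maximum contrast' construction is not reproduced):

```python
import numpy as np

def subagged_statistic(data, stat, b, n_subsamples=100, seed=0):
    """Average a statistic over n_subsamples subsamples of size b drawn
    without replacement (subagging); resampling with replacement at
    full size would give ordinary bagging."""
    rng = np.random.default_rng(seed)
    values = [stat(data[rng.choice(len(data), size=b, replace=False)])
              for _ in range(n_subsamples)]
    return float(np.mean(values))

# e.g. smooth a one-sample t-statistic for a single hypothesis:
# t = lambda s: np.sqrt(len(s)) * s.mean() / s.std(ddof=1)
# subagged_statistic(np.random.default_rng(1).normal(0.3, 1, 200), t, b=100)
```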

2004
Jerzy Stefanowski

An application of the rule induction algorithm MODLEM to construct multiple classifiers is studied. Two different such classifiers are considered: the bagging approach, where classifiers are generated from different samples of the learning set, and the n-classifier, which is specialized for solving multiple class learning problems. This paper reports results of an experimental comparison of the...

2003
Simon Ka-Lung Ho Brian Kan-Wing Mak

Verification problems are usually posed as a 2-class problem, where the objective is to verify whether an observation belongs to a class, say, A or its complement A'. However, we find that in a computer-assisted language learning application, because of the relatively low reliability of phoneme verification (an equal-error-rate of more than 30%), a system built on conventional phoneme verificati...

Journal: Neural Computation, 1997
Michiaki Taniguchi Volker Tresp

We compare the performance of averaged regularized estimators. We show that the improvement in performance which can be achieved by averaging depends critically on the degree of regularization which is used in training the individual estimators. We compare four different averaging approaches: simple averaging, bagging, variance-based weighting and variance-based bagging. In any of the averaging...
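One common reading of variance-based weighting is to weight each estimator inversely to its estimated prediction variance; a minimal sketch under that assumption (not necessarily Taniguchi and Tresp's exact estimator):

```python
import numpy as np

def variance_weighted_average(predictions, variances):
    """Combine estimators with weights proportional to the inverse of
    each estimator's estimated prediction variance.

    predictions: shape (n_estimators, n_points)
    variances:   shape (n_estimators,), strictly positive
    """
    w = 1.0 / np.asarray(variances, dtype=float)
    w /= w.sum()                        # normalize weights to sum to one
    return w @ np.asarray(predictions)  # shape (n_points,)
```

Simple averaging is the special case of equal variances; variance-based bagging would apply the same weighting to estimators trained on bootstrap replicates.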

2012
François-Marie Giraud Thierry Artières

The authorship attribution literature demonstrates the difficulty of designing classifiers that outperform simple strategies, such as linear classifiers operating on a small number of the most frequent lexical features, such as character trigrams. We claim this comes, at least partially, from the difficulty of efficiently learning the contribution of all features, which leads to either undertraining or overtraining...

Journal: Knowl.-Based Syst., 2014
Qinghua Hu Leijun Li Xiangqian Wu Gerald Schaefer Daren Yu

Margin distribution is acknowledged as an important factor for improving the generalization performance of classifiers. In this paper, we propose a novel ensemble learning algorithm named Double Rotation Margin Forest (DRMF), which aims to improve the margin distribution of the combined system over the training set. We utilise random rotation to produce diverse base classifiers, and optimize the...
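The 'random rotation' ingredient can be sketched as training each base classifier on a randomly rotated copy of the feature space. The following illustrates that idea only, under assumed NumPy/scikit-learn tooling; DRMF's double rotation and margin optimization are not reproduced:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def random_rotation(d, rng):
    # Random orthogonal matrix: QR of a Gaussian matrix, with the signs
    # of R's diagonal folded in so the distribution is uniform (Haar).
    q, r = np.linalg.qr(rng.normal(size=(d, d)))
    return q * np.sign(np.diag(r))

def fit_rotated_ensemble(X, y, n_estimators=20, seed=0):
    """Train each tree on a randomly rotated copy of the feature space."""
    rng = np.random.default_rng(seed)
    return [(R, DecisionTreeClassifier().fit(X @ R, y))
            for R in (random_rotation(X.shape[1], rng)
                      for _ in range(n_estimators))]

def predict_rotated_ensemble(ensemble, X):
    """Plurality vote over the rotated trees (integer labels assumed)."""
    votes = np.stack([tree.predict(X @ R) for R, tree in ensemble]).astype(int)
    return np.apply_along_axis(lambda c: np.bincount(c).argmax(), 0, votes)
```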

2000
Alexey Tsymbal

Decision committee learning has demonstrated spectacular success in reducing classification error from learned classifiers. These techniques develop a classifier in the form of a committee of subsidiary classifiers. The combination of outputs is usually performed by majority vote. Voting, however, has a shortcoming. It is unable to take into account local expertise. When a new instance is diffi...
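One way to inject local expertise, in the spirit of dynamic integration, is to weight each committee member's vote by its accuracy on the training examples nearest the query. A hypothetical sketch, assuming scikit-learn-style models and NumPy arrays, not Tsymbal's exact method:

```python
import numpy as np

def locally_weighted_vote(models, X_train, y_train, x, k=10):
    """Vote where each member's weight is its accuracy on the k
    training examples nearest to the query point x."""
    nearest = np.argsort(np.linalg.norm(X_train - x, axis=1))[:k]
    scores = {}
    for m in models:
        local_acc = np.mean(m.predict(X_train[nearest]) == y_train[nearest])
        label = m.predict(x.reshape(1, -1))[0]
        scores[label] = scores.get(label, 0.0) + local_acc
    return max(scores, key=scores.get)
```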
