Search results for: boosting and bagging strategies
Number of results: 16,865,484
A practical and useful notion of weak dependence between many classifiers constructed with the same training data is introduced. It is shown that when (a) this weak dependence is rather low, and (b) the expected margins are large, exponential bounds on the true error rates can be achieved. Empirical results with randomized trees, and trees constructed via boosting and adaptive bagging, show tha...
We introduce a multiple classifier system which incorporates a genetic algorithm in order to simultaneously and dynamically select not only the participating classifiers but also the combination rule to be used. In this paper we focus on exploring the efficiency of such an evolutionary algorithm with respect to the behaviour of the resulting multiexpert configurations. To this end we initially ...
In this paper we propose the framework of Monte Carlo algorithms as a useful one to analyze ensemble learning. In particular, this framework allows one to guess when bagging will be useful, explains why increasing the margin improves performance, and suggests a new way of performing ensemble learning and error estimation.
In this work we present a novel approach to ensemble learning for regression models, by combining the ensemble generation technique of random subspace method with the ensemble integration methods of Stacked Regression and Dynamic Selection. We show that for simple regression methods such as global linear regression and nearest neighbours, this is a more effective method than the popular ensembl...
Recently, many authors have proposed new algorithms to improve the accuracy of certain classifiers on artificial and real data sets. The goal is to assemble a collection of individual classifiers based on resampling of the data set. Bagging (Breiman, 1996) and AdaBoost (Freund & Schapire, 1997) are the most used procedures: the first fits many classifiers to bootstrap samples of data and classifies...
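The bagging procedure sketched in this abstract — fit many classifiers to bootstrap samples, then combine them by majority vote — can be illustrated with a minimal pure-Python sketch. The toy "nearest class mean" base learner and the sample data are hypothetical, chosen only to keep the example self-contained; real implementations use stronger base learners such as decision trees.

```python
import random
from collections import Counter

def train_nearest_mean(X, y):
    """Toy base learner (hypothetical): classify a 1-D feature by nearest class mean."""
    means = {}
    for label in set(y):
        vals = [x for x, t in zip(X, y) if t == label]
        means[label] = sum(vals) / len(vals)
    def predict(x):
        return min(means, key=lambda c: abs(x - means[c]))
    return predict

def bagging(X, y, n_estimators=25, seed=0):
    """Fit one base learner per bootstrap sample; predict by majority vote."""
    rng = random.Random(seed)
    n = len(X)
    learners = []
    for _ in range(n_estimators):
        # bootstrap sample: draw n indices with replacement
        idx = [rng.randrange(n) for _ in range(n)]
        learners.append(train_nearest_mean([X[i] for i in idx],
                                           [y[i] for i in idx]))
    def predict(x):
        votes = Counter(f(x) for f in learners)
        return votes.most_common(1)[0][0]
    return predict

# Tiny illustrative data set (two well-separated classes)
X = [0.1, 0.4, 0.5, 2.1, 2.4, 2.2]
y = [0, 0, 0, 1, 1, 1]
model = bagging(X, y)
print(model(0.3), model(2.3))  # → 0 1
```

Because each learner sees a different bootstrap sample, the vote averages away the variance of the individual learners, which is the effect bagging is designed to exploit.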
Massive Online Analysis (MOA) is a software environment for implementing algorithms and running experiments for online learning from evolving data streams. MOA includes a collection of offline and online methods as well as tools for evaluation. In particular, it implements boosting, bagging, and Hoeffding Trees, all with and without Naïve Bayes classifiers at the leaves. MOA supports bi-directi...
The idea of ensemble methodology is to build a predictive model by integrating multiple models. It is well-known that ensemble methods can be used for improving prediction performance. In this chapter we provide an overview of ensemble methods in classification tasks. We present all important types of ensemble methods including boosting and bagging. Combining methods and modeling issues such as...
Data security has become a very critical part of any organizational information system. An Intrusion Detection System (IDS) is used as a security measure to preserve data integrity and system availability from various attacks. This paper evaluates the performance of the C4.5 classifier and its combination using bagging, boosting and stacking over the NSL-KDD dataset for IDS. This data set consists of se...
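The boosting combination evaluated in this abstract can be sketched with a minimal AdaBoost implementation over decision stumps (a stand-in for the C4.5 trees used in the paper; the threshold-stump learner and toy data below are assumptions made only to keep the sketch self-contained). Each round fits a weak learner to reweighted data, then increases the weight of misclassified points.

```python
import math

def stump_train(X, y, w):
    """Weak learner (hypothetical): best threshold stump on 1-D data, labels in {-1, +1}."""
    best = None
    for thr in sorted(set(X)):
        for sign in (1, -1):
            pred = [sign if x >= thr else -sign for x in X]
            err = sum(wi for wi, p, t in zip(w, pred, y) if p != t)
            if best is None or err < best[0]:
                best = (err, thr, sign)
    err, thr, sign = best
    return err, (lambda x, thr=thr, sign=sign: sign if x >= thr else -sign)

def adaboost(X, y, rounds=10):
    n = len(X)
    w = [1.0 / n] * n  # uniform initial weights
    ensemble = []
    for _ in range(rounds):
        err, h = stump_train(X, y, w)
        err = max(err, 1e-10)  # avoid division by zero for a perfect stump
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, h))
        # reweight: misclassified points (t * h(x) = -1) gain weight
        w = [wi * math.exp(-alpha * t * h(x)) for wi, x, t in zip(w, X, y)]
        s = sum(w)
        w = [wi / s for wi in w]
    def predict(x):
        return 1 if sum(a * h(x) for a, h in ensemble) >= 0 else -1
    return predict

# Tiny illustrative data set (two separable classes)
X = [0.1, 0.4, 0.5, 2.1, 2.4, 2.2]
y = [-1, -1, -1, 1, 1, 1]
model = adaboost(X, y)
print(model(0.3), model(2.3))  # → -1 1
```

Bagging and stacking differ only in the combination step: bagging averages independently trained learners, while stacking trains a second-level model on the base learners' predictions.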
This study aimed to investigate the effects of listening strategy training on Iranian EFL learners' listening comprehension and use of such strategies. This work, employing an experimental methodology, was conducted among 60 adult EFL learners from a language institute in Isfahan, Iran, as participants. The participants, who were selected based on the results of a placement test, were assigned t...