Search results for: bootstrap aggregating

Number of results: 18325

2007
Jon Atli Benediktsson, Jocelyn Chanussot, Mathieu Fauvel

In this paper, we present some recent developments of Multiple Classifiers Systems (MCS) for remote sensing applications. Some standard MCS methods (boosting, bagging, consensus theory and random forests) are briefly described and applied to multisource data (satellite multispectral images, elevation, slope and aspect data) for landcover classification. In a second part, special attention is gi...

1998
Zijian Zheng, Geoffrey I. Webb

Classifier committee learning methods generate multiple classifiers to form a committee by repeated application of a single base learning algorithm. The committee members vote to decide the final classification. Two such methods, Bagging and Boosting, have shown great success with decision tree learning. They create different classifiers by modifying the distribution of the training set. This paper stu...
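The bagging half of this scheme is simple enough to sketch in plain Python: each committee member is trained on a bootstrap resample of the training set, and the committee predicts by majority vote. The `MajorityStump` base learner below is an illustrative stand-in for a decision tree, not anything from the paper:

```python
import random
from collections import Counter

def bootstrap_sample(data, rng):
    # Draw len(data) examples with replacement from the training set.
    return [rng.choice(data) for _ in data]

class MajorityStump:
    """Trivial base learner: predicts the majority class of its
    training sample (a stand-in for a decision tree)."""
    def fit(self, data):
        self.label = Counter(y for _, y in data).most_common(1)[0][0]
        return self

    def predict(self, x):
        return self.label

def bagging_fit(data, n_estimators=11, seed=0):
    # Each committee member sees its own bootstrap resample.
    rng = random.Random(seed)
    return [MajorityStump().fit(bootstrap_sample(data, rng))
            for _ in range(n_estimators)]

def bagging_predict(committee, x):
    # Committee members vote; the plurality label wins.
    votes = Counter(m.predict(x) for m in committee)
    return votes.most_common(1)[0][0]

data = [(0.1, "a"), (0.2, "a"), (0.3, "a"), (0.9, "b")]
committee = bagging_fit(data)
label = bagging_predict(committee, 0.5)
```

Resampling is what "modifies the distribution of the training set" here: each bootstrap sample over- and under-represents different examples, so the members disagree and the vote averages out their variance.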

2016
David Windridge, Rajagopal Nagarajan

We set out a strategy for quantizing attribute bootstrap aggregation to enable variance-resilient quantum machine learning. To do so, we utilise the linear decomposability of decision boundary parameters in the Rebentrost et al. Support Vector Machine to guarantee that stochastic measurement of the output quantum state will give rise to an ensemble decision without destroying the superposition ...

2013
Minh Quang Nhat Pham, Minh Le Nguyen, Akira Shimazu

Textual entailment recognition is a fundamental problem in natural language understanding. The task is to determine whether the meaning of one text can be inferred from the meaning of the other one. At NTCIR-10 RITE-2 this year – our second participation in this challenge, we use the modified version of our RTE system used at NTCIR-9 RITE for four subtasks for Japanese: BC, MC, ExamBC, and Unit...

2014
Nikita Joshi, Shweta Srivastava

Using ensemble methods is one of the general strategies for improving the accuracy of classifiers and predictors. Bagging is one such ensemble learning method. Ensemble learning is a simple, useful and effective meta-classification methodology that combines the predictions from multiple base classifiers (or learners). In this paper we show a comparative study of different classifiers (Dec...

2007
Pedro Domingos

Although Bayesian model averaging (BMA) is in principle the optimal method for combining learned models, it has received relatively little attention in the machine learning literature. This article describes an extensive empirical study of the application of BMA to rule induction. BMA is applied to a variety of tasks and compared with more ad hoc alternatives like bagging. In each case, BMA typ...

2013
Alexander Schindler, Andreas Rauber

We propose a cross-modal approach based on separate audio and image data-sets to identify the artist of a given music video. The identification process is based on an ensemble of two separate classifiers. Audio content classification is based on audio features derived from the Million Song Dataset (MSD). Face recognition is based on Local Binary Patterns (LBP) using a training-set of artist por...

1997
Kai Ming Ting, Ian H. Witten

In this paper, we investigate the method of stacked generalization in combining models derived from different subsets of a training dataset by a single learning algorithm, as well as different algorithms. The simplest way to combine predictions from competing models is majority vote, and the effect of the sampling regime used to generate training subsets has already been studied in this context, wh...
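Stacked generalization, as the abstract describes it, replaces the plain majority vote with a learned level-1 combiner trained on out-of-fold level-0 predictions. A minimal sketch follows; the toy base learners, the table-lookup meta-learner and the 2-fold scheme are illustrative assumptions, not the paper's setup:

```python
from collections import Counter

class ThresholdModel:
    """Toy base learner: predicts 1 if x is at least the mean of the
    training inputs, else 0."""
    def fit(self, data):
        xs = [x for x, _ in data]
        self.t = sum(xs) / len(xs)
        return self

    def predict(self, x):
        return 1 if x >= self.t else 0

class TableModel:
    """Toy level-1 learner: memorises the majority label seen for each
    level-0 prediction pattern; falls back to the overall majority."""
    def fit(self, rows):
        by_pattern = {}
        for pattern, y in rows:
            by_pattern.setdefault(pattern, []).append(y)
        self.table = {p: Counter(ys).most_common(1)[0][0]
                      for p, ys in by_pattern.items()}
        self.default = Counter(y for _, y in rows).most_common(1)[0][0]
        return self

    def predict(self, pattern):
        return self.table.get(pattern, self.default)

def stacked_fit(data, base_factories, meta_factory, k=2):
    # Level-0: collect out-of-fold predictions so the level-1 learner
    # never sees predictions made on a model's own training points.
    meta_rows = []
    folds = [data[i::k] for i in range(k)]
    for i in range(k):
        train = [ex for j, f in enumerate(folds) if j != i for ex in f]
        models = [f().fit(train) for f in base_factories]
        for x, y in folds[i]:
            meta_rows.append((tuple(m.predict(x) for m in models), y))
    # Refit level-0 models on all data; fit the level-1 combiner.
    level0 = [f().fit(data) for f in base_factories]
    meta = meta_factory().fit(meta_rows)
    return level0, meta

def stacked_predict(level0, meta, x):
    return meta.predict(tuple(m.predict(x) for m in level0))

data = [(0.0, 0), (0.1, 0), (0.2, 0), (0.8, 1), (0.9, 1), (1.0, 1)]
level0, meta = stacked_fit(data, [ThresholdModel, ThresholdModel], TableModel)
```

The cross-validation step is the essential difference from majority voting: the combiner learns which level-0 models to trust, from predictions those models did not train on.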

Journal: International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, 2011
Krzysztof Trawinski, Oscar Cordón, Arnaud Quirin

In this work, we conduct a study considering a fuzzy rule-based multiclassification system design framework based on Fuzzy Unordered Rule Induction Algorithm (FURIA). This advanced method serves as the fuzzy classification rule learning algorithm to derive the component classifiers considering bagging and feature selection. We develop an exhaustive study on the potential of bagging and feature ...

2016
Ioannis C. Konstantakopoulos, Lillian J. Ratliff, Ming Jin, Costas Spanos, S. Shankar Sastry

Given a non-cooperative, continuous game, we describe a framework for parametric utility learning. Using heteroskedasticity inference, we adapt a Constrained Feasible Generalized Least Squares (cFGLS) utility learning method in which estimator variance is reduced and the estimator remains unbiased and consistent. We extend our utility learning method using bootstrapping and bagging. We show the performance of the prop...
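Bagging a regression estimator, as the last sentence of this abstract suggests, amounts to averaging the fit over bootstrap resamples of the data. A minimal sketch, with plain one-variable least squares standing in for cFGLS (an assumption for illustration, not the authors' estimator):

```python
import random

def ols_slope(pairs):
    # One-variable least squares: slope = cov(x, y) / var(x).
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    sxx = sum((x - mx) ** 2 for x, _ in pairs)
    sxy = sum((x - mx) * (y - my) for x, y in pairs)
    return sxy / sxx

def bagged_slope(pairs, n_boot=200, seed=0):
    """Average the OLS slope over bootstrap resamples of the data,
    skipping degenerate resamples with no variance in x."""
    rng = random.Random(seed)
    slopes = []
    for _ in range(n_boot):
        sample = [rng.choice(pairs) for _ in pairs]
        if len({x for x, _ in sample}) > 1:  # need variance in x
            slopes.append(ols_slope(sample))
    return sum(slopes) / len(slopes)

data = [(float(x), 2.0 * x + 1.0) for x in range(6)]
estimate = bagged_slope(data)
```

Averaging over resamples reduces the variance of the point estimate, which is the same motivation the abstract gives for combining bootstrapping and bagging with cFGLS.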

Chart: number of search results per year
