Search results for: boosting

Number of results: 14818

Journal: CoRR 2017
Haozhen Wu

Excellent ranking power along with well-calibrated probability estimates is needed in many classification tasks. In this paper, we introduce a technique, Calibrated Boosting-Forest, that captures both. This novel technique is an ensemble of gradient boosting machines that can support both continuous and binary labels. While offering superior ranking power over any individual regression or clas...
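
Calibrated Boosting-Forest is not a released library, so as a rough illustration of the underlying pattern (a gradient boosting ensemble whose scores are post-hoc calibrated into probabilities), a minimal scikit-learn sketch might look like the following; the model choices and parameters are assumptions, not the authors':

```python
# Illustration only: pairs a gradient boosting machine (strong ranking power)
# with isotonic calibration (well-calibrated probabilities). This is NOT the
# paper's Calibrated Boosting-Forest, just the generic pattern it builds on.
from sklearn.calibration import CalibratedClassifierCV
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

gbm = GradientBoostingClassifier(n_estimators=200, learning_rate=0.05)
model = CalibratedClassifierCV(gbm, method="isotonic", cv=3)
model.fit(X_tr, y_tr)
probabilities = model.predict_proba(X_te)[:, 1]  # calibrated P(y=1 | x)
```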

Journal: Intelligent Information Management 2010
Xiaowei Sun Hongbo Zhou

Boosting is an effective classifier combination method, which can improve the classification performance of an unstable learning algorithm, but it does not yield much improvement for a stable learning algorithm. In this paper, multiple TAN classifiers are combined by a combination method called Boosting-MultiTAN, which is compared with the Boosting-BAN classifier, i.e., boosting based on BAN com...
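
scikit-learn ships neither TAN nor BAN classifiers, so the sketch below substitutes Gaussian naive Bayes as the Bayesian base learner purely to illustrate the pattern of boosting a probabilistic classifier; it is not the paper's Boosting-MultiTAN:

```python
# Hedged stand-in: GaussianNB replaces the TAN base classifier, which has no
# scikit-learn implementation. Shows the boost-a-Bayesian-learner pattern.
# Assumes scikit-learn >= 1.2 (earlier versions use base_estimator=).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

X, y = load_breast_cancer(return_X_y=True)
boosted_nb = AdaBoostClassifier(estimator=GaussianNB(), n_estimators=50)
print(cross_val_score(boosted_nb, X, y, cv=5).mean())
```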

2003
K. Varmuza Ping He Kai-Tai Fang

Boosting is a machine learning algorithm that is not well known in chemometrics. We apply boosted trees to the classification of mass spectral data. In the experiment, recognition of 15 chemical substructures from mass spectral data has been considered. The performance of boosting is very encouraging. Compared with previous results, boosting significantly improves the accuracy of classi...
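
The paper's spectral data are not reproduced here, so the sketch below uses random stand-in features and labels; it only illustrates how one boosted-tree classifier per substructure (15 binary outputs, one-vs-rest style) could be wired up:

```python
# Sketch with hypothetical data: X stands in for mass-spectral features and
# Y for 15 binary substructure-presence labels. One boosted-tree classifier
# is fit per substructure.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.multioutput import MultiOutputClassifier

rng = np.random.default_rng(0)
X = rng.random((500, 100))            # stand-in spectral features
Y = rng.integers(0, 2, (500, 15))     # stand-in substructure labels

model = MultiOutputClassifier(GradientBoostingClassifier(n_estimators=100))
model.fit(X, Y)
print(model.predict(X[:3]))           # one 0/1 prediction per substructure
```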

1999
Robert E. Schapire

Boosting is a general method for improving the accuracy of any given learning algorithm. Focusing primarily on the AdaBoost algorithm, we briefly survey theoretical work on boosting, including analyses of AdaBoost's training error and generalization error, connections between boosting and game theory, methods of estimating probabilities using boosting, and extensions of AdaBoost for multiclass c...
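
For readers unfamiliar with the algorithm the survey centers on, here is a minimal from-scratch AdaBoost sketch with decision stumps (labels in {-1, +1}); it follows the standard formulation, not any code of Schapire's:

```python
# Minimal AdaBoost sketch: reweights examples each round so that the next
# weak learner (a decision stump) focuses on previous mistakes.
import numpy as np

def fit_stump(X, y, w):
    """Best threshold stump under weights w: (feature, thresh, sign, error)."""
    best = (0, 0.0, 1, np.inf)
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            for s in (1, -1):
                pred = s * np.where(X[:, j] <= t, 1, -1)
                err = w[pred != y].sum()
                if err < best[3]:
                    best = (j, t, s, err)
    return best

def adaboost(X, y, rounds=20):
    n = len(y)
    w = np.full(n, 1.0 / n)                     # uniform initial distribution
    ensemble = []
    for _ in range(rounds):
        j, t, s, err = fit_stump(X, y, w)
        err = max(err, 1e-12)
        alpha = 0.5 * np.log((1 - err) / err)   # weak learner's vote weight
        pred = s * np.where(X[:, j] <= t, 1, -1)
        w *= np.exp(-alpha * y * pred)          # up-weight misclassified points
        w /= w.sum()
        ensemble.append((alpha, j, t, s))
    return ensemble

def predict(ensemble, X):
    score = sum(a * s * np.where(X[:, j] <= t, 1, -1) for a, j, t, s in ensemble)
    return np.sign(score)
```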

Journal: Intell. Data Anal. 2001
Aleksandar Lazarevic Zoran Obradovic

Combining multiple classifiers is an effective technique for improving classification accuracy by reducing the variance through manipulating the training data distributions. In many large-scale data analysis problems involving heterogeneous databases with attribute instability, however, standard boosting methods do not improve local classifiers (e.g. k-nearest neighbors) due to their low sensit...

2000
Manfred Warmuth Takashi Onoda Steven Lemm

Boosting algorithms like AdaBoost and Arc-GV are iterative strategies to minimize a constrained objective function, equivalent to Barrier algorithms. Based on this new understanding, it is shown that convergence of Boosting-type algorithms becomes simpler to prove, and we outline directions for developing further Boosting schemes. In particular, a new Boosting technique for regression – -Boos...
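
One standard way to write down the connection the abstract alludes to (a sketch of the usual AdaBoost objective, not necessarily the paper's exact derivation):

```latex
% AdaBoost minimizes the exponential loss over the weak-learner weights:
\min_{\alpha}\; L(\alpha) \;=\; \sum_{i=1}^{n} \exp\!\bigl(-y_i f_\alpha(x_i)\bigr),
\qquad f_\alpha(x) \;=\; \sum_{t} \alpha_t h_t(x).
```

Read this way, each term acts as a soft barrier for the margin constraint y_i f_α(x_i) ≥ 0: a violated constraint incurs an exponentially large penalty, so iterative minimization of L pushes the ensemble toward feasibility, which is the barrier-function view of Boosting-type iterations.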

2002
Ludmila I. Kuncheva Christopher J. Whitaker

We look at three variants of the boosting algorithm, called here Aggressive Boosting, Conservative Boosting, and Inverse Boosting. We associate the diversity measure Q with the accuracy during the progressive development of the ensembles, in the hope of detecting the point of “paralysis” of the training, if any. Three data sets are used: the artificial Cone-Torus data and the UCI Pima ...
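
The diversity measure Q referenced here is, in this line of work, Yule's Q statistic over pairs of classifiers; a small Python sketch of the standard definition (variable names are mine):

```python
# Pairwise Q diversity between two classifiers, computed from boolean
# per-example correctness vectors. Q near +1: the classifiers err on the
# same examples (low diversity); Q near -1: they err on different ones.
import numpy as np

def q_statistic(correct_a, correct_b):
    a = np.asarray(correct_a, dtype=bool)
    b = np.asarray(correct_b, dtype=bool)
    n11 = np.sum(a & b)      # both correct
    n00 = np.sum(~a & ~b)    # both wrong
    n10 = np.sum(a & ~b)     # only the first correct
    n01 = np.sum(~a & b)     # only the second correct
    denom = n11 * n00 + n01 * n10
    return (n11 * n00 - n01 * n10) / denom if denom else 0.0
```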

2006
Masanori Kawakita Shinto Eguchi

We propose a local boosting method for classification problems, borrowing from an idea of the local likelihood method. The proposed method includes a simple localization device for computational feasibility. We prove the Bayes risk consistency of the local boosting in the framework of PAC learning. Inspection of the proof provides a useful viewpoint for comparing the ordinary boosting and the...
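
One natural way to write the localization idea (a sketch of the general device, not necessarily the authors' exact construction) is to kernel-weight the boosting loss around each query point:

```latex
% Localized exponential loss around a query point x_0: a smoothing kernel
% K_h concentrates the fit on training points near x_0 (sketch only).
L_{x_0}(f) \;=\; \sum_{i=1}^{n} K_h(x_i, x_0)\, \exp\!\bigl(-y_i f(x_i)\bigr)
```

As the bandwidth h → ∞ the kernel weights become constant and the ordinary (global) boosting objective is recovered, mirroring how local likelihood reduces to ordinary likelihood.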
