Search results for: logitboost

Number of results: 116

Journal: CoRR 2014
Sunsern Cheamanunkul, Evan Ettinger, Yoav Freund

The sensitivity of AdaBoost to random label noise is a well-studied problem. LogitBoost, BrownBoost and RobustBoost are boosting algorithms claimed to be less sensitive to noise than AdaBoost. We present the results of experiments evaluating these algorithms on both synthetic and real datasets. We compare the performance on each of the datasets when the labels are corrupted by different levels of i...
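This kind of label-noise experiment can be sketched as follows (a minimal illustration, not the authors' code). LogitBoost is not shipped with scikit-learn, so the sketch uses `GradientBoostingClassifier` with its default log-loss as a close stand-in, compared against `AdaBoostClassifier`, while a growing fraction of training labels is flipped at random:

```python
# Sketch of a label-noise sensitivity experiment, assuming scikit-learn.
# GradientBoostingClassifier (log-loss) stands in for LogitBoost.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

accs = {}
for noise in (0.0, 0.1, 0.2):            # fraction of training labels to flip
    y_noisy = y_tr.copy()
    flip = rng.random(len(y_noisy)) < noise
    y_noisy[flip] = 1 - y_noisy[flip]    # corrupt labels uniformly at random
    for clf in (AdaBoostClassifier(n_estimators=100, random_state=0),
                GradientBoostingClassifier(n_estimators=100, random_state=0)):
        name = type(clf).__name__
        accs[(noise, name)] = clf.fit(X_tr, y_noisy).score(X_te, y_te)
        print(f"noise={noise:.1f} {name}: {accs[(noise, name)]:.3f}")
```

Test accuracy is always measured against the clean labels, so the gap between the two learners as `noise` grows is the quantity of interest.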

2015
Stamatis Karlos, Nikos Fazakis, Sotiris B. Kotsiantis, Kyriakos N. Sgarbas

Semi-supervised classification methods use unlabeled data in combination with a smaller set of labeled examples in order to increase the classification rate relative to supervised methods, in which training is performed only on labeled data. In this work, a self-train LogitBoost algorithm is presented. The self-train process improves the results ...
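The self-training loop described here can be sketched with scikit-learn's `SelfTrainingClassifier` (a generic illustration, not the paper's method). The paper's base learner is LogitBoost, which scikit-learn does not provide, so `LogisticRegression` stands in; any classifier exposing `predict_proba` would work:

```python
# Minimal self-training sketch, assuming scikit-learn.
# LogisticRegression stands in for the paper's LogitBoost base learner.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import SelfTrainingClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
rng = np.random.default_rng(0)
y_partial = y.copy()
y_partial[rng.random(len(y)) < 0.9] = -1   # hide 90% of labels (-1 = unlabeled)

# Iteratively pseudo-labels unlabeled points the base learner is
# confident about (probability above `threshold`) and retrains on them.
model = SelfTrainingClassifier(LogisticRegression(max_iter=1000), threshold=0.9)
model.fit(X, y_partial)
print(model.score(X, y))  # accuracy against the true labels
```

The `threshold` controls how aggressively pseudo-labels are added; a higher value trades coverage for label quality.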

2013
Philip M. Long, Rocco A. Servedio

A consistent loss function for multiclass classification is one such that for any source of labeled examples, any tuple of scoring functions that minimizes the expected loss will have classification accuracy close to that of the Bayes optimal classifier. While consistency has been proposed as a desirable property for multiclass loss functions, we give experimental and theoretical results exhibi...

2009
Miklós Kurucz, Dávid Siklósi, István Bíró, Péter Csizsek, Zsolt Fekete, Róbert Iwatt, Tamás Kiss, Adrienn Szabó

We describe the method used in our final submission to KDD Cup 2009, as well as a selection of promising directions that are generally believed to work well but did not live up to our expectations. Our final method combines a LogitBoost and an ADTree classifier with a feature-selection method that, as shaped by the experiments we conducted, has turned out to be very differ...

2012
Shian-Chang Huang, Cheng-Feng Wu

Personal credit scoring on credit cards has been a critical issue in the banking industry. The bank with the most accurate estimation of its customers' credit quality will be the most profitable. The study aims to compare quality-prediction models from data mining methods, and to improve traditional models by using boosting and genetic algorithms (GA). The prediction models used are instance-based cl...

Thesis: Ministry of Science, Research and Technology - Shiraz University - School of Electrical and Computer Engineering, 1392

Adaptive boosting classification is a well-known and effective method for aggregating the strengths of a group of parallel weak learners; however, it suffers from high sensitivity to noisy data and from the need to train a large number of weak learners. Here, a new method is proposed to reduce the number of adaptive learners, using the Gram-Schmidt process as a new weighting scheme that orthogonalizes the distributions of all the lazy learners ...
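The Gram-Schmidt step mentioned in this abstract is the classical orthogonalization procedure; how the thesis applies it to learner weight distributions is not recoverable from the snippet, but the procedure itself can be sketched as:

```python
# Classical Gram-Schmidt orthonormalization of a set of vectors.
import numpy as np

def gram_schmidt(V):
    """Return an orthonormal basis for the rows of V."""
    basis = []
    for v in V:
        # Subtract the projections onto the basis built so far.
        w = v - sum(np.dot(v, b) * b for b in basis)
        norm = np.linalg.norm(w)
        if norm > 1e-10:            # skip (near-)dependent vectors
            basis.append(w / norm)
    return np.array(basis)

Q = gram_schmidt(np.array([[3.0, 1.0], [2.0, 2.0]]))
print(np.round(Q @ Q.T, 6))  # rows are orthonormal: Q @ Q.T ≈ I
```

For numerical work at scale, the modified Gram-Schmidt variant or a QR factorization (`np.linalg.qr`) is usually preferred for stability.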

Journal: Informatica (Slovenia) 2005
Sotiris B. Kotsiantis, Panayiotis E. Pintelas

Ensembles of simple Bayesian classifiers have traditionally not been a focus of research. The reason is that simple Bayes is an extremely stable learning algorithm, and most ensemble techniques such as bagging are mainly variance-reduction techniques, and thus unable to benefit from its integration. However, simple Bayes can be effectively used in ensemble techniques which also perform b...
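The stability argument above can be checked empirically (a minimal sketch, assuming scikit-learn and a synthetic dataset): because naive Bayes is stable, bagging it typically changes cross-validated accuracy very little.

```python
# Sketch: compare a single naive Bayes classifier against a bagged
# ensemble of the same learner, assuming scikit-learn.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
single = cross_val_score(GaussianNB(), X, y, cv=5).mean()
bagged = cross_val_score(
    BaggingClassifier(GaussianNB(), n_estimators=25, random_state=0),
    X, y, cv=5).mean()
print(f"single NB: {single:.3f}  bagged NB: {bagged:.3f}")
```

On a stable learner the two scores tend to be close, which is the abstract's point about variance-reduction techniques having little to work with.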

2008
Albert Orriols-Puig, Jorge Casillas, Ester Bernadó-Mansilla

This chapter gives insight into the use of Genetic-Based Machine Learning (GBML) for supervised tasks. Five GBML systems, which represent different learning methodologies and knowledge representations in the GBML paradigm, are selected for the analysis: UCS, GAssist, SLAVE, Fuzzy AdaBoost, and Fuzzy LogitBoost. UCS and GAssist are based on a non-fuzzy representation, while SLAVE, Fuzzy AdaBoost, an...

Journal: International Journal of Intelligent Information Systems 2013

Chart: number of search results per year
