Search results for: adaboost
Number of results: 2456
This paper proposes a learning-scheme-based still-image super-resolution reconstruction algorithm. Super-resolution reconstruction is formulated as a binary classification problem that can be solved by conditional class probability estimation. Assuming the probability takes the form of an additive logistic regression function, the AdaBoost algorithm is used to predict the probability. Experiments on face ...
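The abstract does not give the exact expression, but in the standard additive logistic regression view of boosting, the conditional class probability is tied to the additive score $F(x)=\sum_m f_m(x)$ built up over boosting rounds by

```latex
P(y = 1 \mid x) = \frac{e^{F(x)}}{e^{F(x)} + e^{-F(x)}}
```

so AdaBoost's stagewise fitting of $F$ can be read as estimating the conditional class probability, which is what the snippet above appears to exploit.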
We first present a general risk bound for ensembles that depends on the Lp norm of the weighted combination of voters, which can be selected from a continuous set. We then propose a boosting method, called QuadBoost, which is strongly supported by the general risk bound and has very simple rules for assigning the voters' weights. Moreover, QuadBoost exhibits a rate of decrease of its empirical e...
Real AdaBoost ensembles with weighted emphasis (RA-we) on erroneous and critical (near the classification boundary) samples have recently been proposed, leading to improved performance when an adequate combination of these terms is selected. However, finding the optimal emphasis adjustment is not an easy task. In this paper, we propose to fuse the outputs of RA-we ensembles trained ...
This paper introduces a visual zebra crossing detector based on the Viola-Jones approach. The basic properties of this cascaded classifier and the use of integral images are explained. Additional pre- and post-processing steps for this task are introduced and evaluated.
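The integral image (summed-area table) mentioned above is what makes the Viola-Jones cascade fast: after one pass over the image, the sum of any rectangular region costs four lookups regardless of its size. A minimal sketch (function names are illustrative, not from the paper):

```python
import numpy as np

def integral_image(img):
    # Summed-area table with a leading row/column of zeros,
    # so region sums need no boundary checks:
    # ii[r, c] == img[:r, :c].sum()
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def region_sum(ii, r0, c0, r1, c1):
    # Sum of img[r0:r1, c0:c1] in four lookups, O(1) per query.
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]
```

Haar-like features in the cascade are then just differences of a few such region sums.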
AdaBoost is a machine learning algorithm that builds a series of small decision trees, adapting each tree to predict difficult cases missed by the previous trees and combining all trees into a single model. We will discuss the AdaBoost methodology and introduce the extension called Real AdaBoost. Real AdaBoost comes from a strong academic pedigree: its authors are pioneers of machine learning a...
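The loop described above (fit a small tree, up-weight the cases it misses, repeat, then combine) can be sketched with one-node trees, i.e. decision stumps. This is a minimal illustration of discrete AdaBoost, not the Real AdaBoost variant the snippet goes on to introduce, and the helper names are ours:

```python
import numpy as np

def fit_stump(X, y, w):
    # Exhaustively find the single-feature threshold rule
    # with the lowest weighted error.
    best = (0, 0.0, 1, np.inf)  # (feature, threshold, polarity, error)
    for f in range(X.shape[1]):
        for thr in np.unique(X[:, f]):
            for pol in (1, -1):
                pred = np.where(pol * (X[:, f] - thr) > 0, 1, -1)
                err = w[pred != y].sum()
                if err < best[3]:
                    best = (f, thr, pol, err)
    return best

def adaboost(X, y, rounds=10):
    # y must be in {-1, +1}; returns a list of (stump, alpha) pairs.
    n = len(y)
    w = np.full(n, 1.0 / n)
    model = []
    for _ in range(rounds):
        f, thr, pol, err = fit_stump(X, y, w)
        err = max(err, 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)   # vote of this weak learner
        pred = np.where(pol * (X[:, f] - thr) > 0, 1, -1)
        w *= np.exp(-alpha * y * pred)          # emphasize misclassified samples
        w /= w.sum()
        model.append(((f, thr, pol), alpha))
    return model

def predict(model, X):
    score = np.zeros(len(X))
    for (f, thr, pol), alpha in model:
        score += alpha * np.where(pol * (X[:, f] - thr) > 0, 1, -1)
    return np.where(score >= 0, 1, -1)
```

Real AdaBoost differs in that each weak learner outputs a real-valued confidence rather than a hard ±1 vote.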
We congratulate the authors on their interesting papers on boosting and related topics. Jiang deals with the asymptotic consistency of AdaBoost. Lugosi and Vayatis study the convex optimization of loss functions associated with boosting. Zhang studies the loss functions themselves. Their results imply that boosting-like methods can reasonably be expected to converge to Bayes classifiers under ...