Search results for: boosting
Number of results: 14818
Boosting approaches are based on the idea that high-quality learning algorithms can be formed by repeated use of a “weak-learner”, which is required to perform only slightly better than random guessing. It is known that Boosting can lead to drastic improvements compared to the individual weak-learner. For two-class problems it has been shown that the original Boosting algorithm, called AdaBoost...
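To make the weak-learner idea concrete, here is a minimal sketch of two-class (discrete) AdaBoost in Python; the decision-stump weak learner, the number of rounds, and labels in {-1, +1} are illustrative assumptions rather than details from the cited paper.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def adaboost_fit(X, y, n_rounds=50):
    """Discrete AdaBoost for labels y in {-1, +1}; returns (stumps, alphas)."""
    n = len(y)
    w = np.full(n, 1.0 / n)                          # start with uniform sample weights
    stumps, alphas = [], []
    for _ in range(n_rounds):
        stump = DecisionTreeClassifier(max_depth=1)  # weak learner: a single split
        stump.fit(X, y, sample_weight=w)
        pred = stump.predict(X)
        err = np.clip(w[pred != y].sum(), 1e-10, 1 - 1e-10)  # weighted error (w sums to 1)
        alpha = 0.5 * np.log((1 - err) / err)        # vote weight of this round's stump
        w *= np.exp(-alpha * y * pred)               # up-weight misclassified examples
        w /= w.sum()
        stumps.append(stump)
        alphas.append(alpha)
    return stumps, alphas

def adaboost_predict(stumps, alphas, X):
    """Sign of the weighted vote of all weak learners."""
    agg = sum(a * s.predict(X) for a, s in zip(alphas, stumps))
    return np.sign(agg)
```

Each round the stump only needs to beat random guessing (weighted error below 0.5) for its vote weight to be positive, which is exactly the weak-learner requirement described above.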
Boosting is one of the most significant advances in machine learning for classification and regression. In its original and computationally flexible version, boosting seeks to minimize an empirical loss function in a greedy fashion. The resulting estimator takes an additive functional form and is built iteratively by applying a base estimator (or learner) to updated samples depending on the previ...
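As a rough illustration of this greedy, stage-wise loss minimization, the sketch below fits a shallow regression tree (an assumed base learner) to the current residuals under squared-error loss each round and accumulates the additive model with a small learning rate; the tree depth, round count, and step size are arbitrary illustrative choices.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def l2_boost_fit(X, y, n_rounds=100, lr=0.1):
    """Greedy stage-wise boosting under squared-error loss."""
    f0 = y.mean()                             # constant initial estimator
    resid = y - f0                            # "updated samples": the current residuals
    learners = []
    for _ in range(n_rounds):
        tree = DecisionTreeRegressor(max_depth=2)
        tree.fit(X, resid)                    # base learner fit to the residuals
        resid = resid - lr * tree.predict(X)  # greedy step reducing the empirical loss
        learners.append(tree)
    return f0, lr, learners

def l2_boost_predict(f0, lr, learners, X):
    """Additive form: a constant plus a shrunken sum of base learners."""
    return f0 + lr * sum(t.predict(X) for t in learners)
```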
With the rapid growth of the credit industry and its increasingly intense competition, corporate credit risk prediction is becoming more important for credit-granting institutions. ...
OBJECTIVES: In clinical medicine, the accuracy achieved by classification rules is often not sufficient to justify their use in daily practice. In order to improve classifiers, it has become popular to combine single classification rules into a classification ensemble. Two popular boosting methods will be compared with classical statistical approaches. METHODS: Using data from a clinical study o...
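A hypothetical sketch of this kind of comparison, with synthetic data standing in for the clinical study (which is not available here): two boosting ensembles against a classical statistical baseline, scored by cross-validated accuracy.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in data; in the study this would be the clinical dataset.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

models = {
    "logistic regression (classical)": LogisticRegression(max_iter=1000),
    "AdaBoost": AdaBoostClassifier(n_estimators=100, random_state=0),
    "gradient boosting": GradientBoostingClassifier(random_state=0),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    print(f"{name}: mean CV accuracy {scores.mean():.3f}")
```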
High dimensionality and class imbalance are two main problems that affect the quality of training datasets in software defect prediction, resulting in inefficient classification models. Feature selection and data sampling are often used to overcome these problems. Feature selection is a process of choosing the most important attributes from the original data set. Data sampling alters the data s...
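A minimal sketch of the two preprocessing steps named above, assuming a NumPy feature matrix X and binary integer labels y: univariate feature selection to cut dimensionality, followed by random oversampling of the minority class; k=20 and the random seed are arbitrary illustrative choices.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif

def select_and_balance(X, y, k=20, seed=0):
    rng = np.random.default_rng(seed)

    # Feature selection: keep the k attributes with the strongest univariate signal.
    X_sel = SelectKBest(f_classif, k=k).fit_transform(X, y)

    # Data sampling: randomly oversample the minority class up to the majority size.
    classes, counts = np.unique(y, return_counts=True)
    minority = classes[np.argmin(counts)]
    deficit = counts.max() - counts.min()
    extra = rng.choice(np.where(y == minority)[0], size=deficit, replace=True)
    idx = np.concatenate([np.arange(len(y)), extra])
    return X_sel[idx], y[idx]
```

Either step could equally be replaced by wrapper-based selection or by undersampling the majority class; the sketch only shows the overall shape of such a pipeline.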
An ensemble consists of a set of individually trained classifiers (such as neural networks or decision trees) whose predictions are combined when classifying novel instances. Previous research has shown that an ensemble is often more accurate than any of the single classifiers in the ensemble. Bagging (Breiman, 1996c) and Boosting (Freund & Schapire, 1996; Schapire, 1990) are two relatively new...
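For contrast with the boosting sketches above, here is a short bagging sketch in the same spirit, assuming scikit-learn decision trees as the individually trained classifiers and non-negative integer class labels: each member is trained on a bootstrap resample and the ensemble predicts by majority vote.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def bagging_fit(X, y, n_members=25, seed=0):
    rng = np.random.default_rng(seed)
    members = []
    for _ in range(n_members):
        idx = rng.integers(0, len(y), size=len(y))   # bootstrap sample (with replacement)
        members.append(DecisionTreeClassifier().fit(X[idx], y[idx]))
    return members

def bagging_predict(members, X):
    votes = np.stack([m.predict(X) for m in members])   # shape (n_members, n_samples)
    # Majority vote per sample; assumes non-negative integer class labels.
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
```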
Boosting is one of the most significant developments in machine learning. This paper studies the rate of convergence of L2Boosting, which is tailored for regression, in a high-dimensional setting. Moreover, we introduce so-called “post-Boosting”. This is a post-selection estimator which applies ordinary least squares to the variables selected in the first stage by L2Boosting. Another variant is...
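A hedged sketch of componentwise L2Boosting and the post-selection refit described above: boosting repeatedly picks the single covariate that best fits the current residual, and "post-Boosting" then re-runs ordinary least squares on the selected columns only. The step size nu and the number of rounds are assumptions, and the snippet presumes centered columns of X.

```python
import numpy as np

def componentwise_l2boost(X, y, n_rounds=200, nu=0.1):
    """Componentwise L2Boosting; returns boosted coefficients and selected columns."""
    n, p = X.shape
    beta = np.zeros(p)
    resid = y - y.mean()
    selected = set()
    for _ in range(n_rounds):
        coefs = X.T @ resid / np.sum(X**2, axis=0)           # per-column least-squares fits
        sse = [np.sum((resid - coefs[j] * X[:, j])**2) for j in range(p)]
        j = int(np.argmin(sse))                              # best single covariate this round
        beta[j] += nu * coefs[j]
        resid = resid - nu * coefs[j] * X[:, j]
        selected.add(j)
    return beta, sorted(selected)

def post_boosting_ols(X, y, selected):
    """Post-Boosting: ordinary least squares on the variables chosen in the first stage."""
    coef, *_ = np.linalg.lstsq(X[:, selected], y - y.mean(), rcond=None)
    return coef
```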
Boosting and Bagging, as two representative approaches to learning classifier committees, have demonstrated great success, especially for decision tree learning. They repeatedly build different classifiers using a base learning algorithm by changing the distribution of the training set. Sasc, as a different type of committee learning method, can also significantly reduce the error rate of decision t...
Heterologous priming-boosting vaccination regimens involving priming with plasmid DNA antigen constructs and inoculating (boosting) with the same recombinant antigen expressed in replication-attenuated poxviruses have recently been demonstrated to induce immunity, based on CD4(+)- and CD8(+)-T-cell responses, against several diseases in both rodents and primates. We show that similar priming-bo...
Several rating agencies, such as Standard & Poor's (S&P), Moody's and Fitch Ratings, evaluate firms' credit ratings. Since the agencies charge substantial fees and their ratings sometimes fail to reflect firms' default risk in a timely manner, it can be helpful for stakeholders if credit ratings can be predicted before the agencies publish them. However, it is not easy to make an accurate predicti...