Search results for: ensemble learning

Number of results: 635,149

1992

Figure 1. A stylized depiction of how to combine the two generalizers G1 and G2 via stacked generalization. A learning set L is symbolically depicted by the full ellipse. We want to guess what output corresponds to the question q. To do this we create a CVPS of L; one of these partitions is shown, splitting L into {(x, y)} and L − {(x, y)}. By training both G1 and G2 on L − {(x, y)}, asking bot...
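As a rough illustration of the procedure the caption describes, the sketch below builds a cross-validation partition of a learning set, trains two illustrative base generalizers (a k-nearest-neighbours and a decision-tree regressor, standing in for G1 and G2) on each L − {(x, y)} part, and feeds their held-out guesses to a level-1 combiner. The specific models, fold count and data are assumptions for illustration, not taken from the paper.

```python
# Sketch of stacked generalization: level-0 generalizers G1, G2 are trained on
# cross-validation partitions of the learning set L, and their out-of-fold guesses
# become the inputs of a level-1 generalizer that learns how to combine them.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import KFold
from sklearn.neighbors import KNeighborsRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.linear_model import LinearRegression

X, y = make_regression(n_samples=200, n_features=5, noise=10.0, random_state=0)

base_models = [KNeighborsRegressor(n_neighbors=5), DecisionTreeRegressor(max_depth=4)]
level1_inputs = np.zeros((len(X), len(base_models)))

# Cross-validation partition set: each point (x, y) is predicted by generalizers
# trained only on the part of L that excludes its fold.
for train_idx, hold_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    for j, model in enumerate(base_models):
        model.fit(X[train_idx], y[train_idx])
        level1_inputs[hold_idx, j] = model.predict(X[hold_idx])

# Level-1 generalizer learns to combine G1's and G2's guesses.
combiner = LinearRegression().fit(level1_inputs, y)

# At question time q, refit the base models on all of L and stack their guesses.
for model in base_models:
    model.fit(X, y)
q = X[:1]
stacked_guess = combiner.predict(np.column_stack([m.predict(q) for m in base_models]))
print("stacked guess:", stacked_guess[0], "true value:", y[0])
```

scikit-learn's StackingRegressor automates essentially this out-of-fold construction.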

2005
Surendra K. Singhi, Huan Liu

Ensemble learning is a powerful learning approach that combines multiple classifiers to improve prediction accuracy. An important decision when using an ensemble of classifiers is how to combine the predictions of its base classifiers. In this paper, we introduce a novel grading-based algorithm for model combination, which uses cost-sensitive learning in building a meta-learn...
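To make the grading idea concrete, here is a simplified sketch in the general spirit of grading-based combination: for each base classifier, a meta-level "grader" learns from out-of-fold predictions whether that classifier's prediction is correct, and test-time votes are weighted by the graders' confidence. The cost-sensitivity is only gestured at through an illustrative class_weight setting; this is not the authors' algorithm.

```python
# Simplified grading-style combiner: each base classifier gets a "grader" that
# learns, from out-of-fold predictions, whether that classifier tends to be
# correct on a given instance; test-time votes are weighted by grader confidence.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_predict, train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=600, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

bases = [GaussianNB(), DecisionTreeClassifier(max_depth=3, random_state=0)]
graders = []
for base in bases:
    # Out-of-fold predictions give an unbiased "was it correct?" target.
    oof = cross_val_predict(base, X_tr, y_tr, cv=5)
    correct = (oof == y_tr).astype(int)
    # class_weight makes the grader cost-sensitive: errors on the "incorrect"
    # grade are penalised more heavily (the weights here are illustrative only).
    grader = LogisticRegression(class_weight={0: 2.0, 1: 1.0}, max_iter=1000)
    graders.append(grader.fit(X_tr, correct))
    base.fit(X_tr, y_tr)

# Weight each base prediction by its grader's estimated probability of being correct.
votes = np.zeros((len(X_te), 2))
for base, grader in zip(bases, graders):
    pred = base.predict(X_te)
    weight = grader.predict_proba(X_te)[:, 1]
    votes[np.arange(len(X_te)), pred] += weight
print("accuracy:", (votes.argmax(axis=1) == y_te).mean())
```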

2003
Jörg D. Wichard, Christian Merkwirth

In the context of ensemble learning for regression problems, we study the effect of building ensembles from different model classes. Tests on real and simulated data sets show that this approach can improve model accuracy compared to ensembles from a single model class.
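A minimal sketch of the setting studied here: ensemble members drawn from three different model classes, combined by simple averaging on a regression task. The dataset and hyperparameters are illustrative, not those of the paper.

```python
# Sketch of an ensemble built from different model classes for regression:
# each class contributes one member and the ensemble prediction is the average.
import numpy as np
from sklearn.datasets import make_friedman1
from sklearn.model_selection import train_test_split
from sklearn.linear_model import Ridge
from sklearn.neighbors import KNeighborsRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import mean_squared_error

X, y = make_friedman1(n_samples=400, noise=1.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Three different model classes; hyperparameters are illustrative.
members = [Ridge(alpha=1.0), KNeighborsRegressor(n_neighbors=7),
           DecisionTreeRegressor(max_depth=5, random_state=0)]
preds = np.column_stack([m.fit(X_tr, y_tr).predict(X_te) for m in members])

for m, p in zip(members, preds.T):
    print(type(m).__name__, "MSE:", round(mean_squared_error(y_te, p), 2))
print("mixed-class ensemble MSE:",
      round(mean_squared_error(y_te, preds.mean(axis=1)), 2))
```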

2000
Gunnar Rätsch, Bernhard Schölkopf, Alexander J. Smola, Sebastian Mika, Takashi Onoda, Klaus-Robert Müller

2015
Zejin Ding, Yanqing Zhang

In this dissertation, the problem of learning from highly imbalanced data is studied. Imbalanced data learning is of great importance and poses a challenge in many real applications. Dealing with a minority class typically requires new concepts, observations and solutions in order to fully understand the underlying complicated models. We try to systematically review and solve this special learning task in t...

2007
Krzysztof Dembczynski, Salvatore Greco, Wojciech Kotlowski, Roman Slowinski

In this paper, we present the relationship between loss functions and confirmation measures. We show that population minimizers for weighted loss functions correspond to confirmation measures. This result can be used in the construction of machine learning methods, particularly ensemble methods.

2013
Annalina Caputo, Pierpaolo Basile, Giovanni Semeraro

This paper describes the UNIBA participation in the Semantic Textual Similarity (STS) core task 2013. We exploited three different systems for computing the similarity between two texts. One system is used as a baseline; it is the best model that emerged from our previous participation in STS 2012. This system is based on a distributional model of semantics capable of taking into account also...
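As a rough stand-in for the similarity computation such systems perform, the snippet below scores a sentence pair by the cosine of TF-IDF vectors; the actual UNIBA systems rely on a richer distributional model, so this only illustrates the general setup.

```python
# Crude text-similarity baseline: represent each text as a vector and score the
# pair by cosine similarity (TF-IDF is an illustrative stand-in for a
# distributional semantic model).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

pair = ("A man is playing a guitar on stage.",
        "Someone plays the guitar in front of an audience.")

vectors = TfidfVectorizer().fit_transform(pair)
score = cosine_similarity(vectors[0], vectors[1])[0, 0]
print("similarity:", round(score, 3))
```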

Journal: Neurocomputing, 2013
Adel Ghazikhani, Reza Monsefi, Hadi Sadoghi Yazdi

Concept drift (non-stationarity) and class imbalance are two important challenges for supervised classifiers. Concept drift refers to changes in the underlying function being learnt, while class imbalance is a large difference between the numbers of instances in the different classes of the data. Class imbalance is an obstacle to the efficiency of most classifiers. Research on cla...
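One common way to address both problems at once is a sliding-window ensemble: each incoming window trains a new member on a rebalanced sample (countering imbalance), and old members are discarded so the ensemble can track drift. The sketch below is built on assumptions of my own; the window size, oversampling scheme and member count are illustrative and not taken from the paper.

```python
# Sketch of an online ensemble for drifting, imbalanced streams: a new member is
# trained on each incoming window with the minority class oversampled, and only
# the most recent members are kept so the ensemble can follow the drift.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
ensemble, max_members = [], 5

def train_member(X_win, y_win):
    # Oversample the minority class inside the window to counter imbalance.
    minority = np.argmin(np.bincount(y_win))
    idx = np.where(y_win == minority)[0]
    extra = rng.choice(idx, size=max(0, (y_win != minority).sum() - len(idx)), replace=True)
    Xb = np.vstack([X_win, X_win[extra]])
    yb = np.concatenate([y_win, y_win[extra]])
    return DecisionTreeClassifier(max_depth=4, random_state=0).fit(Xb, yb)

def predict(X):
    votes = np.mean([m.predict(X) for m in ensemble], axis=0)
    return (votes >= 0.5).astype(int)

# Simulated drifting stream: the decision boundary shifts over time, class 1 is rare at first.
for t in range(10):
    X_win = rng.normal(size=(200, 2))
    y_win = (X_win[:, 0] + 0.3 * t > 1.5).astype(int)   # drift via a shifting threshold
    if len(np.unique(y_win)) < 2:
        continue                                         # skip degenerate windows
    ensemble.append(train_member(X_win, y_win))
    ensemble[:] = ensemble[-max_members:]                # forget old members (drift)
    print(f"window {t}: accuracy {(predict(X_win) == y_win).mean():.2f}")
```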

Journal: CIT, 2017
Samira Ellouze, Maher Jaoua, Lamia Hadrich Belguith

In this article, we propose a method for evaluating the content and linguistic quality of text summaries, based on a machine learning approach. The method combines multiple features to build predictive models that evaluate the content and the linguistic quality of new (unseen) summaries constructed from the same source documents as the summaries used in the training and the validat...
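The general setup, sketched here with hypothetical feature names and synthetic quality scores (the article's actual feature set and human judgements are richer), is a regression model that maps several summary-level features to a quality score.

```python
# Sketch: hand-crafted summary features combined in a regression model that
# predicts a quality score. Features and targets are synthetic placeholders.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 300
features = np.column_stack([
    rng.uniform(0, 1, n),   # e.g. content overlap with the source documents
    rng.uniform(0, 1, n),   # e.g. a grammaticality / language-model score
    rng.uniform(0, 1, n),   # e.g. a coherence / entity-continuity score
])
# Synthetic quality scores standing in for human judgements of training summaries.
quality = (0.5 * features[:, 0] + 0.3 * features[:, 1] + 0.2 * features[:, 2]
           + rng.normal(0, 0.05, n))

model = GradientBoostingRegressor(random_state=0)
scores = cross_val_score(model, features, quality, cv=5, scoring="r2")
print("cross-validated R^2:", scores.mean().round(3))
```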

Chart: number of search results per year