Search results for: bootstrap aggregating
Number of results: 18,325
We present a novel approach to learn binary classifiers when only positive and unlabeled instances are available (PU learning). This problem is routinely cast as a supervised task with label noise in the negative set. We use an ensemble of SVM models trained on bootstrap subsamples of the training data for increased robustness against label noise. The approach can be considered in a bagging fra...
Verification problems are usually posed as a two-class problem, and the objective is to verify whether an observation belongs to a class, say, A, or its complement A'. However, we find that in a computer-assisted language learning application, because of the relatively low reliability of phoneme verification (an equal error rate of more than 30%), a system built on conventional phoneme verificati...
Inductive learning searches for an optimal hypothesis that minimizes a given loss function. It is usually assumed that the simplest hypothesis that fits the data is the best approximation to an optimal hypothesis. Since finding the simplest hypothesis is NP-hard for most representations, we generally employ various heuristics to search for its closest match. Computing these heuristics incurs significant ...
The problem of combining predictors to increase accuracy (often called ensemble learning) has been studied broadly in the machine learning community for both classification and regression tasks. The design of an ensemble is based on the individual accuracy of the predictors and also how different they are from one another. There is a significant body of literature on how to design and measure d...
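One of the simplest ways to quantify how "different" two ensemble members are, as discussed in this abstract, is the pairwise disagreement rate: the fraction of inputs on which their predictions differ. A minimal sketch, assuming two decision trees trained on bootstrap samples of synthetic data:

```python
# Pairwise disagreement as a diversity measure between two ensemble
# members. Toy illustration; the literature defines many richer measures.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=300, random_state=1)
rng = np.random.default_rng(1)

preds = []
for seed in (0, 1):
    idx = rng.integers(0, len(X), len(X))          # bootstrap sample
    tree = DecisionTreeClassifier(random_state=seed).fit(X[idx], y[idx])
    preds.append(tree.predict(X))

disagreement = np.mean(preds[0] != preds[1])
print(f"pairwise disagreement: {disagreement:.3f}")
```

A disagreement of 0 means the two members are redundant; designing the ensemble involves trading this diversity off against each member's individual accuracy.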
We consider Bayesian mixture approaches, where a predictor is constructed by forming a weighted average of hypotheses from some space of functions. While such procedures are known to lead to optimal predictors in several cases, where sufficiently accurate prior information is available, it has not been clear how they perform when some of the prior assumptions are violated. In this paper we esta...
We propose a new method for training an ensemble of neural networks. A population of networks is created and maintained such that more probable networks replicate and less probable networks vanish. Each individual network is updated using random weight changes. This produces a diversity among the networks which is important for the ensemble prediction using the population. The method is compare...
It is widely believed that the prediction accuracy of decision tree models is invariant under any strictly monotone transformation of the individual predictor variables. However, this statement may be false when predicting new observations with values that were not seen in the training set and are close to the location of the split point of a tree rule. The sensitivity of the prediction error t...
In this work, we propose the use of support vector regression ensembles for wind power prediction. Ensemble methods often yield better classification and regression accuracy than classical machine learning algorithms and reduce the computational cost. In the field of wind power generation, the integration into the smart grid is only possible with a precise forecast computed in a reasonable time...
We present a theoretical and empirical comparative analysis of the two dominant categories of approaches in Chinese word segmentation: word-based models and character-based models. We show that, in spite of similar overall performance, the two models produce different distributions of segmentation errors, in a way that can be explained by theoretical properties of the two models. The analysis is...
This paper proposes a method for constructing ensembles of decision trees: GRASP Forest. This method uses the metaheuristic GRASP, usually used in optimization problems, to increase the diversity of the ensemble. While Random Forest increases the diversity by randomly choosing a subset of attributes in each tree node, GRASP Forest takes into account all the attributes, the source of randomness ...
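The Random Forest mechanism this abstract contrasts with, choosing a random subset of attributes at each tree node, is exposed in scikit-learn through the `max_features` parameter. GRASP Forest itself is not in scikit-learn, so this sketch only shows the Random Forest baseline on toy data:

```python
# Random Forest's per-node attribute subsampling, the diversity source
# that GRASP Forest replaces. Toy data; only the baseline is shown.
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=300, n_features=20, random_state=0)
rf = RandomForestClassifier(
    n_estimators=50,
    max_features="sqrt",   # each split considers a random subset of attributes
    random_state=0,
).fit(X, y)
print(rf.score(X, y))
```

With `max_features="sqrt"`, each split examines only about sqrt(20) ≈ 4 candidate attributes, which is what decorrelates the trees; GRASP Forest instead keeps all attributes in play and injects randomness through the GRASP metaheuristic's candidate-list construction.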