Search results for: vacuum bagging
Number of results: 49193
Many applications aim to learn a high dimensional parameter of a data generating distribution based on a sample of independent and identically distributed observations. For example, the goal might be to estimate the conditional mean of an outcome given a list of input variables. In this prediction context, bootstrap aggregating (bagging) has been introduced as a method to reduce the variance of...
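As a concrete illustration of the variance-reduction idea above, here is a minimal sketch of bagging for regression. The tree base learner and the synthetic data are assumptions for illustration only, not details taken from the abstract.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)

def bagged_predict(X_train, y_train, X_test, n_estimators=50):
    """Fit one tree per bootstrap resample and average their predictions."""
    n = len(X_train)
    all_preds = []
    for _ in range(n_estimators):
        idx = rng.integers(0, n, size=n)              # bootstrap sample (with replacement)
        tree = DecisionTreeRegressor().fit(X_train[idx], y_train[idx])
        all_preds.append(tree.predict(X_test))
    return np.mean(all_preds, axis=0)                 # variance-reducing average

# Toy data: estimate a noisy conditional mean.
X = rng.normal(size=(200, 5))
y = np.sin(X[:, 0]) + 0.3 * rng.normal(size=200)
print(bagged_predict(X, y, X[:5]))
```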
Bagging is a device intended for reducing the prediction error of learning algorithms. In its simplest form, bagging draws bootstrap samples from the training sample, applies the learning algorithm to each bootstrap sample, and then averages the resulting prediction rules. More generally, the resample size M may be different from the original sample size N, and resampling can be done...
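A hedged sketch of the M-out-of-N variant mentioned above: the resample size m need not equal n, and setting replace=False draws subsamples without replacement (subagging). The tree base learner is again an assumed stand-in.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(1)

def m_out_of_n_bagging(X, y, X_test, m, n_estimators=50, replace=True):
    """Bagging with resample size m (possibly != n); replace=False
    uses subsamples without replacement instead of bootstrap samples."""
    preds = []
    for _ in range(n_estimators):
        idx = rng.choice(len(X), size=m, replace=replace)
        model = DecisionTreeRegressor().fit(X[idx], y[idx])
        preds.append(model.predict(X_test))
    return np.mean(preds, axis=0)

X = rng.normal(size=(300, 4))
y = X[:, 0] ** 2 + rng.normal(scale=0.1, size=300)
print(m_out_of_n_bagging(X, y, X[:3], m=100, replace=False))  # subagging
```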
Machine Learning tools are increasingly being applied to analyze data from microarray experiments. These include ensemble methods where weighted votes of constructed base classifiers are used to classify data. We compare the performance of AdaBoost, bagging and BagBoost on gene expression data from the yeast cell cycle. AdaBoost was found to be more effective for the data than bagging. BagBoost...
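A rough sketch of how such a comparison might be set up with scikit-learn. The synthetic high-dimensional data is a stand-in for the yeast expression data, and BagBoost itself is not reproduced here.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for microarray data: few samples, many features.
X, y = make_classification(n_samples=120, n_features=500,
                           n_informative=20, random_state=0)

for name, clf in [("AdaBoost", AdaBoostClassifier(n_estimators=100, random_state=0)),
                  ("Bagging", BaggingClassifier(DecisionTreeClassifier(),
                                                n_estimators=100, random_state=0))]:
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: mean CV accuracy = {scores.mean():.3f}")
```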
The problem of large-scale simultaneous hypothesis testing is revisited. Bagging and subagging procedures are put forth with the purpose of improving the discovery power of the tests. The procedures are implemented in both simulated and real data. It is shown that bagging and subagging significantly improve power at the cost of a small increase in false discovery rate with the proposed ‘maximum...
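The abstract's specific 'maximum ...' rule is cut off, so the sketch below shows only the generic idea of subagging a test statistic: recompute it on subsamples drawn without replacement and average. The subsample size b = n/2 is an assumption for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

def subagged_tstat(x, b=None, n_subsamples=200):
    """Average a one-sample t-statistic over subsamples (subagging)."""
    n = len(x)
    b = b or n // 2                                   # subsample size (assumed)
    vals = [stats.ttest_1samp(x[rng.choice(n, size=b, replace=False)], 0.0).statistic
            for _ in range(n_subsamples)]
    return float(np.mean(vals))

x = rng.normal(loc=0.4, size=50)                      # one hypothetical test's data
print(subagged_tstat(x))
```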
We use amortized inference in conjunction with implicit models to approximate the bootstrap distribution over model parameters. We call this the amortized bootstrap, as statistical strength is shared across dataset replicates through a metamodel. At test time, we can then perform amortized bagging by drawing multiple samples from the implicit model. We find amortized bagging outperforms bagging...
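A heavily hedged sketch of what the test-time step might look like once a metamodel has been trained. In the paper the implicit model is learned; here the generator G is a fabricated linear stand-in, included only to show how amortized bagging averages predictions over parameter draws.

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in 'implicit model': noise -> parameter vectors of a linear predictor.
# G would be learned via amortized inference; here it is made up for illustration.
G = rng.normal(size=(8, 5))

def sample_params(n_draws):
    z = rng.normal(size=(n_draws, 8))                 # latent noise
    return z @ G                                      # approximate bootstrap draws

def amortized_bagged_predict(X, n_draws=100):
    """Average predictions over parameter samples from the implicit model."""
    thetas = sample_params(n_draws)                   # (n_draws, n_features)
    return (X @ thetas.T).mean(axis=1)                # amortized bagging

X = rng.normal(size=(10, 5))
print(amortized_bagged_predict(X))
```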
In bagging [Bre94a] one uses bootstrap replicates of the training set [Efr79, ET93] to improve a learning algorithm's performance, often by tens of percent. This paper presents several ways that stacking [Wol92b, Bre92] can be used in concert with the bootstrap procedure to achieve a further improvement on the performance of bagging for some regression problems. In particular, in some of the work ...
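One way such a combination could look, sketched under assumptions: instead of averaging the bootstrap-replicate models uniformly, learn the combination weights with a level-1 regressor fit on a held-out split. This is an illustration of the general idea, not the paper's exact scheme.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(4)

def stacked_bagging(X, y, X_test, n_estimators=20, holdout=0.25):
    """Fit trees on bootstrap replicates, then stack them with a
    ridge combiner trained on a held-out split."""
    perm = rng.permutation(len(X))
    cut = int(len(X) * (1 - holdout))
    tr, va = perm[:cut], perm[cut:]
    models = []
    for _ in range(n_estimators):
        idx = rng.choice(tr, size=len(tr), replace=True)   # bootstrap replicate
        models.append(DecisionTreeRegressor().fit(X[idx], y[idx]))
    Z_va = np.column_stack([m.predict(X[va]) for m in models])
    combiner = Ridge(alpha=1.0).fit(Z_va, y[va])           # learned weights
    Z_test = np.column_stack([m.predict(X_test) for m in models])
    return combiner.predict(Z_test)

X = rng.normal(size=(200, 3))
y = X.sum(axis=1) + rng.normal(scale=0.2, size=200)
print(stacked_bagging(X, y, X[:3]))
```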
Ensemble learning (process of combining multiple models into a single decision) is an effective tool for improving the classification performance of inductive models. While ideal for domains like bioinformatics with many challenging datasets, many ensemble methods, such as Bagging and Boosting, do not take into account the high-dimensionality (large number of features per instance) that is comm...
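One standard way to make bagging dimensionality-aware, in the spirit of the concern raised above, is to give each base learner a random subset of the features; scikit-learn's BaggingClassifier exposes this via max_features. Whether this matches the paper's own proposal is not clear from the truncated abstract.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

# High-dimensional synthetic data (many features per instance).
X, y = make_classification(n_samples=150, n_features=1000,
                           n_informative=25, random_state=0)

# Each tree sees a bootstrap sample AND a random 10% of the features.
clf = BaggingClassifier(DecisionTreeClassifier(),
                        n_estimators=100,
                        max_features=0.1,          # feature subsampling per estimator
                        bootstrap=True,
                        random_state=0).fit(X, y)
print(clf.score(X, y))
```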
Bagging is a device intended for reducing the prediction error of learning algorithms. In its simplest form, bagging draws bootstrap samples from the training sample, applies the learning algorithm to each bootstrap sample, and then averages the resulting prediction rules. Heuristically, the averaging process should reduce the variance component of the prediction error. This is supported by emp...
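The heuristic can be made precise in the idealized case of B independent, identically distributed prediction rules with common variance sigma^2(x): averaging divides the variance component by B (a standard identity, not quoted from the abstract).

```latex
\operatorname{Var}\!\left(\frac{1}{B}\sum_{b=1}^{B}\hat f_b(x)\right)
  = \frac{\sigma^2(x)}{B}
```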
Intuitively, we expect that averaging — or bagging — different regressors with low correlation should smooth their behavior and be somewhat similar to regularization. In this note we make this intuition precise. Using an almost classical definition of stability, we prove that a certain form of averaging provides generalization bounds with a rate of convergence of the same order as Tikhonov regu...
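The role of low correlation can be read off the textbook variance formula for an average of B identically distributed regressors with pairwise correlation rho(x) (again a standard identity, not the paper's bound):

```latex
\operatorname{Var}\!\left(\frac{1}{B}\sum_{b=1}^{B}\hat f_b(x)\right)
  = \rho(x)\,\sigma^2(x) + \frac{1-\rho(x)}{B}\,\sigma^2(x)
```

As B grows the second term vanishes, so the residual variance rho(x) sigma^2(x) is what decorrelating the regressors controls, consistent with the regularization-like effect the note makes precise.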