Search results for: variance reduction

Number of results: 591030

2016
Kartic Subr, Derek Nowrouzezahrai, Wojciech Jarosz, Jan Kautz, Kenny Mitchell

respectively. In either case, choosing α̂(ω) = δ(ω) results in a variance proportional to that of the integrand. These equations provide different insight into the choice of importance function for variance reduction. Eq. 2 suggests that ideally, g(x) = 1/α(x) should be chosen so that α̂(ω) contains all its energy at frequencies where the square of the integrand has no energy. Eq. 3, on the other...
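The core mechanism this snippet analyzes, picking an importance density matched to the integrand, can be sketched in a few lines. This is an illustrative 1-D example, not the paper's spectral construction; the integrand `f` and the density `p(x) = 2x` are assumptions for demonstration:

```python
import math
import random

# Illustrative sketch: importance sampling of I = integral of f over [0, 1],
# using a density p(x) roughly proportional to f, versus plain uniform sampling.

def f(x):
    return 3.0 * x * x  # integrand; its integral over [0, 1] is exactly 1

def uniform_estimate(n, rng):
    return sum(f(rng.random()) for _ in range(n)) / n

def importance_estimate(n, rng):
    # Sample from p(x) = 2x by inverse CDF: x = sqrt(u). Using 1 - u keeps
    # x strictly positive. Each sample is weighted by f(x) / p(x).
    total = 0.0
    for _ in range(n):
        x = math.sqrt(1.0 - rng.random())
        total += f(x) / (2.0 * x)
    return total / n

rng = random.Random(0)
est = importance_estimate(10_000, rng)  # near 1, with much lower variance
```

Here the weighted samples equal `1.5 * x`, whose variance under `p` is 0.125, versus 0.8 for the uniform estimator, which is the kind of reduction a well-chosen importance function buys.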

Journal: :ECEASST 2015
Cyrille Jégourel, Axel Legay, Sean Sedwards, Louis-Marie Traonouez

Rare properties remain a challenge for statistical model checking (SMC) due to the quadratic scaling of variance with rarity. We address this with a variance reduction framework based on lightweight importance splitting observers. These expose the model-property automaton to allow the construction of score functions for high-performance algorithms. The confidence intervals defined for importanc...
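The splitting idea itself can be shown on a toy rare-event problem. This is not the paper's observer or score-function machinery; the exponential model, the levels, and the per-level budget are all assumptions chosen so the decomposition is exact:

```python
import math
import random

# Toy fixed-level splitting: estimate the rare probability P(X > 4) for
# X ~ Exp(1) as a product of conditional probabilities across levels 1..4.
# Because Exp(1) is memoryless, restarting a survivor from its level is exact.

def splitting_estimate(levels, n_per_level, rng):
    p_hat = 1.0
    survivors = [0.0] * n_per_level  # all trajectories start at level 0
    for level in levels:
        next_survivors = []
        for start in survivors:
            x = start + rng.expovariate(1.0)
            if x >= level:
                next_survivors.append(level)
        p_hat *= len(next_survivors) / len(survivors)
        if not next_survivors:
            return 0.0
        # Replenish the population by cloning survivors up to the budget.
        survivors = [next_survivors[i % len(next_survivors)]
                     for i in range(n_per_level)]
    return p_hat

rng = random.Random(1)
est = splitting_estimate([1, 2, 3, 4], 2000, rng)  # true value is exp(-4)
```

Each stage estimates a probability near `exp(-1) ≈ 0.37`, so the variance per stage is benign even though the product, about 0.018, is rare; estimating it by crude Monte Carlo would need far more samples for the same relative error.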

2005
Yuk Lai Suen, Prem Melville, Raymond J. Mooney

Gradient Boosting and bagging applied to regressors can reduce the error due to bias and variance, respectively. Alternatively, Stochastic Gradient Boosting (SGB) and Iterated Bagging (IB) attempt to reduce the contributions of both bias and variance to error simultaneously. We provide an extensive empirical analysis of these methods, along with two alternate bias-variance reduction approaches — ...
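The variance-reduction half of that decomposition is easy to demonstrate. The following is a toy setup, not the paper's benchmark: a deliberately high-variance base learner (1-nearest-neighbour regression) is bagged over bootstrap resamples, and averaging the replicates lowers test error:

```python
import random

# Toy bagging demo: 1-NN regression on noisy data y = x + noise. A single
# 1-NN fit inherits the full noise variance; averaging bootstrap replicates
# shrinks the variance component of the test error.

def one_nn_predict(train, x):
    # train is a list of (x_i, y_i); return the y of the nearest x_i.
    return min(train, key=lambda p: abs(p[0] - x))[1]

def bagged_predict(train, x, n_bags, rng):
    preds = []
    for _ in range(n_bags):
        boot = [train[rng.randrange(len(train))] for _ in range(len(train))]
        preds.append(one_nn_predict(boot, x))
    return sum(preds) / n_bags

def mse(pairs):
    return sum((pred - truth) ** 2 for pred, truth in pairs) / len(pairs)

rng = random.Random(0)
train = [(i / 100, i / 100 + rng.gauss(0, 0.5)) for i in range(100)]
test_x = [i / 200 for i in range(200)]  # true target is simply x
single = mse([(one_nn_predict(train, x), x) for x in test_x])
bagged = mse([(bagged_predict(train, x, 25, rng), x) for x in test_x])
```

Bagging helps here precisely because 1-NN is unstable: each bootstrap sample picks a different nearest neighbour, so averaging smooths over independent noise realizations while adding little bias.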

Journal: :Inf. Process. Manage. 2014
Jianguo Lu, Hao Wang

Standard practice in estimating graph properties is to use uniform random node (RN) samples whenever possible. Many graphs are large and scale-free, inducing large degree variance and, hence, large estimator variance. This paper shows that random edge (RE) sampling and the corresponding harmonic mean estimator for the average degree can reduce the estimation variance significantly. First, we demonstrate that ...
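The RE estimator described here can be sketched on a toy graph (an assumption for illustration, not one of the paper's datasets). Picking a uniform random edge endpoint samples nodes proportionally to degree, so E[1/deg] = n/2m = 1/avg_degree, and the harmonic mean of the sampled degrees estimates the average degree:

```python
import random

# Random-edge (RE) sampling sketch: the harmonic mean of the degrees of
# uniformly sampled edge endpoints is an estimator of the average degree.

def average_degree_re(edges, degree, n_samples, rng):
    inv_sum = 0.0
    for _ in range(n_samples):
        u, v = edges[rng.randrange(len(edges))]
        node = u if rng.random() < 0.5 else v  # degree-proportional sampling
        inv_sum += 1.0 / degree[node]
    return n_samples / inv_sum  # harmonic mean of sampled degrees

# Toy "hub" graph: node 0 linked to nodes 1..50, plus a path 1-2-...-50,
# giving a high-variance degree distribution (one hub of degree 50).
edges = [(0, i) for i in range(1, 51)] + [(i, i + 1) for i in range(1, 50)]
degree = {}
for u, v in edges:
    degree[u] = degree.get(u, 0) + 1
    degree[v] = degree.get(v, 0) + 1

true_avg = 2 * len(edges) / len(degree)  # 2m/n
rng = random.Random(0)
est = average_degree_re(edges, degree, 5000, rng)
```

The key point matches the abstract: 1/deg is bounded in (0, 1], so its variance stays small even when the degree distribution is heavy-tailed, which is exactly where uniform node sampling struggles.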

Journal: :Physics in medicine and biology 1999
R. D. Badawi, M. P. Miller, D. L. Bailey, P. K. Marsden

In positron emission tomography (PET), random coincidence events must be removed from the measured signal in order to obtain quantitatively accurate data. The most widely implemented technique for estimating the number of random coincidences on a particular line of response is the delayed coincidence channel method. Estimates obtained in this way are subject to Poisson noise, which then propaga...

2015
James Neufeld, Dale Schuurmans, Michael H. Bowling

We present a Monte Carlo integration method, antithetic Markov chain sampling (AMCS), that incorporates local Markov transitions in an underlying importance sampler. Like sequential Monte Carlo sampling, the proposed method uses a sequence of Markov transitions to guide the sampling toward influential regions of the integrand (modes). However, AMCS differs in the type of transitions that may be...
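The classical antithetic idea that AMCS builds on can be shown in isolation. This is plain antithetic variates on a 1-D monotone integrand (my illustrative choice, not AMCS itself): pairing each uniform draw u with 1 - u induces negative correlation between the two function values and cuts variance at equal cost:

```python
import math
import random

# Antithetic variates sketch: estimate E[exp(U)] for U ~ Uniform(0, 1).
# The true value is e - 1.

def f(u):
    return math.exp(u)

def plain(n, rng):
    vals = [f(rng.random()) for _ in range(n)]
    m = sum(vals) / n
    var = sum((v - m) ** 2 for v in vals) / (n - 1)
    return m, var / n  # estimate and the variance of the estimate

def antithetic(n_pairs, rng):
    vals = []
    for _ in range(n_pairs):
        u = rng.random()
        vals.append(0.5 * (f(u) + f(1.0 - u)))  # one negatively correlated pair
    m = sum(vals) / n_pairs
    var = sum((v - m) ** 2 for v in vals) / (n_pairs - 1)
    return m, var / n_pairs

rng = random.Random(0)
m_plain, v_plain = plain(20_000, rng)
m_anti, v_anti = antithetic(10_000, rng)  # same total budget of f-evaluations
```

Because exp is monotone, Cov(f(U), f(1-U)) is strongly negative here, and the paired estimator's variance drops by more than an order of magnitude at the same number of function evaluations.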

Journal: :CoRR 2017
Tianbing Xu, Qiang Liu, Jian Peng

Recent advances in policy gradient methods and deep learning have demonstrated their applicability for complex reinforcement learning problems. However, the variance of the performance gradient estimates obtained from the simulation is often excessive, leading to poor sample efficiency. In this paper, we apply the stochastic variance reduced gradient descent (SVRG) technique [1] to model-free p...
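The SVRG correction itself is compact enough to state on a toy problem. The following is a 1-D least-squares finite sum, not the policy-gradient estimator of the paper; the data, step size, and schedule are assumptions:

```python
import random

# SVRG sketch: each inner step corrects the stochastic gradient with a full
# gradient computed at a periodic snapshot, so the gradient-estimate variance
# vanishes as the iterates approach the optimum.

data = [(x, 2.0 * x) for x in [0.5, 1.0, 1.5, 2.0, 2.5]]  # y = 2x, so w* = 2

def grad_i(w, i):
    x, y = data[i]
    return 2.0 * (w * x - y) * x  # gradient of the i-th term (w x - y)^2

def full_grad(w):
    return sum(grad_i(w, i) for i in range(len(data))) / len(data)

def svrg(w, epochs, inner, lr, rng):
    for _ in range(epochs):
        w_snap = w
        mu = full_grad(w_snap)  # full gradient at the snapshot
        for _ in range(inner):
            i = rng.randrange(len(data))
            # variance-reduced gradient: stochastic term, minus its value at
            # the snapshot, plus the full snapshot gradient
            g = grad_i(w, i) - grad_i(w_snap, i) + mu
            w -= lr * g
    return w

rng = random.Random(0)
w = svrg(0.0, epochs=20, inner=10, lr=0.05, rng=rng)
```

Unlike plain SGD, the correction term makes the update noise proportional to the distance from the snapshot, so the iterates converge to w* = 2 without a decaying step size, which is the property the paper transfers to policy gradients.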

2010
Jack P. C. Kleijnen, Ad A. N. Ridder, Reuven Y. Rubinstein

Monte Carlo methods are simulation algorithms that estimate a numerical quantity in a statistical model of a real system. These algorithms are executed by computer programs. Even though computer speed has been increasing dramatically ever since the introduction of computers, variance reduction techniques (VRT) are still needed. This increased computer power has stimulated simulation analysts to develo...
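One classical VRT from this family, control variates, fits in a few lines. The integrand E[exp(U)] and the control U are my illustrative assumptions, not taken from this work:

```python
import math
import random

# Control variates sketch: estimate E[exp(U)] for U ~ Uniform(0, 1) by
# subtracting a correlated control U, whose mean 1/2 is known exactly,
# scaled by the estimated optimal coefficient beta = Cov(U, Y) / Var(U).

def control_variate_estimate(n, rng):
    xs = [rng.random() for _ in range(n)]
    ys = [math.exp(x) for x in xs]
    my = sum(ys) / n
    mx = sum(xs) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (n - 1)
    var = sum((x - mx) ** 2 for x in xs) / (n - 1)
    beta = cov / var
    return my - beta * (mx - 0.5)  # correct using the known E[U] = 1/2

rng = random.Random(0)
est = control_variate_estimate(20_000, rng)  # true value is e - 1
```

Since U and exp(U) are almost perfectly correlated on [0, 1], the control removes roughly 98% of the variance; the residual is the part of exp(U) not linearly explained by U.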

Journal: :CoRR 2018
Zalán Borsos, Andreas Krause, Kfir Y. Levy

Modern stochastic optimization methods often rely on uniform sampling, which is agnostic to the underlying characteristics of the data. This can degrade convergence by yielding high-variance gradient estimates. A possible remedy is to employ non-uniform importance sampling techniques, which take the structure of the dataset into account. In this work, we investigate a recently pr...
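A generic version of non-uniform sampling for SGD (my sketch of the general idea, not the specific method the paper studies) draws example i with probability p_i reflecting its gradient magnitude and reweights by 1/(n p_i) to stay unbiased:

```python
import random

# Importance sampling for SGD on a toy finite sum
#   f(w) = (1/n) * sum_i c_i * (w - t_i)^2,
# where one term dominates the gradients. Sampling proportionally to c_i and
# reweighting by 1/(n p_i) keeps the gradient estimate unbiased while taming
# its variance.

def weighted_choice(probs, rng):
    r, acc = rng.random(), 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(probs) - 1

cs = [1.0, 1.0, 1.0, 10.0]  # the last term dominates
ts = [0.0, 0.0, 0.0, 1.0]
n = len(cs)
probs = [c / sum(cs) for c in cs]  # non-uniform sampling distribution

def sgd_importance(w, steps, lr, rng):
    for _ in range(steps):
        i = weighted_choice(probs, rng)
        g = 2.0 * cs[i] * (w - ts[i])  # gradient of the i-th term
        w -= lr * g / (n * probs[i])   # unbiased reweighting
    return w

rng = random.Random(0)
w = sgd_importance(0.5, steps=4000, lr=0.01, rng=rng)  # w* = 10/13
```

With this choice of p_i the reweighted per-sample gradient becomes 6.5 * (w - t_i): its magnitude no longer depends on which c_i was drawn, which is exactly the variance the uniform sampler would have paid for.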

Journal: :CoRR 2016
Chao Zhang, Zebang Shen, Hui Qian, Tengfei Zhou

Alternating Direction Method of Multipliers (ADMM) is a popular method for solving machine learning problems. Stochastic ADMM was first proposed to reduce the per-iteration computational complexity, making it more suitable for big-data problems. Recently, variance reduction techniques have been integrated with stochastic ADMM to obtain fast convergence rates, such as SAG-ADMM an...

[Chart: number of search results per year]
