Search results for: control variates
Number of results: 1,329,770
We present a general control variate method for Monte Carlo estimation of the expectations of the functionals of Lévy processes. It is based on fast numerical inversion of the cumulative distribution functions and exploits the strong correlation between the increments of the original process and Brownian motion. In the suggested control variate framework, a similar functional of Brownian motion...
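The snippet above pairs a payoff functional with a correlated functional of Brownian motion whose expectation is known. A minimal sketch of that generic pairing, using a call payoff on geometric Brownian motion with the (martingale) terminal value as the control; the parameter values and variable names are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000
T, s0, sigma, K = 1.0, 100.0, 0.2, 100.0

# Terminal GBM value; with zero drift E[sT] = s0, so sT can serve as a control.
z = rng.standard_normal(n)
sT = s0 * np.exp(-0.5 * sigma**2 * T + sigma * np.sqrt(T) * z)
payoff = np.maximum(sT - K, 0.0)

# Variance-minimizing coefficient beta* = Cov(payoff, sT) / Var(sT).
beta = np.cov(payoff, sT)[0, 1] / np.var(sT)
plain = payoff.mean()
cv = np.mean(payoff - beta * (sT - s0))
print(plain, cv)  # both near 7.97 (the r = 0 Black-Scholes value); cv has far lower variance
```

The strong correlation between the payoff and the control is what drives the variance reduction, mirroring the role the correlated Brownian increments play in the Lévy setting described above.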
Quasi-Monte Carlo (QMC) methods have begun to displace ordinary Monte Carlo (MC) methods in many practical problems. It is natural and obvious to combine QMC methods with traditional variance reduction techniques used in MC sampling, such as control variates. There can, however, be some surprises. The optimal control variate coefficient for QMC methods is not in general the same as for MC. Usin...
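For reference, the plain-MC optimal coefficient mentioned above is beta* = Cov(f, g) / Var(g); the abstract's point is that this choice is generally not optimal under QMC sampling. A small self-contained sketch of the plain-MC version, with an assumed integrand e^U and control U whose mean 1/2 is known exactly:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
u = rng.random(n)

f = np.exp(u)   # target: E[e^U] = e - 1
g = u           # control: E[U] = 1/2, known in closed form

# Plain-MC optimal coefficient; under QMC the best coefficient can differ.
beta = np.cov(f, g)[0, 1] / np.var(g)
cv_estimate = np.mean(f - beta * (g - 0.5))
print(cv_estimate)  # close to e - 1
```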
Our goal is to improve variance reducing stochastic methods through better control variates. We first propose a modification of SVRG which uses the Hessian to track gradients over time, rather than to recondition, increasing the correlation of the control variates and leading to faster theoretical convergence close to the optimum. We then propose accurate and computationally efficient approxima...
This study estimates the value of the early exercise premium in American put option prices using Swedish equity options data. The value of the premium is found as the deviation of the American put price from European put-call parity, and in addition a theoretical estimate of the premium is computed. The empirically found premium is also used in a modified version of the control variate approach...
Particle-in-cell methods combined with a δf approach constitute an established and powerful method for simulating collisionless kinetic equations in e.g. plasma physics. Including collisions in such simulations requires a modified approach leading to a two-weight scheme, which has the drawback of giving a statistical error that increases with time. As in the collisionless case, this scheme can ...
We present and analyze several strategies for improving the performance of stochastic variance-reduced gradient (SVRG) methods. We first show that the convergence rate of these methods can be preserved under a decreasing sequence of errors in the control variate, and use this to derive variants of SVRG that use growing-batch strategies to reduce the number of gradient calculations required in t...
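The SVRG estimator these abstracts build on is itself a control variate: the stochastic gradient at a snapshot point is subtracted and its exact mean (the full gradient at the snapshot) added back, leaving an unbiased estimator whose variance shrinks near the optimum. A minimal sketch on an assumed least-squares problem (problem data, step size, and epoch counts are illustrative choices, not from the papers):

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 200, 5
A = rng.standard_normal((n, d))
x_true = rng.standard_normal(d)
b = A @ x_true                      # noiseless, so the optimum is x_true

def full_grad(w):
    return A.T @ (A @ w - b) / n

def grad_i(w, i):
    return A[i] * (A[i] @ w - b[i])

w = np.zeros(d)
lr = 0.02
for epoch in range(30):
    w_snap = w.copy()
    mu = full_grad(w_snap)          # exact mean of the control variate
    for _ in range(n):
        i = rng.integers(n)
        # Control-variate gradient: unbiased; variance -> 0 as w -> optimum.
        g = grad_i(w, i) - grad_i(w_snap, i) + mu
        w -= lr * g

print(np.linalg.norm(w - x_true))  # converges to (near) zero
```

The growing-batch idea in the abstract above replaces the exact `full_grad` snapshot with an increasingly accurate batch estimate, trading gradient evaluations against a controlled error in the control variate.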
We tackle the issue of finding a good policy when the number of policy updates is limited. This is done by approximating the expected policy reward as a sequence of concave lower bounds which can be efficiently maximized, drastically reducing the number of policy updates required to achieve good performance. We also extend existing methods to negative rewards, enabling the use of control variates.
Off-policy model-free deep reinforcement learning methods using previously collected data can improve sample efficiency over on-policy policy gradient techniques. On the other hand, on-policy algorithms are often more stable and easier to use. This paper examines, both theoretically and empirically, approaches to merging on- and off-policy updates for deep reinforcement learning. Theoretical resu...
In this paper we propose a novel and practical variance reduction approach for additive functionals of dependent sequences. Our approach combines the use of control variates with the minimization o...
We propose a simulation algorithm to estimate means, variances, and covariances for a set of order statistics from inverse-Gaussian (IG) distributions. Given a set of Monte Carlo data, the algorithm estimates these values simultaneously. Two types of control variates are used: internal uniform and external exponential. Simulation results show that exponential control variates work better, best ...
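A hedged sketch of the "internal uniform" control-variate idea described above: the sorted uniforms underlying the sample have order statistics with exactly known means i/(m+1), so they can correct the order-statistic estimates. The inverse-Gaussian quantile is replaced here by the closed-form exponential quantile purely for illustration; that substitution, and all parameter values, are assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
m, reps = 10, 20_000                 # sample size per replication, replications

u = np.sort(rng.random((reps, m)), axis=1)  # uniform order statistics U_(1..m)
x = -np.log1p(-u)                           # exponential(1) order statistics

k = 4                                        # estimate E[X_(5)] (0-indexed k)
f = x[:, k]
g = u[:, k]
g_mean = (k + 1) / (m + 1)                   # exact mean of U_(k+1) out of m

beta = np.cov(f, g)[0, 1] / np.var(g)
est = np.mean(f - beta * (g - g_mean))
print(est)  # near H_10 - H_5 = 1/6 + ... + 1/10 ≈ 0.6456 for exp(1)
```

The "external" controls in the abstract instead come from an auxiliary exponential sample, which the study found to perform better for the IG case.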