Multilevel Richardson-Romberg extrapolation
Authors
Abstract
We propose and analyze a Multilevel Richardson-Romberg (ML2R) estimator which combines the higher order bias cancellation of the Multistep Richardson-Romberg method introduced in [Pag07] and the variance control resulting from the stratification introduced in the Multilevel Monte Carlo (MLMC) method (see [Gil08, Hei01]). Thus, in standard frameworks like discretization schemes of diffusion processes, the root mean squared error (RMSE) ε > 0 can be achieved with our ML2R estimator with a global complexity of ε^(-2) log(1/ε) instead of ε^(-2) (log(1/ε))^2 with the standard MLMC method, at least when the weak error E[Y_h] − E[Y_0] of the biased implemented estimator Y_h can be expanded at any order in h and ∥Y_h − Y_0∥_2 = O(h^(1/2)). The ML2R estimator is then halfway between a regular MLMC and a virtual unbiased Monte Carlo. When the strong error ∥Y_h − Y_0∥_2 = O(h^(β/2)), β < 1, the gain of ML2R over MLMC becomes even more striking. We carry out numerical simulations to compare these estimators in two settings: vanilla and path-dependent option pricing by Monte Carlo simulation, and the less classical Nested Monte Carlo simulation.
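To make the telescoping structure underlying MLMC concrete, here is a minimal sketch of a standard MLMC estimator (not the ML2R estimator of the paper) for a toy geometric Brownian motion discretized by the Euler scheme. All function names, the model, and the per-level sample sizes are illustrative choices, not taken from the paper; the key ingredient is that fine and coarse paths at each level share the same Brownian increments, which keeps the variance of each correction term small.

```python
import numpy as np

rng = np.random.default_rng(0)

def euler_gbm(n_paths, n_steps, T=1.0, x0=1.0, mu=0.05, sigma=0.2):
    """Euler scheme for dX = mu X dt + sigma X dW; returns terminal values X_T."""
    h = T / n_steps
    x = np.full(n_paths, x0)
    for _ in range(n_steps):
        x = x + mu * x * h + sigma * x * np.sqrt(h) * rng.standard_normal(n_paths)
    return x

def mlmc_level(n_paths, level, payoff, T=1.0, x0=1.0, mu=0.05, sigma=0.2, M=2):
    """One MLMC term: E[P_0] at level 0, E[P_l - P_{l-1}] otherwise.
    Fine and coarse Euler paths are coupled by sharing Brownian increments."""
    if level == 0:
        return payoff(euler_gbm(n_paths, 1, T, x0, mu, sigma)).mean()
    nf = M ** level            # number of fine steps
    h_f = T / nf
    xf = np.full(n_paths, x0)
    xc = np.full(n_paths, x0)
    for _ in range(nf // M):   # one coarse step per M fine steps
        dw_c = 0.0
        for _ in range(M):
            dw = np.sqrt(h_f) * rng.standard_normal(n_paths)
            xf = xf + mu * xf * h_f + sigma * xf * dw
            dw_c = dw_c + dw   # aggregate fine increments for the coarse path
        h_c = M * h_f
        xc = xc + mu * xc * h_c + sigma * xc * dw_c
    return (payoff(xf) - payoff(xc)).mean()

def mlmc_estimate(L, n_paths_per_level, payoff):
    """Telescoping MLMC estimator: sum of level-wise corrections."""
    return sum(mlmc_level(n, l, payoff)
               for l, n in zip(range(L + 1), n_paths_per_level))

# Identity payoff: for this GBM, E[X_T] = x0 * exp(mu * T) = exp(0.05)
est = mlmc_estimate(4, [200000, 100000, 50000, 25000, 12000], payoff=lambda x: x)
```

The bias of the estimator is that of the finest level (here 2^4 = 16 Euler steps), while most of the simulation budget is spent on the cheap coarse levels; ML2R additionally applies Richardson-Romberg weights across levels to cancel higher-order bias terms.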
Similar works
Monotonicity in Romberg Quadrature
Monotonicity of one or more derivatives of the integrand is shown to imply a corresponding property of the approximating Romberg scheme. This is of importance in connection with error estimation by majorants [6]. The monotonicity properties are derived from an elementary study of the kernel functions involved. A possible explanation is given of the monotonicity which frequently occurs in applic...
Fundamental Methods of Numerical Extrapolation With Applications
Extrapolation is an incredibly powerful technique for increasing speed and accuracy in various numerical tasks in scientific computing. As we will see, extrapolation can transform even the most mundane of algorithms (such as the Trapezoid Rule) into an extremely fast and accurate algorithm, increasing the rate of convergence by more than one order of magnitude. Within this paper, we will first ...
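The acceleration described above can be sketched concretely: Romberg integration applies repeated Richardson extrapolation to the composite trapezoid rule, cancelling the h^2, h^4, ... terms of its error expansion. The implementation below is a minimal illustration with function names of my own choosing.

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoid rule with n subintervals: O(h^2) error."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + k * h) for k in range(1, n))
    return h * s

def romberg(f, a, b, depth=5):
    """Romberg table R[i][j]: column 0 holds trapezoid values with 2^i
    subintervals; each further column is one Richardson extrapolation step,
    cancelling the next even power of h in the error expansion."""
    R = [[trapezoid(f, a, b, 2 ** i)] for i in range(depth)]
    for i in range(1, depth):
        for j in range(1, i + 1):
            R[i].append(R[i][j - 1] + (R[i][j - 1] - R[i - 1][j - 1]) / (4 ** j - 1))
    return R[-1][-1]

# The integral of sin on [0, pi] is exactly 2; with only 16 subintervals
# the extrapolated value is already accurate to many digits, while the
# plain trapezoid rule at the same cost is off in the third decimal.
approx = romberg(math.sin, 0.0, math.pi, depth=5)
```

Each extrapolation column roughly doubles the order of convergence, which is the "more than one order of magnitude" gain the passage refers to.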
Multi-step Richardson-Romberg Extrapolation: Remarks on Variance Control and Complexity
We propose a multi-step Richardson-Romberg extrapolation method for the computation of expectations E f(X_T) of a diffusion (X_t)_{t∈[0,T]} when the weak time discretization error induced by the Euler scheme admits an expansion at an order R ≥ 2. The complexity of the estimator grows as R^2 (instead of 2^R) and its variance is asymptotically controlled by considering some consistent Brownian increments...
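The simplest instance of this idea is the one-step (R = 2) Richardson-Romberg estimator 2 E f(X^{h/2}_T) − E f(X^h_T), which cancels the leading O(h) term of the Euler scheme's weak error. The sketch below illustrates it on a toy geometric Brownian motion; the model, function names, and sample size are my own illustrative choices. The variance-control remark in the abstract corresponds to driving the coarse and fine schemes with consistent Brownian increments, which is what the pairwise aggregation of increments does here.

```python
import numpy as np

rng = np.random.default_rng(1)

def euler_terminal(n_paths, n_steps, T=1.0, x0=1.0, mu=0.05, sigma=0.2, dw=None):
    """Terminal value of the Euler scheme for GBM, optionally driven by
    prescribed Brownian increments dw of shape (n_steps, n_paths)."""
    h = T / n_steps
    x = np.full(n_paths, x0)
    for k in range(n_steps):
        inc = dw[k] if dw is not None else np.sqrt(h) * rng.standard_normal(n_paths)
        x = x + mu * x * h + sigma * x * inc
    return x

def rr_estimator(n_paths, n_steps, f):
    """One-step Richardson-Romberg: 2 E f(X^{h/2}) - E f(X^h).
    Consistent Brownian increments: the coarse path is driven by sums of
    pairs of the fine path's increments, keeping the variance controlled."""
    h_fine = 1.0 / (2 * n_steps)
    dw_fine = np.sqrt(h_fine) * rng.standard_normal((2 * n_steps, n_paths))
    dw_coarse = dw_fine[0::2] + dw_fine[1::2]
    x_fine = euler_terminal(n_paths, 2 * n_steps, dw=dw_fine)
    x_coarse = euler_terminal(n_paths, n_steps, dw=dw_coarse)
    return (2.0 * f(x_fine) - f(x_coarse)).mean()

# Identity payoff: E[X_T] = exp(0.05) for this GBM; even with only 4 coarse
# steps the extrapolation removes the first-order Euler bias.
est = rr_estimator(200000, 4, lambda x: x)
```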
Stochastic Gradient Richardson-Romberg Markov Chain Monte Carlo
Stochastic Gradient Markov Chain Monte Carlo (SG-MCMC) algorithms have become increasingly popular for Bayesian inference in large-scale applications. Even though these methods have proved useful in several scenarios, their performance is often limited by their bias. In this study, we propose a novel sampling algorithm that aims to reduce the bias of SG-MCMC while keeping the variance at a reas...
Bridging the Gap between Constant Step Size Stochastic Gradient Descent and Markov Chains
Abstract: We consider the minimization of an objective function given access to unbiased estimates of its gradient through stochastic gradient descent (SGD) with constant step-size. While the detailed analysis was only performed for quadratic functions, we provide an explicit asymptotic expansion of the moments of the averaged SGD iterates that outlines the dependence on initial conditions, the...