Class Notes: Monte Carlo Methods Week 1, Direct Sampling

Author

  • Jonathan Goodman
Abstract

Monte Carlo means using random numbers to compute something that itself is not random. For example, suppose X is a random variable with some distribution, V(x) is some function, and we want to know A = E[V(X)]. There may be another random variable Y with another distribution, and another function W(y), so that A = E[W(Y)]. Much of the effort in Monte Carlo goes into looking for such alternative ways to estimate the same number; the alternatives may have lower variance or be easier to evaluate. Simulation, by contrast, is the process of generating random variables with the X distribution for their own sake. This is a difference between Monte Carlo and simulation.

There are many scientific problems for which Monte Carlo is among the best known solution methods. Most of these arise through the curse of dimensionality. In its simplest form, this curse is that it is impractical to create a mesh for numerical integration or for PDE solving in high dimensions. A mesh with n points on a side in d dimensions has n^d mesh points in total. This is impractical, for example, if n = 10 and d = 50.

A more general version of the curse is that it is impossible to represent a generic function f(x) in high dimensions. Consider a general polynomial in d variables. A polynomial is a linear combination of monomials. The number of monomials x_1^{k_1} · · · x_d^{k_d} of degree ≤ n is (n+d choose n). This is the number of coefficients you need to represent a general polynomial of degree n. For example, you need about 10,000 coefficients for degree 4 in 20 variables, and about thirty million coefficients for a degree 10 polynomial in 20 variables. Dynamic programming, for example, is an algorithm that requires you to represent the "value function" (for economists) or the "cost to go function" (for engineers); it is impractical except for low dimensional problems. On the other hand, if f(x) is a probability density, it may be possible to represent f using a large number of samples.
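The coefficient counts above follow from the binomial formula (n+d choose n); a quick check of the two examples, using only Python's standard library:

```python
from math import comb

def num_coefficients(n, d):
    # Number of monomials of degree <= n in d variables: C(n + d, d) = C(n + d, n).
    return comb(n + d, d)

print(num_coefficients(4, 20))   # 10626, about ten thousand
print(num_coefficients(10, 20))  # 30045015, about thirty million
```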
A sample of f is a random variable X that has f as its probability density. If X_k for k = 1, ..., N is a collection of independent samples, then A = E[V(X)] may be estimated by the sample average A ≈ (1/N) Σ_{k=1}^{N} V(X_k).
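The direct-sampling estimator can be sketched as follows. The choices of distribution and function here are illustrative, not from the notes: X ~ N(0, 1) and V(x) = x^2, so the exact answer is A = E[X^2] = 1.

```python
import random

def monte_carlo_estimate(V, sampler, N):
    # A_hat = (1/N) * sum over k of V(X_k), with X_k i.i.d. draws from f.
    return sum(V(sampler()) for _ in range(N)) / N

random.seed(0)
# Hypothetical example: X ~ N(0,1), V(x) = x^2, so A = E[V(X)] = 1.
A_hat = monte_carlo_estimate(lambda x: x * x,
                             lambda: random.gauss(0.0, 1.0),
                             N=100_000)
```

The statistical error of such an estimate shrinks like 1/sqrt(N), independent of dimension, which is what makes Monte Carlo attractive for the high dimensional problems above.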

Similar resources

Class notes: Monte Carlo methods Week 4, Markov chain Monte Carlo analysis

Error bars for MCMC are harder than for direct Monte Carlo. It is harder to estimate error bars from MCMC data, and it is harder to predict them from theory. The estimation and theory are more important because MCMC estimation errors can be much larger than you might expect based on the run time. The fundamental formula for MCMC error bars is as follows. Suppose X_k is a sequence of MCMC samples...
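The formula itself is cut off in this excerpt. One common practical recipe for correlated samples (not necessarily the one these notes develop) is batch means; a minimal sketch, using a hypothetical AR(1) chain as stand-in MCMC output:

```python
import random
import math

def batch_means_error_bar(samples, num_batches=30):
    # Split the chain into batches; batch averages are roughly independent
    # when each batch is much longer than the autocorrelation time.
    b = len(samples) // num_batches
    means = [sum(samples[i * b:(i + 1) * b]) / b for i in range(num_batches)]
    grand = sum(means) / num_batches
    var = sum((m - grand) ** 2 for m in means) / (num_batches - 1)
    return math.sqrt(var / num_batches)  # standard error of the overall mean

random.seed(1)
# Hypothetical correlated chain: AR(1), X_{k+1} = 0.9 X_k + xi_k.
x, chain = 0.0, []
for _ in range(50_000):
    x = 0.9 * x + random.gauss(0.0, 1.0)
    chain.append(x)
err = batch_means_error_bar(chain)
```

For this chain the batch-means error bar comes out several times larger than the naive i.i.d. formula would suggest, which is exactly the MCMC pitfall the abstract warns about.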

Class notes: Monte Carlo methods Week 3, Markov chain Monte Carlo

Markov chain Monte Carlo, or MCMC, is a way to sample probability distributions that cannot be sampled practically using direct samplers. Most complex probability distributions in more than a few variables are sampled in this way. For us, a stationary Markov chain is a random sequence X_1, X_2, . . ., where X_{k+1} = M(X_k, ξ_k), where M(x, ξ) is a fixed function and the inputs ξ are i.i.d. random...
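The stationary-chain form X_{k+1} = M(X_k, ξ_k) can be sketched concretely. The particular M below is an illustrative choice, not from the notes: one random-walk Metropolis step targeting the standard Gaussian density f(x) ∝ exp(-x²/2), with all the randomness packed into ξ.

```python
import random
import math

def M(x, xi):
    # One Metropolis step for the target f(x) proportional to exp(-x^2/2).
    # xi = (step, u) carries all the randomness, so X_{k+1} = M(X_k, xi_k).
    step, u = xi
    y = x + step
    if u < min(1.0, math.exp((x * x - y * y) / 2.0)):
        return y       # accept the proposed move
    return x           # reject: stay where we are

random.seed(2)
x, chain = 0.0, []
for _ in range(20_000):
    xi = (random.uniform(-1.0, 1.0), random.random())
    x = M(x, xi)
    chain.append(x)
```

Because M is a fixed function of (x, ξ) and the ξ_k are i.i.d., the chain is stationary in the sense used above, and its long-run samples follow the target density.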

Direct Simulation Monte Carlo: Theory, Methods, and Open Challenges

These lecture notes present the basic theory and methods for the Direct Simulation Monte Carlo (DSMC) algorithm. Some of the open challenges in the treatment of complex, multi-scale flows are also discussed.

Simulation Efficiency and an Introduction to Variance Reduction Methods

In these notes we discuss the efficiency of a Monte-Carlo estimator. This naturally leads to the search for more efficient estimators and towards this end we describe some simple variance reduction techniques. In particular, we describe common random numbers, control variates, antithetic variates and conditional Monte-Carlo, all of which are designed to reduce the variance of our Monte-Carlo es...
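Of the techniques listed, antithetic variates is easy to sketch: pair each uniform draw U with its mirror 1 - U, so that errors from g(U) and g(1 - U) partially cancel when g is monotone. The example integrand below is a hypothetical choice: E[e^U] = e - 1 for U ~ Uniform(0, 1).

```python
import random
import math

def antithetic_estimate(g, N):
    # Each term averages g(U) and g(1 - U); for monotone g the two are
    # negatively correlated, which lowers the variance of the estimate.
    total = 0.0
    for _ in range(N):
        u = random.random()
        total += 0.5 * (g(u) + g(1.0 - u))
    return total / N

random.seed(3)
# Hypothetical example: estimate E[e^U] = e - 1 for U ~ Uniform(0,1).
A_hat = antithetic_estimate(math.exp, 10_000)
```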

Markov Chain Monte Carlo Methods for Statistical Inference

These notes provide an introduction to Markov chain Monte Carlo methods and their applications to both Bayesian and frequentist statistical inference. Such methods have revolutionized what can be achieved computationally, especially in the Bayesian paradigm. The account begins by discussing ordinary Monte Carlo methods: these have the same goals as the Markov chain versions but can only rarely ...



Publication date: 2013