Kernel Selection for Convergence and Efficiency in Markov Chain Monte Carlo

Authors

  • Christopher C. J. Potter
  • Jack Schaeffer
  • Michael Widom
Abstract

Markov Chain Monte Carlo (MCMC) is a technique for sampling from a target probability distribution, and it has grown in importance as faster computing hardware has made it possible to explore hitherto difficult distributions. Unfortunately, this powerful technique is often misapplied through poor selection of the transition kernel for the Markov chain generated by the simulation. Some kernels are used without being checked against the convergence requirements for MCMC (total balance and ergodicity). In this work we prove the existence of a simple proxy for total balance that is less demanding than detailed balance, the most widely used criterion. We show that, for discrete-state MCMC, if a transition kernel is equivalent when it is "reversed" and applied to data that are also "reversed", then it satisfies total balance. We go on to prove that the sequential single-variable update Metropolis kernel, in which variables are simply updated in order, does indeed satisfy total balance for many discrete target distributions, such as the Ising model with uniform exchange constant.

In addition, two well-known papers by Gelman, Roberts, and Gilks (GRG) [1, 2] proposed applying the result of an interesting mathematical proof to the practical optimization of MCMC computer simulations. In particular, they advocated tuning the simulation parameters to obtain an acceptance ratio of 0.234. In this paper we point out that, although the proof is valid, applying its result to practical computations is not advisable, because the simulation algorithm considered in the proof is so inefficient that it produces very poor results under all circumstances. The algorithm used by Gelman, Roberts, and Gilks is also shown to introduce subtle time-dependent correlations into the simulation of intrinsically independent variables. These correlations are of particular interest since they will be present in all simulations that use multi-dimensional MCMC moves.
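To make the kernel in question concrete, the following Python sketch implements a sequential single-spin Metropolis sweep for a 2D Ising model with uniform exchange constant J, visiting sites in a fixed order rather than choosing them at random. The lattice size, temperature, and all function names are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch (not the paper's code) of a sequential single-variable
# Metropolis kernel for a 2D Ising model with uniform exchange constant J.
import numpy as np

def sequential_metropolis_sweep(spins, beta, J=1.0, rng=None):
    """One deterministic sweep of single-spin Metropolis updates.

    Sites are visited in a fixed raster order rather than being chosen at
    random, which is the 'sequential single-variable update' kernel the
    abstract refers to. `spins` is an L x L array of +/-1 values."""
    rng = rng or np.random.default_rng()
    L = spins.shape[0]
    for i in range(L):
        for j in range(L):
            # Sum of the four nearest neighbours with periodic boundaries.
            nn = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
                  + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
            # Energy change of flipping spin (i, j) for H = -J * sum_<ij> s_i s_j.
            dE = 2.0 * J * spins[i, j] * nn
            # Metropolis acceptance: downhill moves always, uphill moves with
            # probability exp(-beta * dE).
            if dE <= 0.0 or rng.random() < np.exp(-beta * dE):
                spins[i, j] = -spins[i, j]
    return spins

# Usage: a short run on a 16 x 16 lattice (sizes and beta are arbitrary).
rng = np.random.default_rng(0)
spins = rng.choice([-1, 1], size=(16, 16))
for _ in range(200):
    sequential_metropolis_sweep(spins, beta=0.5, rng=rng)
print("magnetisation per site:", spins.mean())
```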


Related articles

A Stochastic algorithm to solve multiple dimensional Fredholm integral equations of the second kind

In the present work, a new stochastic algorithm is proposed to solve multiple dimensional Fredholm integral equations of the second kind. The solution of the integral equation is described by the Neumann series expansion. Each term of this expansion can be considered as an expectation which is approximated by a continuous Markov chain Monte Carlo method. An algorithm is proposed to sim...
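Although the summary above is truncated, the construction it outlines (expanding the solution of a second-kind Fredholm equation in a Neumann series and estimating each term as an expectation along a random walk) can be illustrated. The kernel K, the weight lam, the source term g, the walk depth, and the number of walks in the sketch below are illustrative assumptions, not details taken from the cited paper.

```python
# Minimal sketch of a Neumann-series random-walk estimator for
# f(x) = g(x) + lam * integral_0^1 K(x, y) f(y) dy  (second-kind Fredholm).
import numpy as np

def neumann_mc(x, K, g, lam, n_walks=20000, depth=20, rng=None):
    """Estimate f(x) by averaging truncated Neumann-series walks.

    Each walk starts at x, moves with uniform transitions on [0, 1], and
    accumulates the multiplicative weight lam * K(y_prev, y_next) at every
    step; the running weight times g(y) estimates the corresponding term."""
    rng = rng or np.random.default_rng()
    total = 0.0
    for _ in range(n_walks):
        y, w, acc = x, 1.0, g(x)       # zeroth-order term: g(x)
        for _ in range(depth):
            y_next = rng.random()      # uniform transition density on [0, 1]
            w *= lam * K(y, y_next)    # weight for the next series term
            acc += w * g(y_next)
            y = y_next
        total += acc
    return total / n_walks

# Check against a case with a known answer: K(x, y) = x*y, g(x) = x, lam = 0.5
# gives the exact solution f(x) = 1.2 * x.
est = neumann_mc(0.5, K=lambda x, y: x * y, g=lambda x: x, lam=0.5)
print(est, "vs exact", 1.2 * 0.5)
```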


A cautionary tale on the efficiency of some adaptive Monte Carlo Schemes

There is a growing interest in the literature for adaptive Markov Chain Monte Carlo methods based on sequences of random transition kernels {Pn} where the kernel Pn is allowed to have an invariant distribution πn not necessarily equal to the distribution of interest π (target distribution). These algorithms are designed such that as n → ∞, Pn converges to P, a kernel that has the correct invar...


Selection of a MCMC simulation strategy via an entropy convergence criterion

In MCMC methods, such as the Metropolis-Hastings (MH) algorithm, the Gibbs sampler, or recent adaptive methods, many different strategies can be proposed, often associated in practice to unknown rates of convergence. In this paper we propose a simulation-based methodology to compare these rates of convergence, grounded on an entropy criterion computed from parallel (i.i.d.) simulated Markov chai...


Parallel hierarchical sampling: a practical multiple-chains sampler for Bayesian model selection

This paper introduces the parallel hierarchical sampler (PHS), a Markov chain Monte Carlo algorithm using several chains simultaneously. The connections between PHS and the parallel tempering (PT) algorithm are illustrated, convergence of the PHS joint transition kernel is proved, and its practical advantages are emphasized. We illustrate the inferences obtained using PHS, parallel tempering and...


Strategies for Speeding Markov Chain Monte Carlo Algorithms

Markov chain Monte Carlo (MCMC) methods have become popular as a basis for drawing inference from complex statistical models. Two common difficulties with MCMC algorithms are slow convergence and long run-times, which are often closely related. Algorithm convergence can often be aided by careful tuning of the chain's transition kernel. In order to preserve the algorithm's stationary distribution,...




Publication date: 2015