Tutorial on Structured Continuous-Time Markov Processes

Authors

  • Christian R. Shelton
  • Gianfranco Ciardo
Abstract

A continuous-time Markov process (CTMP) is a collection of variables indexed by a continuous quantity, time. It obeys the Markov property that the distribution over a future variable is independent of past variables given the state at the present time. We introduce continuous-time Markov process representations and algorithms for filtering, smoothing, expected sufficient statistics calculations, and model estimation, assuming no prior knowledge of continuous-time processes but some basic knowledge of probability and statistics. We begin by describing “flat” or unstructured Markov processes and then move to structured Markov processes (those arising from state spaces consisting of assignments to variables), including Kronecker, decision-diagram, and continuous-time Bayesian network representations. We provide the first connection between decision diagrams and continuous-time Bayesian networks.

1. Tutorial Goals

This tutorial is intended for readers interested in learning about continuous-time Markov processes, and in particular compact or structured representations of them. It is assumed that the reader is familiar with general probability and statistics and has some knowledge of discrete-time Markov chains and perhaps hidden Markov model algorithms. While this tutorial deals only with Markovian systems, we do not require that all variables be observed; thus, hidden variables can be used to model long-range interactions among observations. In these models, at any given instant the assignment to all state variables is sufficient to describe the future evolution of the system. The variables themselves are indexed by real-valued (continuous) times. We consider evidence or observations that can be regularly spaced, irregularly spaced, or continuous over intervals. These evidence patterns can vary by model variable and over time. We deal exclusively with discrete-state continuous-time systems.
Real-valued variables are important in many situations, but to keep the scope manageable, we will not treat them here. We refer to the work of Särkkä (2006) for a machine-learning-oriented treatment of filtering and smoothing in such models. The literature on parameter estimation is more scattered. We will further constrain our discussion to systems with finite states, although many of the concepts can be extended to countably infinite state systems. We will be concerned with two main problems: inference and learning (parameter estimation). These were chosen as those most familiar to and applicable for researchers in artificial intelligence. At points we will also discuss the computation of steady-state properties, especially for models for which most research concentrates on this computation. ©2014 AI Access Foundation. All rights reserved.
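As a concrete illustration of the "flat" representation discussed above, a finite-state CTMP can be described by a rate (generator) matrix and sampled with the standard construction: dwell in a state for an exponentially distributed time, then jump. The sketch below is illustrative only; the 3-state matrix Q and its rates are made up for the example and are not from the paper.

```python
import numpy as np

# Hypothetical 3-state generator matrix Q: off-diagonal entries are
# transition rates, and each row sums to zero. Rates are illustrative.
Q = np.array([
    [-3.0,  2.0,  1.0],
    [ 1.0, -1.5,  0.5],
    [ 0.5,  0.5, -1.0],
])

def simulate_ctmp(Q, start, t_end, rng):
    """Sample one trajectory of a CTMP up to time t_end.

    Standard construction: dwell in state s for an Exponential(-Q[s,s])
    time, then jump to s' with probability Q[s,s'] / (-Q[s,s]).
    """
    t, s = 0.0, start
    path = [(t, s)]
    while True:
        rate = -Q[s, s]                      # total rate of leaving s
        dwell = rng.exponential(1.0 / rate)  # exponential holding time
        if t + dwell > t_end:
            break
        t += dwell
        probs = Q[s].clip(min=0.0)           # off-diagonal rates only
        probs /= probs.sum()                 # normalize to jump probabilities
        s = rng.choice(len(Q), p=probs)
        path.append((t, s))
    return path

rng = np.random.default_rng(0)
traj = simulate_ctmp(Q, start=0, t_end=5.0, rng=rng)
```

Each element of `traj` is a `(time, state)` pair; the filtering and smoothing algorithms the tutorial covers operate on such trajectories or on partial observations of them.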

Similar Articles

On $L_1$-weak ergodicity of nonhomogeneous continuous-time Markov processes

In the present paper we investigate the $L_1$-weak ergodicity of nonhomogeneous continuous-time Markov processes with general state spaces. We provide a necessary and sufficient condition for such processes to satisfy the $L_1$-weak ergodicity. Moreover, we apply the obtained results to establish $L_1$-weak ergodicity of quadratic stochastic processes.

Structured Analysis Approaches for Large Markov Chains

The tutorial introduces structured analysis approaches for continuous-time Markov chains (CTMCs), which are a means to extend the size of analyzable state spaces significantly compared with conventional techniques. It is shown how generator matrices of large CTMCs can be represented in a very compact form, how this representation can be exploited in numerical solution techniques and how numerical...
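The compact generator-matrix representation described above can be sketched with a Kronecker sum: for a system composed of independently evolving components, the joint generator is assembled from the small per-component generators rather than stored explicitly. The two 2-state generators below are hypothetical, chosen only to show the construction.

```python
import numpy as np

# Illustrative 2-state generators for two independent components;
# the rates are made up for this example.
Q1 = np.array([[-1.0, 1.0], [2.0, -2.0]])
Q2 = np.array([[-0.5, 0.5], [3.0, -3.0]])

# Kronecker sum: the generator of the joint 4-state process is
# Q = Q1 (x) I + I (x) Q2, so only the small factors need be stored;
# the full matrix is materialized here only to check the result.
I2 = np.eye(2)
Q = np.kron(Q1, I2) + np.kron(I2, Q2)
```

With n components of k states each, the factors take n*k*k numbers while the explicit joint generator needs k^(2n): this gap is what structured solution techniques exploit.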

Continuous approximation of collective system behaviour: A tutorial

In this paper we present an overview of the field of deterministic approximation of Markov processes, both in discrete and continuous time. We will discuss mean field approximation of discrete time Markov chains and fluid approximation of continuous time Markov chains, considering the cases in which the deterministic limit process lives in continuous time or discrete time. We also consider some...
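The fluid-approximation idea summarized above can be sketched with a hypothetical SIS-style epidemic: as the population grows, the infected fraction of a density-dependent CTMC approximately follows a deterministic ODE. The parameters beta and gamma below are made up for illustration, and the ODE is integrated with plain forward-Euler steps.

```python
# Fluid (mean-field) limit sketch: for an SIS epidemic the infected
# fraction x(t) approximately satisfies
#   dx/dt = beta * x * (1 - x) - gamma * x
# Illustrative parameters; the stable fixed point is x* = 1 - gamma/beta.
beta, gamma = 2.0, 1.0
x, dt = 0.1, 0.001

# Forward-Euler integration up to t = 10.
for _ in range(10_000):
    x += dt * (beta * x * (1.0 - x) - gamma * x)
```

Here x settles near 1 - gamma/beta = 0.5, the deterministic limit a large stochastic population would concentrate around.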

Processos de Decisão de Markov: um tutorial

There are situations where decisions must be made in sequence, and the result of each decision is not clear to the decision maker. These situations can be formulated mathematically as Markov decision processes, and given the probabilities of each value, it is possible to determine a policy that maximizes the expected outcome of a sequence of decisions. This tutorial explains Markov decision pro...

Solving Structured Continuous-Time Markov Decision Processes

We present an approach to solving structured continuous-time Markov decision processes. We approximate the optimal value function by a compact linear form, resulting in a linear program. The main difficulty arises from the number of constraints that grow exponentially with the number of variables in the system. We exploit the representation of continuous-time Bayesian networks (CTBNs) to de...


Journal:
  • J. Artif. Intell. Res.

Volume 51, Issue

Pages  -

Publication date: 2014