Search results for: continuous time markov chain

Number of results: 2344467

2005
David Brydges Remco van der Hofstad

We investigate the local times of a continuous-time Markov chain on an arbitrary discrete state space. For fixed finite range of the Markov chain, we derive an explicit formula for the joint density of all local times on the range, at any fixed time. We use standard tools from the theory of stochastic processes and finite-dimensional complex calculus. We apply this formula in the following dire...

2009
Holger Hermanns Kim G Larsen Jean-Francois Raskin Alexandre David

This deliverable describes the first-year results of the QUASIMODO project on analysing quantitative systems. Keyword list: Markov chain; Markov decision process; probabilistic bisimulation; probabilistic simulations; probabilistic timed automata; priced probabilistic timed automata; continuous-time Markov chains; inhomogeneous CTMC; infinite-state CTMC; counter-example guided abstraction refine...

Morteza Khodabin

In this paper, the two-parameter ADK entropy, as a generalization of Rényi entropy, is considered and some of its properties are investigated. We will see that the ADK entropy for continuous random variables is invariant under a location transformation but not under a scale transformation of the random variable. Furthermore, the joint ADK entropy, conditional ADK entropy, and the chain rule of this ent...

2001
Olivier Cappé Christian P. Robert Tobias Rydén

Reversible jump methods are the most commonly used Markov chain Monte Carlo tool for exploring variable-dimension statistical models. Recently, however, an alternative approach based on birth-and-death processes has been proposed by Stephens for mixtures of distributions. We show that the birth-and-death setting can be generalized to include other types of continuous-time jumps like split-and-co...

2008
Karl Sigman

A Markov chain in discrete time, {Xn : n ≥ 0}, remains in any state for exactly one unit of time before making a transition (change of state). We proceed now to relax this restriction by allowing a chain to spend a continuous amount of time in any state, but in such a way as to retain the Markov property. As motivation, suppose we consider the rat in the open maze. Clearly it is more realistic ...
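The construction this excerpt describes, a chain that holds in each state for an exponentially distributed time and then jumps according to an embedded discrete-time chain, can be sketched in a few lines. The three-state rates and jump probabilities below are illustrative assumptions, not taken from the source.

```python
import random

def simulate_ctmc(rates, jump_probs, start, t_end, rng):
    """Return the list of (time, state) visits of a CTMC up to time t_end.

    rates[i] is the total exit rate of state i; jump_probs[i] are the
    embedded (discrete-time) chain's transition probabilities from i.
    """
    t, state = 0.0, start
    path = [(t, state)]
    while True:
        # Holding time in `state` is Exp(rates[state]).
        t += rng.expovariate(rates[state])
        if t >= t_end:
            break
        # Jump according to the embedded chain's row for `state`.
        state = rng.choices(range(len(rates)), weights=jump_probs[state])[0]
        path.append((t, state))
    return path

rng = random.Random(42)
rates = [1.0, 2.0, 0.5]          # hypothetical total exit rate per state
jump_probs = [[0, 0.5, 0.5],     # embedded-chain transition rows
              [0.5, 0, 0.5],
              [0.5, 0.5, 0]]
path = simulate_ctmc(rates, jump_probs, start=0, t_end=10.0, rng=rng)
```

Because the holding times are exponential (hence memoryless), the resulting process retains the Markov property even though it spends a continuous, random amount of time in each state.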

This paper is intended to provide a numerical algorithm based on random sampling for solving linear Volterra integral equations of the second kind. The method is a Monte Carlo (MC) method based on the simulation of a continuous Markov chain. To illustrate the usefulness of this technique we apply it to a test problem. Numerical results are presented in order to show the efficiency and accu...
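The abstract does not give the paper's algorithm, but the general idea of Monte Carlo solvers for second-kind Volterra equations u(x) = f(x) + ∫₀ˣ K(x,t) u(t) dt can be sketched by sampling the Neumann series with a random walk X₍ₙ₊₁₎ ~ Uniform(0, Xₙ). The kernel, right-hand side, and test problem below are assumptions for illustration only.

```python
import math
import random

def mc_volterra(f, K, x, n_walks=20000, depth=25, rng=None):
    """Monte Carlo estimate of u(x) for u(x) = f(x) + int_0^x K(x,t) u(t) dt,
    by sampling the Neumann series along random walks x > X1 > X2 > ..."""
    rng = rng or random.Random(0)
    total = 0.0
    for _ in range(n_walks):
        est, w, xn = f(x), 1.0, x
        for _ in range(depth):
            xnext = rng.uniform(0.0, xn)
            # Importance weight: interval length times kernel value,
            # so E[w * f(X_n)] equals the n-th Neumann-series term.
            w *= xn * K(xn, xnext)
            est += w * f(xnext)
            xn = xnext
        total += est
    return total / n_walks

# Test problem: u(x) = 1 + int_0^x u(t) dt, exact solution u(x) = e^x.
approx = mc_volterra(lambda t: 1.0, lambda x, t: 1.0, 1.0)
print(approx)  # should be close to e ~ 2.71828
```

Truncating the walk at a fixed depth is the simplest stopping rule; the tail of the Neumann series for this test problem decays factorially, so the truncation bias is negligible next to the sampling error.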

2000
Christel Baier Boudewijn R. Haverkort Holger Hermanns Joost-Pieter Katoen

Markov-reward models, as extensions of continuous-time Markov chains, have received increased attention for the specification and evaluation of performance and dependability properties of systems. Until now, however, the specification of reward-based performance and dependability measures has been done manually and informally. In this paper, we change this undesirable situation by the introduct...

Chart of the number of search results per year

Click on the chart to filter results by publication year