On discrete-time semi-Markov processes
Authors
Abstract
In recent years, several authors studied a class of continuous-time semi-Markov processes obtained by time-changing Markov processes with the hitting times of independent subordinators. Such processes are governed by integro-differential convolution equations of generalized fractional type. The aim of this paper is to develop a discrete-time counterpart of such a theory and to show the relationships and differences with respect to the continuous-time case. We present a class of discrete-time chains which can be constructed as time-changed Markov chains, and we obtain the related governing convolution-type equations. These chains converge weakly to their continuous-time counterparts under suitable scaling limits.
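The construction mentioned in the abstract can be illustrated with a short simulation: a parent Markov chain is read along the inverse (hitting-time process) of an independent, strictly increasing integer-valued random walk, i.e. a discrete-time subordinator, which yields a chain with non-geometric sojourn times. The sketch below is only a minimal illustration of that idea, not the paper's construction: the Sibuya-type heavy-tailed jump law, the toy transition matrix, and the names markov_chain, discrete_subordinator, and time_changed_chain are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def markov_chain(P, x0, n_steps):
    """Simulate a Markov chain with transition matrix P started at state x0."""
    states = np.arange(P.shape[0])
    path = [x0]
    for _ in range(n_steps):
        path.append(rng.choice(states, p=P[path[-1]]))
    return np.array(path)

def discrete_subordinator(n_steps, alpha=0.6):
    """Strictly increasing random walk S_n with heavy-tailed integer jumps.

    Illustrative choice: Sibuya(alpha) jumps with
    P(J = k) = alpha * Gamma(k - alpha) / (Gamma(1 - alpha) * k!), k >= 1,
    sampled by inversion of the survival function
    P(J > k) = Gamma(k + 1 - alpha) / (Gamma(1 - alpha) * k!).
    """
    jumps = []
    for _ in range(n_steps):
        u, k, surv = rng.random(), 1, 1.0 - alpha
        while u < surv:                     # surv = P(J > k), decreasing in k
            k += 1
            surv *= (k - alpha) / k
        jumps.append(k)
    return np.cumsum(jumps)

def time_changed_chain(P, x0, horizon, alpha=0.6):
    """Semi-Markov chain Y_m = X_{L_m}, where L_m = max{n : S_n <= m}."""
    X = markov_chain(P, x0, horizon)           # parent Markov chain
    S = discrete_subordinator(horizon, alpha)  # discrete-time subordinator
    # inverse (hitting-time) process: number of renewals up to time m
    L = np.searchsorted(S, np.arange(1, horizon + 1), side="right")
    return X[L]

P = np.array([[0.5, 0.5, 0.0],
              [0.25, 0.5, 0.25],
              [0.0, 0.5, 0.5]])
print(time_changed_chain(P, x0=1, horizon=30))
```

Between renewal epochs the time-changed chain stays put, so its sojourn times inherit the heavy tail of the jump law, which is the qualitative feature the paper's convolution-type equations capture.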
Similar resources
Discrete Time Markov Processes
What follows is a quick survey of the main ingredients in the theory of discrete-time Markov processes. It is a bird's-eye view, rather than the definitive "state of the art." To maximize accessibility, the nomenclature of mathematical probability is avoided, although rigor is not sacrificed. To compensate, examples (and counterexamples) abound and the bibliography is annotated. Relevance to control ...
Discrete Time Scale Invariant Markov Processes
In this paper we consider a discrete scale invariant Markov process {X(t), t ∈ R} with scale l > 1. We consider a fixed number of observations in every scale, say T, and take our samples at the discrete points α^k, k ∈ W, where α is obtained from the equality l = α^T and W = {0, 1, . . .}. So we provide a discrete-time scale invariant Markov (DT-SIM) process X(·) with parameter space {α^k, k ∈ ...
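A minimal sketch of the sampling grid described above, assuming the relation l = α^T implied by "T observations in every scale"; the function name dtsim_sampling_points and the particular values of l, T are purely illustrative.

```python
import numpy as np

def dtsim_sampling_points(l, T, n_scales):
    """Grid alpha**k, k = 0, 1, ..., with alpha = l**(1/T),
    so each scale interval [l**j, l**(j+1)) contains T points."""
    alpha = l ** (1.0 / T)
    k = np.arange(n_scales * T + 1)
    return alpha ** k

# e.g. scale l = 2 with T = 4 observations per scale over 2 scales
print(dtsim_sampling_points(l=2.0, T=4, n_scales=2))
```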
Week 1 Discrete time Gaussian Markov processes
These are lecture notes for the class Stochastic Calculus offered at the Courant Institute in the Fall Semester of 2012. It is a graduate level class. Students should have a solid background in probability and linear algebra. The topic selection is guided in part by the needs of our MS program in Mathematics in Finance. But it is not focused entirely on the Black-Scholes theory of derivative pr...
Semi-Markov Decision Processes
Considered are infinite horizon semi-Markov decision processes (SMDPs) with finite state and action spaces. Total expected discounted reward and long-run average expected reward optimality criteria are reviewed. Solution methodology for each criterion is given; constraints and variance sensitivity are also discussed.
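As an illustration of the discounted-reward solution methodology mentioned above, here is a hedged sketch of value iteration for a small SMDP, assuming deterministic integer sojourn times so that the effective discount of taking action a in state s is gamma**tau[s, a]; the data R, P, tau and the function smdp_value_iteration are toy constructions for the example, not taken from the reference.

```python
import numpy as np

def smdp_value_iteration(R, P, tau, gamma=0.95, tol=1e-8, max_iter=10_000):
    """Value iteration for a discounted SMDP with deterministic integer
    sojourn times tau[s, a].

    R[s, a]     : expected reward of taking action a in state s
    P[s, a, s'] : transition probabilities
    """
    n_states = R.shape[0]
    V = np.zeros(n_states)
    for _ in range(max_iter):
        # Bellman operator with sojourn-time-dependent discounting
        Q = R + (gamma ** tau) * np.einsum("sat,t->sa", P, V)
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            break
        V = V_new
    return V, Q.argmax(axis=1)

# toy example: 2 states, 2 actions
R = np.array([[1.0, 0.0], [0.0, 2.0]])
P = np.array([[[0.9, 0.1], [0.5, 0.5]],
              [[0.3, 0.7], [0.1, 0.9]]])
tau = np.array([[1, 3], [2, 1]])
V, policy = smdp_value_iteration(R, P, tau)
print(V, policy)
```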
Journal
Journal title: Discrete and Continuous Dynamical Systems - Series B
Year: 2021
ISSN: 1531-3492, 1553-524X
DOI: https://doi.org/10.3934/dcdsb.2020170