Search results for: markov decision process graph theory
Number of results: 2,385,831
In this paper we compare ruin functions for two risk processes with respect to stochastic ordering, stop-loss ordering and ordering of adjustment coefficients. The risk processes are as follows: in the Markov-modulated environment and the associated averaged compound Poisson model. In the latter case the arrival rate is obtained by averaging over time the arrival rate in the Markov-modulated mode...
Cognitive assistive technologies that aid people with dementia (such as Alzheimer’s disease) hold the promise to provide such people with an increased level of independence. However, to realize this promise, such systems must account for the specific needs and preferences of individuals. We argue that this form of customization requires a sequential, decision-theoretic model of interaction. We ...
We present a nonparametric prior over reversible Markov chains. We use completely random measures, specifically gamma processes, to construct a countably infinite graph with weighted edges. By enforcing symmetry to make the edges undirected we define a prior over random walks on graphs that results in a reversible Markov chain. The resulting prior over infinite transition matrices is closely re...
This paper describes an analytical study of open two-node (tandem) network models with blocking and truncation. The study is based on semi-Markov process theory, and network models assume that multiple servers serve each queue. Tasks arrive at the tandem in a Poisson fashion at the rate λ, and the service times at the first and the second node are nonexponentially distributed with means s and s...
Simple Stochastic Games (combining nondeterminism and probability). 1 Definitions. Markov Chain (MC). Definition 1. A Markov Chain ((S, E), δ) is a graph (S, E) with a function δ : S → D(S) that maps every state to a probability distribution over successor states. There is an edge between two states s, t in S iff the probability of going from s to ...
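Definition 1 above can be sketched concretely (this is an illustrative reconstruction, not code from the cited notes; the two-state chain and all names are assumptions): δ is a map from each state to a distribution over successors, and the edge set E consists of the pairs with positive transition probability.

```python
import random

# Illustrative Markov chain ((S, E), delta) per Definition 1:
# delta maps every state to a probability distribution over successor states.
delta = {
    "s": {"s": 0.5, "t": 0.5},
    "t": {"t": 1.0},
}

# There is an edge (s, t) in E iff the probability of going from s to t is positive.
E = {(s, t) for s, succ in delta.items() for t, p in succ.items() if p > 0}

def step(state):
    """Sample a successor of `state` according to delta[state]."""
    successors, probs = zip(*delta[state].items())
    return random.choices(successors, weights=probs)[0]

# Sanity check: each delta[state] is a probability distribution (sums to 1).
assert all(abs(sum(succ.values()) - 1.0) < 1e-9 for succ in delta.values())
```

Here "t" is absorbing, so every sampled successor of "t" is "t" itself; the graph structure (S, E) is fully recovered from δ.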
This paper presents a system designed for task allocation, staff management and decision support in a large enterprise, in which permanent staff and contractors work side by side under the overall supervision of a manager to handle tasks initiated by end-users. The process of allocating a new task to a worker is modeled under different situations, taking into account user requirements as well as th...
We present a method of statistical dialogue management using a directed intention dependency graph (IDG) in a partially observable Markov decision process (POMDP) framework. The transition probabilities in this model involve information derived from a hierarchical graph of intentions. In this way, we combine the deterministic graph structure of a conventional rule-based system with a statistica...
• A common choice is Metropolis–Hastings: W^MH_{ij} = 1/(2 d_max) if {i, j} ∈ E; 1 − d_i/(2 d_max) if i = j; 0 otherwise. • The rate of convergence is controlled by ρ(W − 11ᵀ/n). • min{ρ(W − 11ᵀ/n) : W symmetric} is a convex problem (SDP). • The optimal matrix yields a slow rate O(D²), already achieved by W^MH. • Lower bound: Ω(D), where D is the graph diameter. • To get fast rates, two approaches have been developed indepe...
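The Metropolis–Hastings weight matrix and the convergence-rate quantity ρ(W − 11ᵀ/n) from the bullets above can be sketched as follows (an illustrative reconstruction, not the authors' code; the small path-graph example is an assumption):

```python
import numpy as np

def metropolis_hastings_weights(edges, n):
    """Build W^MH for an undirected graph on n nodes:
    W[i, j] = 1/(2*d_max)       if {i, j} is an edge,
    W[i, i] = 1 - d_i/(2*d_max) on the diagonal,
    and 0 otherwise."""
    deg = np.zeros(n)
    for i, j in edges:
        deg[i] += 1
        deg[j] += 1
    d_max = deg.max()
    W = np.zeros((n, n))
    for i, j in edges:
        W[i, j] = W[j, i] = 1.0 / (2 * d_max)
    for i in range(n):
        W[i, i] = 1.0 - deg[i] / (2 * d_max)
    return W

def convergence_rate(W):
    """Spectral radius of W - (1/n) * 1 1^T, which controls the rate."""
    n = W.shape[0]
    return max(abs(np.linalg.eigvals(W - np.ones((n, n)) / n)))

# Assumed example: the path graph 0-1-2.
W = metropolis_hastings_weights([(0, 1), (1, 2)], 3)
assert np.allclose(W.sum(axis=1), 1.0)  # rows sum to 1 (doubly stochastic, since W is symmetric)
assert convergence_rate(W) < 1.0        # iterates x <- W x converge to the average
```

Since W is symmetric and its rows sum to one, it is doubly stochastic; subtracting 11ᵀ/n removes the eigenvalue 1 associated with the uniform vector, so the remaining spectral radius is exactly the quantity the SDP in the text minimizes.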
We propose a Markov Decision Process model that blends ideas from psychological research and economics to study decision-making in individuals with self-control problems. Borrowing the dual-process model from self-awareness research, we introduce present bias in inter-temporal preferences, a phenomenon widely explored in economics. We allow for both an exogenous and an endogenous, state-dependent, and explore, by means of nume...