Search results for: markov decision process graph theory

Number of results: 2,385,831

Journal: Global Journal of Environmental Science and Management
M. Tajbakhsh, H. Memarian, and Y. Shahrokhi (Department of Watershed Management, Faculty of Natural Resources and Environment, University of Birjand, Birjand, Iran)

Mashhad, according to the latest official statistics of the country, is the second most populous city after Tehran and the biggest metropolis in eastern Iran. Considering the rapid growth of the population over the last three decades, the city's developed area has expanded significantly. This expansion has impacted natural lands in the suburbs and even some parts, e.g. rang...

Journal: J. Applied Probability 2014
Søren Asmussen, Bo Friis Nielsen

Marcel Neuts died in his home in Tucson, Arizona, on 9 March 2014. He was born in Belgium on 21 February 1935, and received his school and undergraduate education in Belgium before moving to Stanford University for his Masters (1958–1959) and PhD (1959–1960), supervised by Samuel Karlin. His main academic appointments were at Purdue University (1962–1976), University of Delaware (1976–1985), and The ...

2015
Nicolas Drougard, Didier Dubois, Jean-Loup Farges, Florent Teichteil-Königsbuch

A new translation from Partially Observable MDP into Fully Observable MDP is described here. Unlike the classical translation, the resulting problem's state space is finite, allowing MDP solvers to handle this simplified version of the initial partially observable problem: the approach encodes agent beliefs with fuzzy measures over states, leading to an MDP whose state space is a finite set of...

2014
Ngo Anh Vien, Marc Toussaint

We consider learning and planning in relational MDPs when object existence is uncertain and new objects may appear or disappear depending on previous actions or properties of other objects. Optimal policies actively need to discover objects to achieve a goal; planning in such domains in general amounts to a POMDP problem, where the belief is about the existence and properties of potential not-y...

1998
Stéphane Gaubert

Exotic semirings such as the “(max,+) semiring” (R ∪ {−∞},max,+), or the “tropical semiring” (N ∪ {+∞},min,+), have been invented and reinvented many times since the late fifties, in relation with various fields: performance evaluation of manufacturing systems and discrete event system theory; graph theory (path algebra) and Markov decision processes, Hamilton-Jacobi theory; asymptotic analysis...
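The path-algebra connection this abstract mentions can be made concrete: replacing (+, ×) by (min, +) turns matrix multiplication into shortest-path computation. A minimal sketch, with an illustrative 3-node graph of my own (not taken from the paper):

```python
# Tropical (min,+) path algebra: repeated min-plus matrix products of a
# graph's weight matrix yield all-pairs shortest path distances.
INF = float("inf")

def min_plus(A, B):
    """Tropical matrix product: (A @ B)[i][j] = min_k A[i][k] + B[k][j]."""
    n = len(A)
    return [[min(A[i][k] + B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Weight matrix of a 3-node directed graph (INF = no edge, 0 on the diagonal).
W = [[0, 4, INF],
     [INF, 0, 1],
     [2, INF, 0]]

# The (n-1)-th tropical "power" of W is the shortest-path distance matrix.
D = W
for _ in range(len(W) - 1):
    D = min_plus(D, W)
# e.g. D[0][2] == 5: the shortest route 0 -> 1 -> 2 costs 4 + 1.
```

Swapping (min, +) for (max, +) in the same code gives longest paths, which is exactly the sense in which these semirings unify graph algorithms with dynamic programming over Markov decision processes.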

1998
M. Denker, R. D. Mauldin

We show that the set of conical points of a rational function of the Riemann sphere supports at most one conformal measure. We then study the problem of existence of such measures and their ergodic properties by constructing Markov partitions on increasing subsets of sets of conical points and by applying ideas of the thermodynamic formalism. 1 Introduction. In this paper we recall from [U2] the...

2008
Hugo Gimbert, Wieslaw Zielonka

We define and examine priority mean-payoff games — a natural extension of parity games. By adapting the notion of Blackwell optimality borrowed from the theory of Markov decision processes we show that priority mean-payoff games can be seen as a limit of special multi-discounted games.
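The discounted-to-mean-payoff limit that Blackwell optimality rests on can be illustrated numerically: as the discount factor approaches 1, the normalized discounted value converges to the average reward. A toy sketch with a two-state reward cycle of my own choosing (not the games studied in the paper):

```python
# As beta -> 1, (1 - beta) * discounted value -> mean payoff of the cycle.
def discounted_value(rewards, beta, steps=10_000):
    """Truncated discounted value of repeatedly traversing a reward cycle."""
    return sum(beta ** t * rewards[t % len(rewards)] for t in range(steps))

rewards = [4.0, 0.0]  # two-state cycle; its mean payoff is (4 + 0) / 2 = 2.0
normalized = [(1 - beta) * discounted_value(rewards, beta)
              for beta in (0.9, 0.99, 0.999)]
# normalized decreases toward 2.0 as beta approaches 1.
```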

Journal: JORS 2016
Rob Shone, Vincent A. Knight, Paul R. Harper, Janet E. Williams, John Minty

We consider a Markovian queueing system with N heterogeneous service facilities, each of which has multiple servers available, linear holding costs, a fixed value of service and a first-come-first-served queue discipline. Customers arriving in the system can be either rejected or sent to one of the N facilities. Two different types of control policies are considered, which we refer to as 'selfis...

2017
Abra Brisbin

2006–2007 Discrete Probability. Instructor: Abra Brisbin (CAM). The first few days introduced set theory and combinatorics. After that, the course turned to probability distributions, expected value, and independence. In the second half of the course, students worked on conditional probability, Bayes' Theorem, the Monte Carlo method, and Markov chains. Throughout the course, they tackled questions involvin...
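The Monte Carlo method named in the syllabus is estimation of a probability by repeated simulation. A minimal sketch with an example of my own (the specific exercise is illustrative, not from the course):

```python
# Monte Carlo estimation: approximate P(two fair coin flips both land heads),
# whose exact value is 1/4, by repeated random trials.
import random

random.seed(0)  # fixed seed so the estimate is reproducible

def estimate_two_heads(trials=100_000):
    """Fraction of trials in which both simulated coin flips come up heads."""
    hits = sum(1 for _ in range(trials)
               if random.random() < 0.5 and random.random() < 0.5)
    return hits / trials

p = estimate_two_heads()
# p is close to 0.25; the error shrinks roughly like 1/sqrt(trials).
```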

2012
Koen De Turck Sofian De Clercq Sabine Wittevrongel Herwig Bruneel Dieter Fiems

Recent years have seen a considerable increase of attention devoted to Poisson’s equation for Markov chains, which now has attained a central place in Markov chain theory, due to the extensive list of areas where Poisson’s equation pops up: perturbation analysis, Markov decision processes, limit theorems of Markov chains, etc. all find natural expression when viewed from the vantage point of Po...
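Poisson's equation for a Markov chain with transition matrix P, reward f, and long-run average reward g is (I - P)h = f - g·1, solved for the bias vector h up to an additive constant. A minimal sketch for a two-state chain, worked in closed form (the function name and parametrization are illustrative, not from the paper):

```python
# Poisson's equation (I - P) h = f - g*1 for the two-state chain
# P = [[1-a, a], [b, 1-b]] with reward vector f = (f0, f1).
def poisson_two_state(a, b, f0, f1):
    """Return (g, h) with normalization h[1] = 0."""
    # Stationary distribution: pi = (b, a) / (a + b)
    pi0, pi1 = b / (a + b), a / (a + b)
    g = pi0 * f0 + pi1 * f1          # long-run average reward
    # First row of (I - P) h = f - g reads: a*h0 - a*h1 = f0 - g
    h0 = (f0 - g) / a
    h1 = 0.0
    return g, (h0, h1)

g, (h0, h1) = poisson_two_state(0.1, 0.2, 1.0, 0.0)
# Both rows of the (rank-deficient) system are satisfied:
assert abs((h0 - (0.9 * h0 + 0.1 * h1)) - (1.0 - g)) < 1e-12
assert abs((h1 - (0.2 * h0 + 0.8 * h1)) - (0.0 - g)) < 1e-12
```

The rank deficiency is why one component of h must be pinned by a normalization; any solution plus a constant vector solves the equation as well.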
