Search results for: markov decision process

Number of results: 1,627,273

1999
Shalabh Bhatnagar, Emmanuel Fernández-Gaucherand, Michael C. Fu, Ying He, Steven I. Marcus

We present a finite-horizon Markov decision process (MDP) model for providing decision support in semiconductor manufacturing on such critical operational issues as when to add additional capacity and when to convert from one type of production to another.
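A finite-horizon MDP of this kind is typically solved by backward induction over the planning horizon. The sketch below is illustrative only: the states, actions, rewards, and transitions are hypothetical placeholders, not the paper's manufacturing model.

```python
def backward_induction(states, actions, T, reward, trans):
    """Solve a finite-horizon MDP by dynamic programming.

    reward(s, a) -> float
    trans(s, a)  -> dict mapping next state -> probability
    Returns the optimal value function at t=0 and a per-stage policy.
    """
    V = {s: 0.0 for s in states}          # terminal values V_T = 0
    policy = []
    for t in range(T - 1, -1, -1):        # stages T-1, ..., 0
        V_new, pi_t = {}, {}
        for s in states:
            best_a, best_q = None, float("-inf")
            for a in actions:
                q = reward(s, a) + sum(p * V[s2] for s2, p in trans(s, a).items())
                if q > best_q:
                    best_a, best_q = a, q
            V_new[s], pi_t[s] = best_q, best_a
        V, policy = V_new, [pi_t] + policy
    return V, policy
```

For example, with a toy two-state capacity model ("low"/"high" throughput, a costly "expand" action that likely moves the system to "high"), the routine returns both the horizon-0 values and a time-dependent policy.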

Journal: Annals OR, 2011
Lars Relund Nielsen, Erik Jørgensen, Søren Højsgaard

In agriculture, Markov decision processes (MDPs) with finite state and action spaces are often used to model sequential decision making over time. For instance, states in the process represent possible levels of traits of the animal, and transition probabilities are based on biological models estimated from data collected from the animal or herd. State space models (SSMs) are a general tool for mo...

2009
Chiyoun Park, Namhoon Kim, Jeongmi Cho

Partially observable Markov decision processes (POMDPs) have generally been used to model agent decision processes such as dialogue management. In this paper, the possibility of applying a POMDP to a voice activity detector (VAD) has been explored. The proposed system first formulates hypotheses about the current noise environment and speech activity. Then, it decides and observes the features that are e...
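The core of any POMDP-based system such as this VAD is the belief update: after acting and observing, the agent re-weights its distribution over hidden states by the observation likelihood. A minimal sketch, with hypothetical transition and observation models (not the paper's actual ones):

```python
def belief_update(b, a, o, T, O, states):
    """Bayes-filter belief update for a POMDP.

    b        : dict state -> probability (current belief)
    T(s, a)  : dict next state -> probability
    O(s2, a) : dict observation -> probability
    """
    b_new = {}
    for s2 in states:
        # Predict: propagate the belief through the dynamics.
        pred = sum(b[s] * T(s, a).get(s2, 0.0) for s in states)
        # Correct: weight by how likely the observation is in s2.
        b_new[s2] = O(s2, a).get(o, 0.0) * pred
    z = sum(b_new.values())               # normalizer (assumed > 0)
    return {s2: p / z for s2, p in b_new.items()}
```

In a VAD setting, the hidden states might be "speech" vs. "noise" and the observations acoustic features; a loud-frame observation shifts belief mass toward "speech".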

Journal: Decision Analysis, 2010
Zeynep Erkin, Matthew D. Bailey, Lisa M. Maillart, Andrew J. Schaefer, Mark S. Roberts

Eliciting patient preferences over various health states is an important problem in health care decision modeling. Direct approaches, which involve asking patients various abstract questions, have significant drawbacks. We propose a new approach that infers patient preferences based on observed decisions via inverse optimization techniques. We illustrate our methods on the timing of a living-donor live...

2007
Cristian Danescu, Herbert Jaeger

The process of understanding the meaning of a written passage inherently involves dynamic manipulation and composition of ideas. Starting from this observation, this thesis proposes an artificial system for text understanding in which the semantic space containing the possible meanings of the analyzed text is selectively explored by a partially observable Markov decision process trained to effec...

Journal: JIPS, 2015
G. Ananthachari Preethi, C. Chandrasekar

A mobile terminal can expect a number of handoffs within its call duration. During a mobile call, when a mobile node moves from one cell to another, it should connect to another access point within its range. If its own network lacks coverage, it must change over to another base station. When moving on to another network, quality of service parameters nee...

2013
Tony Tsang

Long Term Evolution (LTE) has been proposed as a promising radio access technology to bring higher peak data rates and better spectral efficiency. However, scheduling and resource allocation in LTE still face huge design challenges due to their complexity. In this paper, the optimization problem of scheduling and resource allocation for separate streams is first formulated. By separating stream...

2013
S. Kile, F. S. Bakpo

The growth of e-commerce, especially in the business-to-customer and business-to-business segments, in recent years is an indication of how computer technology has improved activities in human lives. It is widely projected that e-commerce as a sector of business transactions is poised for spectacular growth. An immediate shortcoming of e-commerce is the risk involved in transa...

2014
Fernando L. Fussuma, Karina Valdivia Delgado, Leliane Nunes de Barros

A bounded-parameter Markov decision process (BMDP) can be used to model sequential decision problems where the transition probabilities are not completely known and are given by intervals. One of the criteria used to solve this kind of problem is maximin, i.e., the best action in the worst scenario. The algorithms to solve BMDPs that use this approach include interval value iteration and an...
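The maximin criterion described above can be computed with a robust Bellman backup: within the given intervals, an adversarial "nature" picks the transition distribution that puts as much mass as possible on low-value successors. A minimal sketch of interval value iteration under that criterion, with hypothetical names and a toy problem (not the paper's code):

```python
def worst_case_expectation(V, intervals):
    """intervals: dict next-state -> (lo, hi) probability bounds.
    Greedily shifts the free probability mass onto the lowest-value
    successors, which is the worst case for the agent."""
    p = {s2: lo for s2, (lo, _) in intervals.items()}
    slack = 1.0 - sum(p.values())         # mass left to distribute
    for s2 in sorted(intervals, key=lambda x: V[x]):   # lowest V first
        lo, hi = intervals[s2]
        add = min(hi - lo, slack)
        p[s2] += add
        slack -= add
    return sum(p[s2] * V[s2] for s2 in intervals)

def interval_value_iteration(states, actions, reward, bounds,
                             gamma=0.9, eps=1e-6):
    """Maximin (pessimistic) value iteration for a BMDP.
    bounds(s, a) -> dict next-state -> (lo, hi)."""
    V = {s: 0.0 for s in states}
    while True:
        V_new = {
            s: max(reward(s, a)
                   + gamma * worst_case_expectation(V, bounds(s, a))
                   for a in actions)
            for s in states
        }
        if max(abs(V_new[s] - V[s]) for s in states) < eps:
            return V_new
        V = V_new
```

The inner sorting trick is the standard order-based solution of the inner minimization over an interval-constrained simplex.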

2004
Patrick Riley, Manuela M. Veloso

An advising agent, a coach, provides advice to other agents about how to act. In this paper, we contribute an advice generation method using observations of agents acting in an environment. Given an abstract state definition and partially specified abstract actions, the algorithm extracts a Markov Chain, infers a Markov Decision Process, and then solves the MDP (given an arbitrary reward signal)...
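Extracting a Markov chain or MDP from observed behavior, as described above, typically starts from empirical transition counts. A minimal sketch, assuming trajectories arrive as (state, action, next_state) triples (a hypothetical data format, not the paper's):

```python
from collections import defaultdict

def estimate_transitions(trajectories):
    """Estimate P(s' | s, a) by maximum likelihood from observed triples.

    trajectories: iterable of lists of (state, action, next_state) tuples.
    Returns a dict mapping (state, action) -> {next_state: probability}.
    """
    counts = defaultdict(lambda: defaultdict(int))
    for traj in trajectories:
        for s, a, s2 in traj:
            counts[(s, a)][s2] += 1
    P = {}
    for (s, a), nxt in counts.items():
        total = sum(nxt.values())
        P[(s, a)] = {s2: n / total for s2, n in nxt.items()}
    return P
```

The resulting transition model, combined with any chosen reward signal, can then be handed to a standard MDP solver such as value iteration.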

Chart: number of search results per year
