Search results for: partially observable markov decision process

Number of results: 1,776,231

2009
Chiyoun Park, Namhoon Kim, Jeongmi Cho

The partially observable Markov decision process (POMDP) has generally been used to model agent decision processes such as dialogue management. In this paper, the possibility of applying a POMDP to a voice activity detector (VAD) has been explored. The proposed system first formulates hypotheses about the current noise environment and speech activity. Then, it decides and observes the features that are e...
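
To make the kind of belief tracking described above concrete, here is a minimal Python sketch of the standard POMDP belief update; the states, action, and probabilities are illustrative placeholders and are not taken from the paper.

import numpy as np

# Hypothetical hidden states combining noise environment and speech activity.
states = ["quiet_silence", "quiet_speech", "noisy_silence", "noisy_speech"]

# T[a]: state-transition matrix under action a (rows: s, columns: s').
# Z[a]: probability of each observation after landing in s' under action a.
T = {"listen": np.full((4, 4), 0.25)}          # placeholder dynamics
Z = {"listen": np.array([[0.7, 0.3],           # placeholder observation model
                         [0.2, 0.8],
                         [0.6, 0.4],
                         [0.3, 0.7]])}

def belief_update(b, a, o):
    """b'(s') is proportional to Z[a][s', o] * sum_s T[a][s, s'] * b(s)."""
    b_pred = b @ T[a]             # predict: push the belief through the dynamics
    b_new = b_pred * Z[a][:, o]   # correct: weight by the observation likelihood
    return b_new / b_new.sum()    # normalize to a probability distribution

b = np.full(4, 0.25)              # start from a uniform belief over the states
b = belief_update(b, "listen", o=1)
print(dict(zip(states, np.round(b, 3))))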

2007
Francisco S. Melo, Isabel Ribeiro

In this paper we discuss how communication can be used advantageously for cooperative navigation in sparse environments. Specifically, we analyze the tradeoff between the cost of communication and the efficient completion of the navigation task. We make use of a partially observable Markov decision process (POMDP) to model the navigation task, since this model allows us to explicitly consider...

Journal: Frontiers in Computational Neuroscience, 2008
Eric A. Zilli, Michael E. Hasselmo

Behavioral tasks are often used to study the different memory systems present in humans and animals. Such tasks are usually designed to isolate and measure some aspect of a single memory system. However, it is not necessarily clear that any given task actually does isolate a system or that the strategy used by a subject in the experiment is the one desired by the experimenter. We have previousl...

2011
Monica Dinculescu, Christopher Hundt, Prakash Panangaden, Joelle Pineau, Doina Precup

In this paper we consider the problem of representing and reasoning about systems, especially probabilistic systems, with hidden state. We consider transition systems where the state is not completely visible to an outside observer. Instead, there are observables that partly identify the state. We show that one can interchange the notions of state and observation and obtain what we call a dual ...

Journal: CoRR, 2012
Mark P. Woodward, Robert J. Wood

As general purpose robots become more capable, pre-programming of all tasks at the factory will become less practical. We would like for non-technical human owners to be able to communicate, through interaction with their robot, the details of a new task; we call this interaction “task communication”. During task communication the robot must infer the details of the task from unstructured human...

2001
Rong Zhou, Eric A. Hansen

Although a partially observable Markov decision process (POMDP) provides an appealing model for problems of planning under uncertainty, exact algorithms for POMDPs are intractable. This motivates work on approximation algorithms, and grid-based approximation is a widely-used approach. We describe a novel approach to grid-based approximation that uses a variable-resolution regular grid, and show...
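
As a rough illustration of the general idea behind grid-based approximation (not the paper's variable-resolution scheme), this toy Python snippet stores the value function of a two-state POMDP at a handful of belief points and interpolates between them for any other belief.

import numpy as np

# The belief over a two-state POMDP is summarized by b1 = P(state 1).
# The grid points and stored values below are arbitrary placeholders.
grid_beliefs = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
grid_values = np.array([10.0, 6.0, 4.0, 7.0, 12.0])

def approx_value(b1):
    """Estimate V(b) by linear interpolation between neighboring grid points."""
    return float(np.interp(b1, grid_beliefs, grid_values))

print(approx_value(0.6))   # interpolates the values stored at b1 = 0.5 and 0.75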

2003
Anthony R. Cassandra

An increasing number of researchers in many areas are becoming interested in the application of the partially observable Markov decision process (pomdp) model to problems with hidden state. This model can account for both state transition and observation uncertainty. The majority of recent research interest in the pomdp model has been in the artificial intelligence community and as such, has be...
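
For readers new to the model, the textbook formulation referenced here is the tuple

\langle S, A, T, Z, R, \gamma \rangle,
\quad T(s' \mid s, a) = \Pr(s_{t+1} = s' \mid s_t = s, a_t = a),
\quad Z(o \mid s', a) = \Pr(o_{t+1} = o \mid s_{t+1} = s', a_t = a),

where T captures the state-transition uncertainty and Z the observation uncertainty mentioned in the abstract, R(s, a) is the reward function, and \gamma the discount factor.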

2010
Augusto Cesar Espíndola Baffa, Angelo E. M. Ciarlini

The stock market can be considered a nondeterministic and partially observable domain, because investors never know all the information that affects prices and the result of an investment is always uncertain. Technical Analysis methods demand only data that are easily available, i.e. the series of prices and trade volumes, and are therefore very useful for predicting current price trends. Analysts have howe...

2013
Jilles Steeve Dibangoye, Christopher Amato, Arnaud Doniec, François Charpillet

There has been substantial progress on algorithms for single-agent sequential decision making using partially observable Markov decision processes (POMDPs). A number of efficient algorithms for solving POMDPs share two desirable properties: error-bounds and fast convergence rates. Despite significant efforts, no algorithms for solving decentralized POMDPs benefit from these properties, leading ...

2004
Eric A. Hansen, Daniel S. Bernstein, Shlomo Zilberstein

We develop an exact dynamic programming algorithm for partially observable stochastic games (POSGs). The algorithm is a synthesis of dynamic programming for partially observable Markov decision processes (POMDPs) and iterative elimination of dominated strategies in normal form games. We prove that it iteratively eliminates very weakly dominated strategies without first forming the normal form r...
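
For reference, the single-agent dynamic-programming backup that this POSG algorithm builds on is the standard POMDP value-iteration step over beliefs (textbook material, not specific to this paper):

V_{t+1}(b) = \max_{a \in A} \Big[ \sum_{s} b(s)\, R(s, a) + \gamma \sum_{o} \Pr(o \mid b, a)\, V_t\big(b^{a,o}\big) \Big],
\qquad
b^{a,o}(s') \propto Z(o \mid s', a) \sum_{s} T(s' \mid s, a)\, b(s).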

[Chart: number of search results per year]