Search results for: partially observable markov decision process
Number of results: 1,776,231
This paper presents the results of a comparative user evaluation of various approaches to dialogue management. The major contribution is a comparison of traditional systems against a system that uses a Bayesian Update of Dialogue State approach. This approach is based on the Partially Observable Markov Decision Process (POMDP), which has previously been shown to give improved robustness in simu...
Spoken dialogue managers have benefited from using stochastic planners such as Markov Decision Processes (MDPs). However, so far, MDPs do not handle noisy and ambiguous speech utterances well. We use a Partially Observable Markov Decision Process (POMDP)-style approach to generate dialogue strategies by inverting the notion of dialogue state; the state represents the user’s intentions, rather t...
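The belief-state idea in the two dialogue snippets above can be made concrete with a short sketch: the hidden state is the user's intention, and each noisy recognition result updates a probability distribution over intentions. The function, state names, and probabilities below are illustrative assumptions, not taken from either paper.

# Bayesian belief update over hidden user intentions, the core operation of
# POMDP-style dialogue management.  All names here are illustrative.

def belief_update(belief, action, observation, trans, obs):
    """belief: {state: prob}; trans[(s, action)]: {s2: prob};
    obs[(s2, action)]: {o: prob}.  Returns the normalized posterior belief."""
    posterior = {}
    successors = {s2 for (s, a), dist in trans.items() if a == action for s2 in dist}
    for s2 in successors:
        prior = sum(belief.get(s, 0.0) * trans[(s, action)].get(s2, 0.0)
                    for s in belief if (s, action) in trans)
        posterior[s2] = obs.get((s2, action), {}).get(observation, 0.0) * prior
    z = sum(posterior.values())
    return {s2: p / z for s2, p in posterior.items()} if z > 0 else posterior

# Toy dialogue: the user wants either a flight or a hotel; the recognizer is noisy.
states = ["want_flight", "want_hotel"]
trans = {(s, "ask_goal"): {s: 1.0} for s in states}          # intention persists
obs = {("want_flight", "ask_goal"): {"heard_flight": 0.8, "heard_hotel": 0.2},
       ("want_hotel", "ask_goal"): {"heard_flight": 0.3, "heard_hotel": 0.7}}
b0 = {"want_flight": 0.5, "want_hotel": 0.5}
print(belief_update(b0, "ask_goal", "heard_flight", trans, obs))
# -> roughly {'want_flight': 0.73, 'want_hotel': 0.27}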
Good pedagogical actions are key components of all learning-teaching schemes, and automating them is an important objective of Intelligent Tutoring Systems. We propose applying a Partially Observable Markov Decision Process (POMDP) to obtain automatic and optimal recommended pedagogical action patterns for the benefit of human students, in the context of an Intelligent Tutoring System. To achieve that goal,...
The U.S. pharmaceutical industry spent upwards of $18 billion on marketing drugs in 2005; detailing and drug sampling activities accounted for the bulk of this spending. To stay competitive, pharmaceutical managers need to maximize the return on these marketing investments by determining which physicians to target as well as when and how to target them. In this paper, we present a two-stage appro...
This paper examines approaches to representing uncertainty in reputation systems for electronic markets, with the aim of constructing a decision theoretic framework for collecting information about selling agents and making purchase decisions in the context of a social reputation system. A selection of approaches to representing reputation using Dempster-Shafer Theory and Bayesian probability a...
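One common Bayesian way of representing a seller's reputation, of the kind the snippet compares, tracks positive and negative outcomes as a Beta distribution. This is a generic sketch of that idea, not the specific framework of the cited paper; the class and method names are assumptions.

# Bayesian reputation as a Beta(alpha, beta) posterior over transaction quality.
class BetaReputation:
    def __init__(self, alpha=1.0, beta=1.0):
        self.alpha, self.beta = alpha, beta   # uniform prior over [0, 1]

    def record(self, positive: bool):
        # Each observed transaction outcome updates the pseudo-counts.
        if positive:
            self.alpha += 1.0
        else:
            self.beta += 1.0

    def expected_quality(self) -> float:
        # Posterior mean: expected probability of a good transaction.
        return self.alpha / (self.alpha + self.beta)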
Noisy sensing, imperfect control, and environment changes are defining characteristics of many real-world robot tasks. The partially observable Markov decision process (POMDP) provides a principled mathematical framework for modeling and solving control tasks under uncertainty. Over the last decade, it has seen successful applications spanning localization and navigation, search and tracking, autonomous d...
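For reference, the framework the snippet names is standardly written (textbook form, not quoted from the survey) as a tuple with a Bayesian belief update and an optimal value function defined over beliefs:

\[
\langle S, A, O, T, Z, R, \gamma \rangle, \qquad
b'(s') \;\propto\; Z(o \mid s', a) \sum_{s \in S} T(s' \mid s, a)\, b(s),
\]
\[
V^{*}(b) \;=\; \max_{a \in A} \Big[ \sum_{s \in S} b(s)\, R(s,a) \;+\; \gamma \sum_{o \in O} P(o \mid b, a)\, V^{*}\!\big(\tau(b,a,o)\big) \Big],
\]
where \(\tau(b,a,o)\) denotes the updated belief after taking action \(a\) and observing \(o\).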
Changes in species population size, habitat quality, presence or absence of threats, and climate are attributes that make ecological systems dynamic (Brown et al., 2001). In conservation and applied ecology, making an informed decision to manage a system would ideally be based on perfect knowledge of the state of the system, as one management action rarely fits all situations. State-transition mo...
In this paper, we describe the general approach of trying to solve Partially Observable Markov Decision Processes with approximate value iteration. Methods based on this approach have shown promise for tackling larger problems where exact methods are doomed, but we explain how most of them suffer from the fundamental problem of ignoring information about the uncertainty of their estimates. We t...
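As a rough illustration of the approximate value-iteration family this snippet refers to, here is one point-based backup over a fixed set of belief points. The function name, matrix layout, and initialization are assumptions for illustration, not the authors' algorithm.

import numpy as np

# Conventions: T[a][s, s2] = P(s2 | s, a); O[a][s2, o] = P(o | s2, a);
# R[a][s] = immediate reward; the value function is a set of alpha-vectors.

def point_based_backup(beliefs, alphas, T, O, R, gamma):
    """One backup of the value function at a fixed set of belief points.
    beliefs, alphas: lists of length-|S| numpy arrays.  Returns new alpha-vectors."""
    new_alphas = []
    for b in beliefs:
        best_val, best_vec = -np.inf, None
        for a in range(len(T)):
            vec = R[a].astype(float).copy()
            for o in range(O[a].shape[1]):
                # For observation o, keep the old alpha-vector that is best at
                # the belief reachable after taking a and seeing o.
                cands = [T[a] @ (O[a][:, o] * alpha) for alpha in alphas]
                vec += gamma * max(cands, key=lambda g: float(b @ g))
            if b @ vec > best_val:
                best_val, best_vec = float(b @ vec), vec
        new_alphas.append(best_vec)
    return new_alphas

# alphas can be seeded conservatively, e.g. one vector of min-reward / (1 - gamma).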
We study the problem of finding an optimal policy for a Partially Observable Markov Decision Process (POMDP) when the model is not perfectly known and may change over time. We present the algorithm MEDUSA+, which incrementally improves a POMDP model using selected queries, while still optimizing the reward. Empirical results show the response of the algorithm to changes in the parameters of a m...
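The model-learning idea behind query-based approaches of this kind can be sketched as Dirichlet pseudo-counts over the uncertain POMDP parameters, strengthened whenever a query reveals what actually happened. The class and method names below are illustrative, not the authors' code.

import random
from collections import defaultdict

class DirichletModel:
    def __init__(self, prior=1.0):
        # counts[(s, a)][s2] are Dirichlet pseudo-counts for P(s2 | s, a)
        self.counts = defaultdict(lambda: defaultdict(lambda: prior))

    def update_from_query(self, s, a, s2, weight=1.0):
        """A query revealed that taking a in s led to s2; strengthen that
        transition.  A smaller weight lets noisy or partial answers count less."""
        self.counts[(s, a)][s2] += weight

    def sample_transition_probs(self, s, a, states):
        """Draw one plausible transition distribution from the Dirichlet posterior;
        repeated draws give alternative models whose values can be compared."""
        draws = [random.gammavariate(self.counts[(s, a)][s2], 1.0) for s2 in states]
        z = sum(draws)
        return {s2: d / z for s2, d in zip(states, draws)}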