Search results for: action planning
Number of results: 795,866. Filter results by year:
This section formalizes the encoding of an agent’s environment and goal into a Markov decision problem (MDP), and describes how this MDP can be solved efficiently by algorithms for rational planning. Let π be an agent’s plan, referred to here (and in the MDP literature) as a policy, such that P_π(a_t | s_t, g, w) is a probability distribution over actions a_t at time t, given the agent’s state s_t at ...
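As a rough illustration of the rational-planning formulation above, the following sketch solves a tiny MDP by value iteration and extracts a policy that puts all probability mass on the best action in each state. The states, transition model, and rewards are invented for illustration and are not from the cited work.

```python
# Minimal value-iteration sketch for a toy MDP; all names and numbers
# here are invented for illustration.

GAMMA = 0.9  # discount factor

# Transition model: T[state][action] -> list of (next_state, probability)
T = {
    "s0": {"left": [("s0", 1.0)], "right": [("s1", 0.8), ("s0", 0.2)]},
    "s1": {"left": [("s0", 1.0)], "right": [("goal", 1.0)]},
    "goal": {},  # terminal state: no actions
}
R = {"s0": 0.0, "s1": 0.0, "goal": 1.0}  # reward on entering a state

def value_iteration(tol=1e-9):
    V = {s: 0.0 for s in T}
    while True:
        delta = 0.0
        for s, actions in T.items():
            if not actions:  # terminal
                continue
            best = max(
                sum(p * (R[s2] + GAMMA * V[s2]) for s2, p in outcomes)
                for outcomes in actions.values()
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            return V

def greedy_policy(V):
    # Deterministic policy: P_pi(a | s) puts all mass on the best action.
    pi = {}
    for s, actions in T.items():
        if actions:
            pi[s] = max(actions, key=lambda a: sum(
                p * (R[s2] + GAMMA * V[s2]) for s2, p in actions[a]))
    return pi

V = value_iteration()
print(greedy_policy(V))  # {'s0': 'right', 's1': 'right'}
```

A stochastic policy P_π(a_t | s_t, g, w), as in the snippet, would instead spread probability over actions (e.g. a softmax over the action values); the greedy extraction above is the deterministic special case.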
An intelligent agent must integrate three central components of behavior: planning, action, and perception. Different researchers have explored alternative strategies for interleaving these processes, typically assuming that their approach is desirable in all domains. In contrast, we believe that different domains require different interleaving schemes. In this paper we identify three continua a...
Strategic planning is a systematic method that focuses on the interconnections of preferred actions, analyzing technical indicators including weaknesses and strengths (abilities and resources) as well as opportunities and threats. In addition, it is a systematic method for decision making. These differences from other planning methods arise from its intelligent, integrated analy...
We introduce a simple variation of the additive heuristic used in the HSP planner that combines the benefits of the original additive heuristic, namely its mathematical formulation and its ability to handle non-uniform action costs, with the benefits of the relaxed planning graph heuristic used in FF, namely its compatibility with the highly effective enforced hill climbing search along with it...
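To make the additive heuristic mentioned in this snippet concrete, the sketch below computes h_add over a delete-relaxed STRIPS-style problem: the cost of a proposition is the cheapest action achieving it plus the summed costs of that action's preconditions. The toy propositions and actions are invented for illustration; this is the textbook h_add, not the HSP/FF variation the snippet introduces.

```python
# h_add sketch on an invented toy domain. Each action is
# (name, preconditions, add effects, cost); delete effects are ignored
# under the relaxation.
INF = float("inf")

ACTIONS = [
    ("pick", {"at_obj"}, {"holding"}, 1),
    ("move", set(), {"at_obj"}, 2),
    ("drop", {"holding"}, {"delivered"}, 1),
]

def h_add(state, goal):
    """Additive heuristic: proposition costs are propagated until a
    fixpoint; an action's precondition costs are SUMMED (not maxed),
    which is what makes h_add additive and non-admissible."""
    cost = {p: 0 for p in state}
    changed = True
    while changed:
        changed = False
        for name, pre, add, c in ACTIONS:
            pre_cost = sum(cost.get(p, INF) for p in pre)
            if pre_cost == INF:
                continue  # some precondition is still unreachable
            for p in add:
                if c + pre_cost < cost.get(p, INF):
                    cost[p] = c + pre_cost
                    changed = True
    return sum(cost.get(p, INF) for p in goal)

print(h_add(set(), {"delivered"}))  # move(2) + pick(1) + drop(1) = 4
```

Note how non-uniform action costs (here, `move` costs 2) flow directly through the propagation, which is the property the snippet credits to the additive heuristic's mathematical formulation.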
We describe a representation in a high-level transition system for policies that express a reactive behavior for the agent. We consider a target decision component that figures out what to do next and an (online) planning capability to compute the plans needed to reach these targets. Our representation allows one to analyze the flow of executing the given reactive policy, and to determine wheth...
Domain theories are used in a wide variety of fields of computer science as a means of representing properties of the domain under consideration. These fields include artificial intelligence, software engineering, VLSI design, cryptography, and distributed computing. In each case, the advantages of using theories include the precision of task specification and the ability to verify results. A g...
More recently, planning based on answer set programming has been proposed as an approach towards realizing declarative planning systems. In this paper, we present the language K, which extends the declarative planning language K by action costs and provides the notion of admissible and optimal plans, which are plans whose overall action costs are within a given limit resp. minimum over all pla...
A theory of action suitable for reasoning about events in multiagent or dynamically changing environments is presented. A device called a process model is used to represent the observable behavior of an agent in performing an action. This model is more general than previous models of action, allowing sequencing, selection, nondeterminism, iteration, and parallelism to be represented. It is sh...
Robotic manipulation is important for real, physical world applications. General-purpose manipulation with a robot (e.g., delivering dishes, opening doors with a key) is demanding. It is hard because (1) objects are constrained in position and orientation, (2) many non-spatial constraints interact (or interfere) with each other, and (3) robots may have many degrees of freedom (DOF). In this...
Learning consists in the acquisition of knowledge. In Reinforcement Learning, this is knowledge about how to reach a maximum of environmental reward. We are interested in the acquisition of knowledge that consists in having expectations of behavioral consequences. Behavioral consequences depend on the current situation, so it is necessary to learn in which situation S which behavior/reaction R l...
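The situation–reaction expectations this snippet describes can be sketched as a simple count-based model: the agent records which consequence C followed reaction R in situation S, and predicts the most frequently observed one. The class, method names, and the door example are invented for illustration, not taken from the snippet's paper.

```python
from collections import defaultdict

class ExpectationModel:
    """Learns, from experience, which consequence C to expect when
    reaction R is performed in situation S (count-based sketch)."""

    def __init__(self):
        # (situation, reaction) -> {consequence: observation count}
        self.counts = defaultdict(lambda: defaultdict(int))

    def observe(self, situation, reaction, consequence):
        self.counts[(situation, reaction)][consequence] += 1

    def expect(self, situation, reaction):
        """Return the most frequently observed consequence,
        or None if this (S, R) pair has never been experienced."""
        seen = self.counts[(situation, reaction)]
        if not seen:
            return None
        return max(seen, key=seen.get)

m = ExpectationModel()
m.observe("door_closed", "push", "door_open")
m.observe("door_closed", "push", "door_open")
m.observe("door_closed", "push", "stuck")
print(m.expect("door_closed", "push"))  # door_open
```

Normalizing the counts per (S, R) pair would turn this into an empirical estimate of P(C | S, R), which is the form such expectation models usually take in a reinforcement-learning setting.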