Optimal Direct Policy Search
Authors
Abstract
Hutter’s optimal but incomputable universal AIXI agent models the environment as an initially unknown program computing a probability distribution. Once that program is found through (incomputable) exhaustive search, classical planning yields an optimal policy. Here we reverse the roles of agent and environment by assuming a computable optimal policy realizable as a program mapping histories to actions. This assumption is powerful for two reasons: (1) the environment need not be probabilistically computable, which allows for dealing with truly stochastic environments; (2) all candidate policies are computable. In stochastic settings, our novel method Optimal Direct Policy Search (ODPS) identifies the best policy by direct universal search in the space of all possible computable policies. Unlike AIXI, it is computable, model-free, and does not require planning. We show that ODPS is optimal in the sense that its reward converges to the reward of the optimal policy in a very broad class of partially observable stochastic environments.
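The search scheme the abstract describes can be illustrated with a deliberately simplified sketch (the function names, the phase schedule, and the toy bandit environment below are our illustrative assumptions, not the paper's exact algorithm): in phase t, the first t candidate policy programs are each run for a growing number of episodes, and the empirically best one is adopted.

```python
def odps_sketch(policy_space, run_episode, phases=6):
    """Toy ODPS-style loop: in phase t, evaluate the first t candidate
    policies for n_t episodes each and adopt the empirically best one."""
    best = None
    for t in range(1, phases + 1):
        n_t = 2 ** t                                   # growing episode budget
        candidates = policy_space[:t]
        means = [sum(run_episode(pi) for _ in range(n_t)) / n_t
                 for pi in candidates]
        best = candidates[means.index(max(means))]
    return best

# Demo environment: a two-armed bandit; policies map histories to actions.
# Rewards are made deterministic here for reproducibility; ODPS itself
# targets stochastic environments.
ARM_MEANS = [0.2, 0.8]
def run_episode(policy):
    return ARM_MEANS[policy(())]                       # () = empty history

policies = [lambda h: 0, lambda h: 1]
best = odps_sketch(policies, run_episode)
```

In later phases the growing per-policy budget makes the empirical means reliable, so the loop settles on the higher-reward policy.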
Similar papers
Model-based Direct Policy Search (Extended Abstract)
Scaling Reinforcement Learning (RL) to real-world problems with continuous state and action spaces remains a challenge, partly because the optimal value function can become quite complex in continuous domains. In this paper, we propose not to learn the optimal value function at all, but to use direct policy search methods in combination with model-based RL instead.
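The combination this snippet proposes, learning a model and then searching policies directly against it, might be sketched as follows (the linear 1-D system, quadratic reward, and one-parameter grid search are our illustrative assumptions, not the paper's method):

```python
import numpy as np

rng = np.random.default_rng(0)

# Collect transitions from the true system x' = x + a (unknown to the agent)
# and fit a linear dynamics model by least squares.
X = rng.uniform(-1, 1, size=(100, 2))            # columns: state, action
Y = X[:, 0] + X[:, 1]                            # observed next states
theta, *_ = np.linalg.lstsq(X, Y, rcond=None)    # learned model coefficients

def model_return(k, x0=1.0, horizon=10):
    """Return of the feedback policy a = -k * x simulated in the learned
    model, with reward -x**2 (drive the state to zero)."""
    x, ret = x0, 0.0
    for _ in range(horizon):
        a = -k * x
        x = theta[0] * x + theta[1] * a          # learned dynamics
        ret += -x ** 2
    return ret

# Direct policy search: evaluate candidate gains only in the model.
gains = np.linspace(0.0, 2.0, 21)
best_k = max(gains, key=model_return)
```

No value function is ever represented; the policy parameter is chosen purely by comparing simulated returns under the learned model.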
Towards fast and adaptive optimal control policies for robots: A direct policy search approach
Optimal control methods are generally too expensive to be applied on-line and in real-time to the control of robots. An alternative method consists in tuning a parametrized reactive controller so that it converges to optimal behavior. In this paper we present such a method based on the “direct Policy Search” paradigm to get a cost-efficient control policy for a simulated two degrees-of-freedom ...
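Tuning a parametrized reactive controller toward good behavior can be illustrated by stochastic hill climbing on simulated rollout cost (the PD controller, the double-integrator plant, and the tuning loop are our toy stand-ins, not the paper's setup):

```python
import random

def rollout_cost(kp, kd, x=1.0, v=0.0, dt=0.1, steps=50):
    """Quadratic cost of the reactive controller u = -kp*x - kd*v on a
    simulated double integrator (x'' = u)."""
    cost = 0.0
    for _ in range(steps):
        u = -kp * x - kd * v
        v += u * dt
        x += v * dt
        cost += x * x + 0.1 * u * u
    return cost

random.seed(0)
params = [0.5, 0.5]                    # initial (kp, kd) gains
best_cost = rollout_cost(*params)
for _ in range(200):
    trial = [p + random.gauss(0, 0.1) for p in params]
    c = rollout_cost(*trial)
    if c < best_cost:                  # accept only improvements
        params, best_cost = trial, c
```

Because only improving perturbations are accepted, the rollout cost is non-increasing; each evaluation is just a cheap simulation, avoiding expensive on-line optimal control.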
On the Performance Bounds of some Policy Search Dynamic Programming Algorithms
We consider the infinite-horizon discounted optimal control problem formalized by Markov Decision Processes. We focus on Policy Search algorithms that compute an approximately optimal policy by following the standard Policy Iteration (PI) scheme via an ε-approximate greedy operator (Kakade and Langford, 2002; Lazaric et al., 2010). We describe existing and a few new performance bounds for Direc...
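The PI scheme such bounds perturb can be sketched exactly on a tiny MDP (the 2-state example below is ours; the cited analyses replace the exact greedy step with an approximate one):

```python
import numpy as np

# 2-state, 2-action MDP: action a moves deterministically to state a,
# and entering state 1 pays reward 1.
P = np.array([[[1.0, 0.0], [0.0, 1.0]],     # P[s, a] = next-state distribution
              [[1.0, 0.0], [0.0, 1.0]]])
R = np.array([[0.0, 1.0],                   # R[s, a]
              [0.0, 1.0]])
gamma = 0.9

policy = np.zeros(2, dtype=int)             # start with "always action 0"
for _ in range(10):
    # Policy evaluation: solve (I - gamma * P_pi) V = R_pi exactly.
    P_pi = np.array([P[s, policy[s]] for s in range(2)])
    R_pi = np.array([R[s, policy[s]] for s in range(2)])
    V = np.linalg.solve(np.eye(2) - gamma * P_pi, R_pi)
    # Greedy improvement (exact here; ε-approximate in the cited bounds).
    Q = R + gamma * P @ V
    new_policy = Q.argmax(axis=1)
    if np.array_equal(new_policy, policy):
        break                               # PI has converged
    policy = new_policy
```

Here PI converges in two iterations to "always move to state 1", with value 1/(1-γ) = 10 in both states.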
Integrated JIT Lot-Splitting Model with Setup Time Reduction for Different Delivery Policy using PSO Algorithm
This article develops an integrated JIT lot-splitting model for a single supplier and a single buyer. The model incorporates setup-time reduction, and the optimal lot sizes are obtained under the reduced setup time in the context of joint optimization for both buyer and supplier, under deterministic conditions with a single product. Two cases are discussed: the Single Delivery (SD) case, and Multi...
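A generic PSO loop of the kind used to optimize such models looks as follows (the sphere objective is a placeholder; the article's actual JIT lot-splitting cost function is not reproduced here):

```python
import random

def f(x):
    return x[0] ** 2 + x[1] ** 2      # placeholder for the lot-splitting cost

random.seed(1)
DIM, SWARM, ITERS = 2, 20, 100
W, C1, C2 = 0.7, 1.5, 1.5             # inertia and acceleration weights

pos = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(SWARM)]
vel = [[0.0] * DIM for _ in range(SWARM)]
pbest = [p[:] for p in pos]           # each particle's best-seen position
pbest_val = [f(p) for p in pos]
g = min(range(SWARM), key=lambda i: pbest_val[i])
gbest, gbest_val = pbest[g][:], pbest_val[g]
initial_val = gbest_val

for _ in range(ITERS):
    for i in range(SWARM):
        for d in range(DIM):
            r1, r2 = random.random(), random.random()
            vel[i][d] = (W * vel[i][d]
                         + C1 * r1 * (pbest[i][d] - pos[i][d])
                         + C2 * r2 * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        v = f(pos[i])
        if v < pbest_val[i]:          # update personal and global bests
            pbest[i], pbest_val[i] = pos[i][:], v
            if v < gbest_val:
                gbest, gbest_val = pos[i][:], v
```

Each particle is pulled toward its own best-seen point and the swarm's global best, so the global best value can only improve over the run.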
Importance sampling-based approximate optimal planning and control
In this paper, we propose a sampling-based planning and optimal control method for nonlinear systems under non-differentiable constraints. Motivated by developing scalable planning algorithms, we consider the optimal motion plan to be a feedback controller that can be approximated by a weighted sum of given bases. Given this approximate optimal control formulation, our main contribution is to in...
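The idea of a feedback controller approximated as a weighted sum of given bases, with weights improved by sampling and reweighting toward low-cost samples, can be sketched as follows (the cubic basis, the toy plant, and the cross-entropy-style update are our simplifications, not the paper's algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)

def rollout_cost(w, x0=1.0, dt=0.1, steps=30):
    """Cost of the basis-function controller u(x) = w[0]*x + w[1]*x**3
    on a toy unstable plant dx/dt = x + u."""
    x, cost = x0, 0.0
    for _ in range(steps):
        u = w[0] * x + w[1] * x ** 3
        # Clip the state so wildly sampled weights cannot overflow.
        x = float(np.clip(x + dt * (x + u), -1e6, 1e6))
        cost += x * x + 0.01 * u * u
    return cost

# Sample controller weights from a Gaussian, keep the lowest-cost samples,
# and refit the sampling distribution to them (cross-entropy-style step).
mu, sigma = np.zeros(2), 1.0
best_w, best_cost = mu.copy(), rollout_cost(mu)
for _ in range(20):
    samples = mu + sigma * rng.standard_normal((50, 2))
    costs = np.array([rollout_cost(w) for w in samples])
    elite = samples[np.argsort(costs)[:10]]        # 10 best weight vectors
    mu, sigma = elite.mean(axis=0), max(elite.std(), 0.1)
    i = costs.argmin()
    if costs[i] < best_cost:
        best_w, best_cost = samples[i].copy(), costs[i]
```

The search happens entirely in the low-dimensional weight space of the basis expansion, which is what makes this kind of formulation attractive for scalable planning.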