Depth Control of Model-Free AUVs via Reinforcement Learning
Authors
Abstract
In this paper, we consider the depth control problem of an autonomous underwater vehicle (AUV) tracking desired depth trajectories. Because the dynamical model of the AUV is unknown, the problem cannot be solved by most model-based controllers. To this end, we formulate the depth control problem of the AUV as a continuous-state, continuous-action Markov decision process (MDP) with unknown transition probabilities. Based on the deterministic policy gradient (DPG) and neural network approximation, we propose a model-free reinforcement learning (RL) algorithm that learns a state-feedback controller from sampled trajectories of the AUV. To improve the performance of the RL algorithm, we further propose a batch-learning scheme that replays prioritized past trajectories. Simulations show that our model-free method is comparable to model-based controllers such as LQI and NMPC. Moreover, we validate the effectiveness of the proposed RL algorithm on a seafloor data set sampled from the South China Sea.
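For concreteness, the sketch below illustrates the kind of DPG actor-critic update with prioritized replay that the abstract describes. It is a minimal sketch under assumptions, not the authors' implementation: the environment interface, state/action dimensions, network sizes, hyperparameters, and the transition-level (rather than trajectory-level) prioritization are all placeholders.

```python
# Minimal DPG actor-critic sketch with prioritized replay (illustrative only).
# State/action dimensions, network sizes, and hyperparameters are assumptions.
import numpy as np
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM = 6, 1  # assumed AUV state/control dimensions

actor = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.Tanh(),
                      nn.Linear(64, ACTION_DIM), nn.Tanh())
critic = nn.Sequential(nn.Linear(STATE_DIM + ACTION_DIM, 64), nn.Tanh(),
                       nn.Linear(64, 1))
actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-4)
critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)

replay = []          # list of (s, a, r, s_next, priority)
GAMMA = 0.99

def store(s, a, r, s_next, td_error):
    # Priority grows with the magnitude of the TD error of the transition.
    replay.append((s, a, r, s_next, abs(td_error) + 1e-3))

def sample_batch(batch_size=64):
    # Prioritized sampling: probability proportional to stored priority.
    pri = np.array([t[4] for t in replay])
    idx = np.random.choice(len(replay), batch_size, p=pri / pri.sum())
    return [replay[i] for i in idx]

def update(batch):
    s = torch.tensor(np.array([t[0] for t in batch]), dtype=torch.float32)
    a = torch.tensor(np.array([t[1] for t in batch]), dtype=torch.float32)
    r = torch.tensor(np.array([t[2] for t in batch]), dtype=torch.float32).unsqueeze(1)
    s2 = torch.tensor(np.array([t[3] for t in batch]), dtype=torch.float32)

    # Critic: one-step TD target using the deterministic policy at s2.
    with torch.no_grad():
        target = r + GAMMA * critic(torch.cat([s2, actor(s2)], dim=1))
    critic_loss = nn.functional.mse_loss(critic(torch.cat([s, a], dim=1)), target)
    critic_opt.zero_grad()
    critic_loss.backward()
    critic_opt.step()

    # Actor: deterministic policy gradient ascends Q(s, pi(s)).
    actor_loss = -critic(torch.cat([s, actor(s)], dim=1)).mean()
    actor_opt.zero_grad()
    actor_loss.backward()
    actor_opt.step()
```

Note that the abstract describes replaying prioritized trajectories; the transition-level prioritization above is a simplification meant only to show the mechanics of priority-proportional sampling combined with the deterministic policy gradient.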
Similar references
Reinforcement learning based feedback control of tumor growth by limiting maximum chemo-drug dose using fuzzy logic
In this paper, a model-free reinforcement learning-based controller is designed to extract a treatment protocol because the design of a model-based controller is complex due to the highly nonlinear dynamics of cancer. The Q-learning algorithm is used to develop an optimal controller for cancer chemotherapy drug dosing. In the Q-learning algorithm, each entry of the Q-table is updated using data...
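As a point of reference, the Q-table update mentioned in that snippet follows the standard tabular Q-learning rule; the sketch below is generic, and the state/action discretization for drug dosing is an assumption, not taken from that paper.

```python
import numpy as np

# Assumed discretization of patient state and drug-dose levels (illustrative only).
n_states, n_actions = 20, 5
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.1, 0.95  # learning rate, discount factor

def q_update(s, a, r, s_next):
    # Standard Q-learning: move Q(s, a) toward r + gamma * max_a' Q(s', a').
    target = r + gamma * Q[s_next].max()
    Q[s, a] += alpha * (target - Q[s, a])
```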
Model-Based Value Expansion for Efficient Model-Free Reinforcement Learning
Recent model-free reinforcement learning algorithms have proposed incorporating learned dynamics models as a source of additional data with the intention of reducing sample complexity. Such methods hold the promise of incorporating imagined data coupled with a notion of model uncertainty to accelerate the learning of continuous control tasks. Unfortunately, they rely on heuristics that limit us...
Mini/Micro-Grid Adaptive Voltage and Frequency Stability Enhancement Using Q-learning Mechanism
This paper develops an adaptive control method for controlling the frequency and voltage of an islanded mini/micro grid (M/µG) using reinforcement learning. Reinforcement learning (RL) is a branch of machine learning and a principal solution method for Markov decision processes (MDPs). Among the several solution methods of RL, the Q-learning method is used for solving RL in th...
Connecting rule-abstraction and model-based choice across disparate learning tasks
Recent research has identified key differences in the way individuals make decisions in predictive learning tasks, including the use of feature- and rule-based strategies in causal learning and model-based versus model-free choices in reinforcement learning. These results suggest that people rely to varying degrees on separable psychological processes. However, the relationship between these type...
Cognitive Control Mode Predicts Behavioral Expression of Model-Based Reinforcement-Learning
A converging body of work suggests that cognitive control operates via two distinct operating modes – proactive control and reactive control, dissociable on a number of dimensions, such as computational properties, neural substrates, temporal dynamics, and consequences for information processing. At the same time, two forms of reinforcement learning (RL), called Model-Based and Model-Free RL, w...
Journal: CoRR
Volume: abs/1711.08224
Issue: -
Pages: -
Publication year: 2017