A Q-learning Based Continuous Tuning of Fuzzy Wall Tracking
Abstract:
A simple, easy-to-implement algorithm is proposed to address the wall-tracking task of an autonomous robot. The robot must navigate in unknown environments, find the nearest wall, and track it solely on the basis of locally sensed data. The proposed method couples fuzzy logic and Q-learning to meet the requirements of autonomous navigation. Fuzzy if-then rules provide a reliable decision-making framework for handling uncertainty and also allow heuristic knowledge to be incorporated. The dynamic structure of Q-learning makes it a promising tool for tuning fuzzy inference systems when little or no prior knowledge about the world is available. To the robot, the world is modeled as a set of state-action pairs. For each fuzzified state there is a set of suggested actions, and states are related to their corresponding actions via fuzzy if-then rules based on human reasoning. Through online experience, the robot selects, for each state, the action most encouraged by reward. Experiments on a simulated Khepera robot validate the efficiency of the proposed method. Simulation results demonstrate a successful implementation of the wall-tracking task, in which the robot keeps itself within predefined margins from walls, even walls with complex concave, convex, or polygonal shapes.
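Only the abstract is available here, so the following minimal Python sketch illustrates the general fuzzy Q-learning scheme the abstract describes rather than the authors' implementation; the rule base, membership shapes, action names, and hyperparameters are all assumptions introduced for illustration.

```python
import random

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # assumed learning rate, discount, exploration rate

# Hypothetical rule base: each fuzzified wall-distance state suggests a few
# candidate actions, each carrying its own learnable q-value.
rules = {
    "too_close": {"turn_away": 0.0, "keep_straight": 0.0},
    "in_margin": {"keep_straight": 0.0, "turn_toward": 0.0, "turn_away": 0.0},
    "too_far":   {"turn_toward": 0.0, "keep_straight": 0.0},
}

def firing_strengths(distance):
    """Toy triangular/ramp memberships for the sensed wall distance."""
    close = max(0.0, 1.0 - distance / 0.5)
    far = max(0.0, min(1.0, (distance - 0.5) / 0.5))
    mid = max(0.0, 1.0 - abs(distance - 0.5) / 0.25)
    return {"too_close": close, "in_margin": mid, "too_far": far}

def select_actions(strengths):
    """Epsilon-greedy choice of one candidate action per fired rule."""
    chosen = {}
    for state, weight in strengths.items():
        if weight > 0.0:
            q = rules[state]
            chosen[state] = (random.choice(list(q)) if random.random() < EPSILON
                             else max(q, key=q.get))
    return chosen

def update(strengths, chosen, reward, next_strengths):
    """Q-learning update of every fired rule, weighted by its firing strength."""
    next_value = sum(w * max(rules[s].values()) for s, w in next_strengths.items())
    for state, action in chosen.items():
        td_error = reward + GAMMA * next_value - rules[state][action]
        rules[state][action] += ALPHA * strengths[state] * td_error
```

In a full run, the robot would fuzzify its wall-distance reading at each control step, blend the chosen per-rule actions into a single steering command, and feed the observed reward back through update.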
similar resources
Solving Continuous Action/State Problem in Q-Learning Using Extended Rule Based Fuzzy Inference Systems
Q-learning is a kind of reinforcement learning where the agent solves the given task based on rewards received from the environment. Most research done in the field of Q-learning has focused on discrete domains, although the environment with which the agent must interact is generally continuous. Thus we need to devise some methods that enable Q-learning to be applicable to the continuous proble...
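For reference, the update that underlies this family of methods, and that implicitly assumes a discrete state-action table, is the standard Q-learning rule (a textbook form, not taken from the paper above):

$$Q(s_t, a_t) \leftarrow Q(s_t, a_t) + \alpha \left[ r_{t+1} + \gamma \max_{a'} Q(s_{t+1}, a') - Q(s_t, a_t) \right].$$

This is exactly what breaks down when $s$ and $a$ range over continuous spaces, since the table becomes infinite; the fuzzy extension replaces the table lookup with rule firing strengths.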
Continuous Deep Q-Learning with Model-based Acceleration: Appendix
The iLQG algorithm optimizes trajectories by iteratively constructing locally optimal linear feedback controllers under a local linearization of the dynamics $p(x_{t+1} \mid x_t, u_t) = \mathcal{N}(f_{x_t} x_t + f_{u_t} u_t, F_t)$ and a quadratic expansion of the rewards $r(x_t, u_t)$ (Tassa et al., 2012). Under linear dynamics and quadratic rewards, the action-value function $Q(x_t, u_t)$ and value function $V(x_t)$ are locally quadratic an...
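Concretely, the locally quadratic form referred to here is the standard iLQG backward-pass expansion (a generic form reconstructed from the literature, not copied from this appendix):

$$Q(x_t, u_t) \approx \frac{1}{2}\begin{bmatrix} x_t \\ u_t \end{bmatrix}^{\top} \begin{bmatrix} Q_{xx} & Q_{xu} \\ Q_{ux} & Q_{uu} \end{bmatrix} \begin{bmatrix} x_t \\ u_t \end{bmatrix} + \begin{bmatrix} Q_x \\ Q_u \end{bmatrix}^{\top} \begin{bmatrix} x_t \\ u_t \end{bmatrix},$$

whose minimization over $u_t$ yields the linear feedback controller $u_t = -Q_{uu}^{-1}(Q_u + Q_{ux}\, x_t)$.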
Continuous Deep Q-Learning with Model-based Acceleration
Model-free reinforcement learning has been successfully applied to a range of challenging problems, and has recently been extended to handle large neural network policies and value functions. However, the sample complexity of modelfree algorithms, particularly when using highdimensional function approximators, tends to limit their applicability to physical systems. In this paper, we explore alg...
Q2: Memory-based Active Learning for Optimizing Noisy Continuous Functions
This paper introduces a new algorithm, Q2, for optimizing the expected output of a multi-input noisy continuous function. Q2 is designed to need only a few experiments, it avoids strong assumptions on the form of the function, and it is autonomous in that it requires little problem-specific tweaking. These capabilities are directly applicable to industrial processes and may become increasingly valuab...
Incremental-Topological-Preserving-Map-Based Fuzzy Q-Learning (ITPM-FQL)
Reinforcement Learning (RL) is thought to be an appropriate paradigm for acquiring policies for autonomous learning agents that work without initial knowledge, because RL learns from simple "evaluative" or "critic" information instead of the "instructive" information used in Supervised Learning. There are two well-known types of RL, namely Actor-Critic Learning and Q-Learning. Among them, Q-...
Efficient Implementation of Dynamic Fuzzy Q-Learning
This paper presents a Dynamic Fuzzy Q-Learning (DFQL) method that is capable of tuning Fuzzy Inference Systems (FIS) online. On-line self-organizing learning is developed so that structure and parameter identification are accomplished automatically and simultaneously. Self-organizing fuzzy inference is introduced to calculate actions and Q-functions so as to enable us to deal with continuou...
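The continuous action and Q-value computations this line of work relies on are usually the Jouffe-style fuzzy Q-learning sums; the generic form (stated here for orientation, not copied from the DFQL paper) blends each fired rule's selected action $a_i$ and its q-value $q_i(a_i)$ by normalized firing strengths $\phi_i(x)$:

$$U(x) = \frac{\sum_i \phi_i(x)\, a_i}{\sum_i \phi_i(x)}, \qquad Q\big(x, U(x)\big) = \frac{\sum_i \phi_i(x)\, q_i(a_i)}{\sum_i \phi_i(x)}.$$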
Journal title
Volume 25, Issue 4
Pages 355-366
Publication date: 2012-10-01