Hierarchical Functional Concepts for Knowledge Transfer among Reinforcement Learning Agents

Authors

  • A. Mousavi Control and Intelligent Processing Center of Excellence, School of Electrical and Computer Engineering, University of Tehran, Tehran, Iran
  • B. N. Araabi Control and Intelligent Processing Center of Excellence, School of Electrical and Computer Engineering, University of Tehran, Tehran, Iran and School of Cognitive Science, Institute for Research in Fundamental Sciences (IPM), Tehran, Iran
  • H. Vosoughpour Control and Intelligent Processing Center of Excellence, School of Electrical and Computer Engineering, University of Tehran, Tehran, Iran
  • M. Nili Ahmadabadi Control and Intelligent Processing Center of Excellence, School of Electrical and Computer Engineering, University of Tehran, Tehran, Iran and School of Cognitive Science, Institute for Research in Fundamental Sciences (IPM), Tehran, Iran
  • N. Zaare Control and Intelligent Processing Center of Excellence, School of Electrical and Computer Engineering, University of Tehran, Tehran, Iran
Abstract:

This article introduces the notions of functional space and functional concept as a means of knowledge representation and abstraction for reinforcement learning agents, and uses them as a tool for knowledge transfer among agents. The agents are assumed to be heterogeneous: they have different state spaces but share the same dynamics, reward function, and action space. In other words, the agents have different representations of the environment while performing similar actions. The learning framework is $Q$-learning. Each dimension of the functional space is the normalized expected value of an action. An unsupervised clustering approach forms the functional concepts as fuzzy regions in the functional space, and the same clustering approach abstracts the concepts further into a hierarchy. The hierarchical concepts are then employed for knowledge transfer among agents. Properties of the proposed approach are examined in a set of case studies. The results show that the approach is very effective for transfer learning among heterogeneous agents, especially in the early episodes of learning.
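As a rough illustration of the functional-space idea (a minimal sketch under assumptions, not the authors' implementation), the snippet below maps each state's Q-values to a point whose components are the normalized expected values of the shared actions; the shift-and-normalize scheme, the helper names (`functional_point`, `functional_space`), and the toy Q-table are assumptions made for illustration only. Clustering the resulting points (e.g., with fuzzy c-means) would then play the role of the fuzzy functional concepts described in the abstract.

```python
# Sketch of the functional-space mapping described in the abstract (assumed details).
import numpy as np

def functional_point(q_row, eps=1e-12):
    """Map one state's Q-values to a point on the probability simplex.

    q_row[a] is Q(s, a); shifting to non-negative values and dividing by the
    sum is an assumed normalization, used here only for illustration.
    """
    shifted = q_row - q_row.min()
    total = shifted.sum()
    if total < eps:                      # all actions equally valued
        return np.full_like(q_row, 1.0 / q_row.size)
    return shifted / total

def functional_space(q_table):
    """Stack the functional points of all states of one agent."""
    return np.array([functional_point(row) for row in q_table])

# Toy example: an agent with 5 states and 3 shared actions.
rng = np.random.default_rng(0)
Q = rng.normal(size=(5, 3))
F = functional_space(Q)
print(F)   # each row sums to 1; rows from different agents are directly comparable
```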


Similar resources


Knowledge Transfer for Deep Reinforcement Learning with Hierarchical Experience Replay

The process of transferring the knowledge of multiple reinforcement learning policies into a single multi-task policy via a distillation technique is known as policy distillation. In a deep reinforcement learning setting, the large parameter size and the huge state space of each task domain mean that extensive computational effort is required to train the multi-task po...
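For context, the sketch below shows one common form of the distillation step referenced above, assuming a KL-divergence loss that matches the student policy to a temperature-sharpened teacher policy; the function names, the temperature value, and the toy Q-value batches are illustrative assumptions, not the paper's exact formulation.

```python
# Hedged sketch of a policy-distillation loss (assumed form, not the paper's code).
import numpy as np

def softmax(x, tau=1.0):
    z = (x - x.max(axis=-1, keepdims=True)) / tau
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(teacher_q, student_q, tau=0.01):
    """KL(teacher || student) averaged over a batch of states."""
    p = softmax(teacher_q, tau)          # sharpened teacher action distribution
    q = softmax(student_q, 1.0)          # student action distribution
    return np.mean(np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=-1))

# Toy usage: a batch of 4 states with 6 actions each.
rng = np.random.default_rng(1)
teacher = rng.normal(size=(4, 6))
student = rng.normal(size=(4, 6))
print(distillation_loss(teacher, student))
```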


Grounding Hierarchical Reinforcement Learning Models for Knowledge Transfer

Methods of deep machine learning make it possible to reuse low-level representations efficiently for generating more abstract high-level representations. Originally, deep learning was applied passively (e.g., for classification purposes). Recently, it has been extended to estimate the value of actions for autonomous agents within the framework of reinforcement learning (RL). Explicit models of th...
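As a toy illustration of the reuse described above (an assumption-laden sketch, not the paper's model), the snippet shares one low-level feature layer between a passive classification head and an action-value head used for RL; all weights, shapes, and names here are invented for the example.

```python
# Sketch: one shared low-level representation feeding two task-specific heads.
import numpy as np

rng = np.random.default_rng(2)

def relu(x):
    return np.maximum(x, 0.0)

# Shared low-level layer (e.g., learned earlier on a passive classification task).
W_shared = rng.normal(scale=0.1, size=(8, 16))

# Task-specific heads stacked on top of the shared representation.
W_classify = rng.normal(scale=0.1, size=(16, 10))   # 10 classes
W_value = rng.normal(scale=0.1, size=(16, 4))       # Q-values for 4 actions

def features(obs):
    return relu(obs @ W_shared)          # reused low-level representation

obs = rng.normal(size=(1, 8))            # one observation
class_logits = features(obs) @ W_classify
action_values = features(obs) @ W_value  # RL head estimates the value of actions
print(class_logits.shape, action_values.shape)
```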


Hierarchical Reinforcement Learning for Communicating Agents

This paper proposes hierarchical reinforcement learning (RL) methods for communication in multiagent coordination problems modelled as Markov Decision Processes (MDPs). To bridge the gap between the MDP view and the methods used to specify communication protocols in multiagent systems (using logical conditions and propositional message structure), we utilise interaction frames as powerful polic...


A Modular Approach to Knowledge Transfer Between Reinforcement Learning Agents

Reinforcement learning is a general approach to learning reactive control policies. It requires no supervised examples, making it a candidate for use in systems that adapt to changing tasks and environments by autonomously devising new strategies. Unfortunately, reinforcement learning methods are slow to converge to a solution, rendering them impractical in most cases. The key shortcoming ...



Journal title

Volume 12, Issue 5

Pages 99-116

Publication date: 2015-10-30
