GACS: Generative Adversarial Imitation Learning Based on Control Sharing

Authors

Abstract

Generative adversarial imitation learning (GAIL) directly imitates expert behavior from human demonstrations instead of designing explicit reward signals as in reinforcement learning. By using a generative adversarial network framework, GAIL overcomes the defects of traditional imitation learning and shows excellent performance in many fields. However, GAIL acts on immediate rewards, a feature that is reflected in the value function only after a period of accumulation. Thus, when faced with complex practical problems, its training efficiency is often extremely low and the policy may be slow to learn. One way to solve this problem is to guide the agents' action (policy) learning process, for example with the control sharing (CS) method. This paper combines the two ideas and proposes a novel framework called generative adversarial imitation learning based on control sharing (GACS). GACS learns a model of constraints from expert samples and uses the adversarial networks to guide policy learning directly. The actions it produces are used to optimize the policy and effectively improve learning efficiency. Experiments in an autonomous driving environment and the real-time strategy game Breakout show that GACS has better generalization capabilities, learns more efficiently than the experts, and can learn better policies relative to other frameworks.
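To make the mechanism described in the abstract concrete, the sketch below pairs a standard GAIL-style discriminator reward with a simple control-sharing step in which a guide network, fitted to expert samples, occasionally overrides the policy's action during data collection. This is a minimal illustration under assumed names and hyperparameters (the guide network, the sharing rate BETA, the toy random states, and the REINFORCE-style update are all placeholders), not the authors' actual GACS implementation.

```python
# Illustrative sketch only: GAIL-style discriminator reward plus a control-sharing
# guide that is behaviorally cloned from expert samples. Toy random tensors replace
# real environment rollouts; all architectures and constants are assumptions.
import torch
import torch.nn as nn

STATE_DIM, N_ACTIONS, BATCH = 4, 3, 64

policy = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.Tanh(), nn.Linear(64, N_ACTIONS))
guide = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.Tanh(), nn.Linear(64, N_ACTIONS))
disc = nn.Sequential(nn.Linear(STATE_DIM + N_ACTIONS, 64), nn.Tanh(), nn.Linear(64, 1))

opt_pi = torch.optim.Adam(policy.parameters(), lr=3e-4)
opt_g = torch.optim.Adam(guide.parameters(), lr=3e-4)
opt_d = torch.optim.Adam(disc.parameters(), lr=3e-4)
bce, ce = nn.BCEWithLogitsLoss(), nn.CrossEntropyLoss()

def one_hot(a):
    return torch.eye(N_ACTIONS)[a]

BETA = 0.3  # control-sharing rate: how often the guide's action replaces the policy's

for step in range(1000):
    # Expert batch (placeholder: substitute real demonstrations here).
    s_e = torch.randn(BATCH, STATE_DIM)
    a_e = torch.randint(0, N_ACTIONS, (BATCH,))

    # 1) Fit the guide to expert samples (behavioral cloning stands in for the
    #    "model of constraints" learned from demonstrations).
    g_loss = ce(guide(s_e), a_e)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

    # 2) Collect agent transitions with control sharing (toy random states).
    s = torch.randn(BATCH, STATE_DIM)
    with torch.no_grad():
        a_pi = torch.distributions.Categorical(logits=policy(s)).sample()
        a_gd = guide(s).argmax(dim=-1)
        share = torch.rand(BATCH) < BETA          # with prob. BETA, execute the guide's action
        a_exec = torch.where(share, a_gd, a_pi)

    # 3) Discriminator update: expert pairs -> 1, agent pairs -> 0.
    d_loss = bce(disc(torch.cat([s_e, one_hot(a_e)], -1)), torch.ones(BATCH, 1)) + \
             bce(disc(torch.cat([s, one_hot(a_exec)], -1)), torch.zeros(BATCH, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 4) Policy update with the adversarial reward (simple REINFORCE stand-in for
    #    the paper's reinforcement learning step).
    with torch.no_grad():
        r = -torch.log(1.0 - torch.sigmoid(disc(torch.cat([s, one_hot(a_exec)], -1))) + 1e-8)
    logp = torch.distributions.Categorical(logits=policy(s)).log_prob(a_exec)
    pi_loss = -(logp * r.squeeze(-1)).mean()
    opt_pi.zero_grad(); pi_loss.backward(); opt_pi.step()
```

The key point of the sketch is that the guide's actions enter both the executed behavior and the policy-gradient update, so the adversarial networks shape the policy directly rather than only through accumulated rewards.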


Similar Articles

Generative Adversarial Imitation Learning

Consider learning a policy from example expert behavior, without interaction with the expert or access to a reinforcement signal. One approach is to recover the expert's cost function with inverse reinforcement learning, then extract a policy from that cost function with reinforcement learning. This approach is indirect and can be slow. We propose a new general framework for directly extracting a...


Multimodal Storytelling via Generative Adversarial Imitation Learning

Deriving event storylines is an effective summarization method to succinctly organize extensive information, which can significantly alleviate the pain of information overload. The critical challenge is the lack of a widely recognized definition of a storyline metric. Prior studies have developed various approaches based on different assumptions about users' interests. These works can extract inter...


Multi-agent Generative Adversarial Imitation Learning

We propose a new framework for multi-agent imitation learning for general Markov games, where we build upon a generalized notion of inverse reinforcement learning. We introduce a practical multi-agent actor-critic algorithm with good empirical performance. Our method can be used to imitate complex behaviors in high-dimensional environments with multiple cooperative or competitive agents...


Learning a Visual State Representation for Generative Adversarial Imitation Learning

Imitation learning is a branch of reinforcement learning that aims to train an agent to imitate an expert’s behaviour, with no explicit reward signal or knowledge of the world. Generative Adversarial Imitation Learning (GAIL) is a recent model that performs this very well, in a data-efficient manner. However, it has only been used with low-level, low-dimensional state information, with few resu...


Model-based Adversarial Imitation Learning

Generative adversarial learning is a popular new approach to training generative models which has been proven successful for other related problems as well. The general idea is to maintain an oracle D that discriminates between the expert’s data distribution and that of the generative model G. The generative model is trained to capture the expert’s distribution by maximizing the probability of ...

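The last entry above describes the adversarial core these methods share: an oracle discriminator D is trained to tell the expert's data distribution from the generative model G's samples, while G is trained to make its samples pass as expert data. The toy example below shows that loop on 1-D Gaussian data; the architectures, data, and hyperparameters are illustrative assumptions, not any cited paper's implementation.

```python
# Toy sketch of the adversarial setup: D separates expert data from G's samples,
# G maximizes the probability D assigns to its samples. All names are illustrative.
import torch
import torch.nn as nn

D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))   # oracle / discriminator
G = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 1))   # generator
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    expert = torch.randn(128, 1) * 0.5 + 2.0   # stand-in for the expert's data distribution
    fake = G(torch.randn(128, 4))              # generator samples drawn from noise

    # Discriminator update: expert samples -> 1, generator samples -> 0.
    d_loss = bce(D(expert), torch.ones(128, 1)) + bce(D(fake.detach()), torch.zeros(128, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator update: maximize the probability D assigns to generator samples.
    g_loss = bce(D(fake), torch.ones(128, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```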


Journal

Journal title: Journal of Systems Science and Information

Year: 2023

ISSN: 1478-9906, 2512-6660

DOI: https://doi.org/10.21078/jssi-2023-078-16