Learning multi-agent coordination through connectivity-driven communication

Authors

Abstract

In artificial multi-agent systems, the ability to learn collaborative policies is predicated upon the agents' communication skills: they must be able to encode the information received from the environment and learn how to share it with other agents as required by the task at hand. We present a deep reinforcement learning approach, Connectivity Driven Communication (CDC), that facilitates the emergence of collaborative behaviour purely through experience. The agents are modelled as nodes of a weighted graph whose state-dependent edges determine which pair-wise messages can be exchanged. We introduce a graph-dependent attention mechanism that controls how incoming messages are weighted. This mechanism takes into full account the current state of the system as represented by the graph, and builds on a diffusion process that captures how information flows on the graph. The graph topology is not assumed to be known a priori, but depends dynamically on the agents' observations and is learnt concurrently with the policy in an end-to-end fashion. Our empirical results show that CDC is effective and can over-perform competing algorithms on cooperative navigation tasks.
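
The abstract describes the mechanism only at a high level. As an illustration, the snippet below is a minimal sketch (not the authors' code) of one way such connectivity-driven message weighting could look in PyTorch: agent observations are encoded into node features, a small MLP produces state-dependent edge weights, and a truncated diffusion over the resulting graph yields the attention used to mix incoming messages. All class, function, and parameter names here (DiffusionCommSketch, edge_mlp, n_hops, etc.) are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of graph-diffusion-weighted message passing between agents.
# Assumptions: pairwise edge weights come from a small MLP over encoded
# observations, and a truncated diffusion (powers of the row-normalised
# adjacency) provides the attention used to mix incoming messages.
import torch
import torch.nn as nn


class DiffusionCommSketch(nn.Module):
    def __init__(self, obs_dim: int, msg_dim: int, n_hops: int = 3):
        super().__init__()
        self.encoder = nn.Linear(obs_dim, msg_dim)   # per-agent observation encoder
        self.edge_mlp = nn.Sequential(               # scores an ordered pair of encodings -> edge weight
            nn.Linear(2 * msg_dim, 32), nn.ReLU(), nn.Linear(32, 1)
        )
        self.n_hops = n_hops                         # truncation length of the diffusion series

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        # obs: (n_agents, obs_dim) -> incoming messages: (n_agents, msg_dim)
        h = torch.relu(self.encoder(obs))            # node features / outgoing messages
        n = h.shape[0]

        # State-dependent weighted adjacency: one scalar per ordered agent pair,
        # learnt end-to-end with whatever policy consumes the messages.
        pairs = torch.cat(
            [h.unsqueeze(1).expand(n, n, -1), h.unsqueeze(0).expand(n, n, -1)], dim=-1
        )
        adj = torch.sigmoid(self.edge_mlp(pairs)).squeeze(-1)   # (n, n)

        # Row-normalise and accumulate a truncated diffusion over the graph,
        # so attention reflects multi-hop information flow, not just direct edges.
        p = adj / adj.sum(dim=-1, keepdim=True).clamp_min(1e-8)
        diffusion = torch.zeros_like(p)
        step = torch.eye(n)
        for _ in range(self.n_hops):
            step = step @ p
            diffusion = diffusion + step
        attn = diffusion / diffusion.sum(dim=-1, keepdim=True).clamp_min(1e-8)

        # Each agent's incoming message is the attention-weighted mix of all encodings.
        return attn @ h


if __name__ == "__main__":
    comm = DiffusionCommSketch(obs_dim=8, msg_dim=16)
    incoming = comm(torch.randn(5, 8))   # 5 agents, 8-dimensional observations
    print(incoming.shape)                # torch.Size([5, 16])
```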


Similar articles

Decentralized Anti-coordination Through Multi-agent Learning

To achieve an optimal outcome in many situations, agents need to choose distinct actions from one another. This is the case notably in many resource allocation problems, where a single resource can only be used by one agent at a time. How shall a designer of a multi-agent system program its identical agents to behave each in a different way? From a game theoretic perspective, such situations le...


Multi-Agent Coordination by Communication of Evaluations

A framework for coordination in multi-agent systems is introduced. The main idea of our framework is that an agent with knowledge about the desired behavior in a certain domain will direct other, domain-independent agents by means of signals which reflect its evaluation of the coordination between its own actions and their actions. Mechanisms for coordination are required to enable construction ...


Deriving Multi-Agent Coordination through Filtering Strategies

We examine an approach to multi-agent coordination that builds on earlier work on enabling single agents to control their reasoning in dynamic environments. Specifically, we study a generalization of the filtering strategy. Where single-agent filtering means tending to bypass options that are incompatible with an agent's own goals, multi-agent filtering means tending to bypass options that are ...


Multi-Agent Coordination through Coalition Formation

Incorporating coalition formation algorithms into agent systems shall be advantageous due to the consequent increase in the overall quality of task performance. Coalition formation was addressed in game theory, however the game theoretic approach is centralized and computationally intractable. Recent work in DAI has resulted in distributed algorithms with computational tractability. This paper ...


Transfer Learning for Multi-agent Coordination

Transfer learning leverages an agent’s experience in a source task in order to improve its performance in a related target task. Recently, this technique has received attention in reinforcement learning settings. Training a reinforcement learning agent on a suitable source task allows the agent to reuse this experience to significantly improve performance on more complex target problems. Curren...



Journal

Journal title: Machine Learning

Year: 2022

ISSN: 0885-6125, 1573-0565

DOI: https://doi.org/10.1007/s10994-022-06286-6