Should I trust my teammates? An experiment in Heuristic Multiagent Reinforcement Learning
Authors
Abstract
Trust and reputation are concepts that have traditionally been studied in domains such as electronic markets, e-commerce, game theory and bibliometrics, among others. More recently, researchers have started to investigate the benefits of using these concepts in multi-robot domains: when one robot has to decide whether to cooperate with another to accomplish a task, should its trust in the other be taken into account? This paper proposes the use of a trust model to define when one agent can take an action that depends on other agents of its team. To implement this idea, a Heuristic Multiagent Reinforcement Learning algorithm is modified to take the trust in the other agents into account before selecting an action that depends on them. Simulations were carried out in a robot soccer domain that extends a well-known one proposed by Littman by increasing its size and the number of agents, and by using heterogeneous agents. The results show that the performance of a team of agents can be improved even when very simple trust models are used.
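The abstract gives no pseudocode, but the core idea can be illustrated with a minimal sketch: a heuristically accelerated Q-learning agent (greedy over Q(s,a) + xi*H(s,a), one common formulation of heuristic RL) that only considers cooperative actions when a simple trust estimate for the required teammate exceeds a threshold. All class, method and parameter names below (TrustGatedHAQLAgent, trust_threshold, update_trust, ...) are illustrative assumptions, not taken from the paper.

import random
from collections import defaultdict

class TrustGatedHAQLAgent:
    """Sketch of a heuristically accelerated Q-learning agent whose
    cooperative actions are gated by a simple trust model over teammates.
    Names and the trust model are illustrative, not from the paper."""

    def __init__(self, actions, alpha=0.1, gamma=0.9, xi=1.0,
                 epsilon=0.1, trust_threshold=0.5):
        self.q = defaultdict(float)            # Q(s, a)
        self.h = defaultdict(float)            # heuristic H(s, a), e.g. from a hand-coded policy
        self.trust = defaultdict(lambda: 0.5)  # trust value per teammate, in [0, 1]
        self.actions = actions                 # list of (action_name, teammate_or_None) pairs
        self.alpha, self.gamma, self.xi = alpha, gamma, xi
        self.epsilon = epsilon
        self.trust_threshold = trust_threshold

    def _allowed(self):
        # Keep independent actions; keep cooperative ones only while the
        # required teammate is trusted above the threshold.
        return [a for a in self.actions
                if a[1] is None or self.trust[a[1]] >= self.trust_threshold]

    def select_action(self, state):
        allowed = self._allowed() or self.actions  # never leave the agent without actions
        if random.random() < self.epsilon:
            return random.choice(allowed)
        # HAQL-style greedy choice: Q-value plus a weighted heuristic term.
        return max(allowed, key=lambda a: self.q[(state, a)] + self.xi * self.h[(state, a)])

    def update(self, state, action, reward, next_state):
        # Standard Q-learning update; 'action' is one of the (name, teammate) pairs.
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        self.q[(state, action)] += self.alpha * (
            reward + self.gamma * best_next - self.q[(state, action)])

    def update_trust(self, teammate, success, rate=0.1):
        # Very simple trust model: exponential moving average of cooperation outcomes.
        self.trust[teammate] += rate * ((1.0 if success else 0.0) - self.trust[teammate])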
Similar resources
Samuel Barrett's Research Statement
My research focuses on investigating how robots and other agents should learn and cooperate in order to tackle real-world problems. Agents are entities that repeatedly interact with their environment in order to accomplish their goals. In order for robots and other agents to handle many real-world problems, they must be able to cooperate with other agents and humans. However, they may not always...
Adapting Plans through Communication with Unknown Teammates (Doctoral Consortium)
Coordinating a team of autonomous agents is a challenging problem. Agents must act in such a way that makes progress toward the achievement of a goal while avoiding conflict with their teammates. In information asymmetric domains, it is often necessary to share crucial observations in order to collaborate effectively. In traditional multiagent systems literature, these teams of agents share an ...
Lenience towards Teammates Helps in Cooperative Multiagent Learning
Concurrent learning is a form of cooperative multiagent learning in which each agent has an independent learning process and little or no control over its teammates’ actions. In such learning algorithms, an agent’s perception of the joint search space depends on the reward received by both agents, which in turn depends on the actions currently chosen by the other agents. The agents will tend to...
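For context, the lenience idea referenced in this paper's title can be sketched as follows: an independent learner forgives low rewards caused by teammates' exploration by initially ignoring updates that would lower its value estimates. The class name, parameters and decay schedule below are illustrative assumptions, not the authors' implementation.

import random
from collections import defaultdict

class LenientQLearner:
    """Sketch of a lenient independent learner: early in training it ignores
    updates that would lower Q(s, a); the leniency decays as the pair is revisited."""

    def __init__(self, alpha=0.1, gamma=0.9, leniency_decay=0.995):
        self.q = defaultdict(float)
        self.leniency = defaultdict(lambda: 1.0)  # per (state, action) leniency in [0, 1]
        self.alpha, self.gamma, self.decay = alpha, gamma, leniency_decay

    def update(self, state, action, reward, next_state, next_actions):
        target = reward + self.gamma * max(
            (self.q[(next_state, a)] for a in next_actions), default=0.0)
        key = (state, action)
        # Accept the update if it raises Q; otherwise accept it only with
        # probability (1 - leniency), so negative feedback is admitted gradually.
        if target >= self.q[key] or random.random() > self.leniency[key]:
            self.q[key] += self.alpha * (target - self.q[key])
        self.leniency[key] *= self.decay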
A Multiagent Reinforcement Learning algorithm to solve the Community Detection Problem
Community detection is a challenging optimization problem that consists of searching for communities that belong to a network under the assumption that the nodes of the same community share properties that enable the detection of new characteristics or functional relationships in the network. Although there are many algorithms developed for community detection, most of them are unsuitable when ...
Multiagent meta-level control for radar coordination
It is crucial for embedded systems to adapt to the dynamics of open environments. This adaptation process becomes especially challenging in the context of multiagent systems. In this paper, we argue that multiagent meta-level control is an effective way to determine when this adaptation process should be done and how much effort should be invested in adaptation as opposed to continuing with the...