Search results for: distributed reinforcement learning

Number of results: 868955

2009
Thomas Gabel

Decentralized decision-making has become an active research topic in artificial intelligence. In a distributed system, a number of individually acting agents coexist. If they strive to accomplish a common goal, i.e. if the multi-agent system is a cooperative one, then the establishment of coordinated cooperation between the agents is of utmost importance. With this in mind, our focus is on mult...
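To make the coordination issue concrete, the toy sketch below (not taken from the cited work; every number and name in it is invented for illustration) pits two independent tabular Q-learners against a shared-reward task in which payoff arrives only when both agents happen to pick the same action.

```python
# Toy sketch (not the cited method): two independent tabular Q-learners must
# pick the same action to earn a shared reward, illustrating why explicit
# coordination matters in cooperative multi-agent RL.
import numpy as np

rng = np.random.default_rng(0)
n_actions = 3
q = [np.zeros(n_actions), np.zeros(n_actions)]   # one Q-table per agent
alpha, epsilon = 0.1, 0.2

for episode in range(2000):
    # each agent picks epsilon-greedily, without seeing the other's choice
    acts = [qi.argmax() if rng.random() > epsilon else rng.integers(n_actions)
            for qi in q]
    reward = 1.0 if acts[0] == acts[1] else 0.0   # shared reward on agreement
    for i in range(2):
        q[i][acts[i]] += alpha * (reward - q[i][acts[i]])

print("Agent 0 prefers action", int(q[0].argmax()),
      "| Agent 1 prefers action", int(q[1].argmax()))
```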

Journal: IEEE Internet of Things Journal 2023

Vehicular edge computing has emerged as a promising paradigm by offloading computation-intensive, latency-sensitive tasks to mobile-edge computing (MEC) servers. However, it is difficult to provide users with excellent Quality-of-Service (QoS) relying only on these server resources. Therefore, in this article, we propose to formulate the computation offloading policy based on deep reinforcement learning (DRL) for vehicle-assiste...
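As a rough illustration of how an offloading decision can be cast as a reinforcement-learning problem, the sketch below uses a tiny tabular learner over hypothetical (task size, channel quality) states and an assumed action set of local execution, an MEC server, and a neighbor vehicle; it is not the article's DRL formulation, and the latency model is invented for the example.

```python
# Hypothetical sketch of offloading as an RL problem (not the article's
# formulation): state = (task-size bin, channel-quality bin), action = where
# to run the task, reward = negative latency from an invented latency model.
import numpy as np

rng = np.random.default_rng(1)
ACTIONS = ["local", "mec_server", "neighbor_vehicle"]     # assumed action set
n_size_bins, n_chan_bins = 4, 3
q = np.zeros((n_size_bins * n_chan_bins, len(ACTIONS)))
alpha, epsilon = 0.1, 0.1

def latency(size_bin, chan_bin, action):
    """Toy model: local compute is slow for big tasks; offloading is slow
    when the wireless channel is poor."""
    if action == 0:                                # local execution
        return 1.0 + size_bin
    transmit = (size_bin + 1) / (chan_bin + 1)
    compute = 0.3 if action == 1 else 0.6          # MEC server beats a neighbor
    return transmit + compute

for step in range(20_000):
    size_bin, chan_bin = rng.integers(n_size_bins), rng.integers(n_chan_bins)
    s = size_bin * n_chan_bins + chan_bin
    a = int(q[s].argmax()) if rng.random() > epsilon else int(rng.integers(len(ACTIONS)))
    r = -latency(size_bin, chan_bin, a)            # reward = negative latency
    q[s, a] += alpha * (r - q[s, a])               # contextual-bandit style update

print("Best action per state:", [ACTIONS[i] for i in q.argmax(axis=1)])
```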

Journal: IEEE/ACM Transactions on Networking 2023

The Network Slicing (NS) paradigm enables the partition of physical and virtual resources among multiple logical networks, possibly managed by different tenants. In such a scenario, network resources need to be dynamically allocated according to slice requirements. In this paper, we attack the above problem exploiting a Deep Reinforcement Learning approach. Our framework is based on a distributed architecture, where a...
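The snippet below is only a schematic of the underlying idea, learning how to split capacity between two slices from a reward that measures satisfied demand; the demands, the capacity value, and the single-agent simplification are all assumptions, and the paper's distributed multi-agent framework is not reproduced here.

```python
# Illustrative only (the paper's distributed framework is not reproduced):
# one learner picks a discrete bandwidth split between two slices and is
# rewarded for how much of each slice's (assumed) demand it satisfies.
import numpy as np

rng = np.random.default_rng(2)
splits = np.linspace(0.1, 0.9, 9)       # fraction of capacity given to slice A
q = np.zeros(len(splits))
alpha, epsilon, capacity = 0.1, 0.1, 100.0

for step in range(10_000):
    # hypothetical fluctuating per-slice traffic demands
    demand_a, demand_b = rng.uniform(10, 70), rng.uniform(10, 70)
    a = int(q.argmax()) if rng.random() > epsilon else int(rng.integers(len(splits)))
    alloc_a = splits[a] * capacity
    alloc_b = capacity - alloc_a
    # reward: total fraction of demand satisfied across both slices
    r = min(alloc_a / demand_a, 1.0) + min(alloc_b / demand_b, 1.0)
    q[a] += alpha * (r - q[a])

print("Learned share of capacity for slice A:", splits[int(q.argmax())])
```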

1999
Jeff G. Schneider Weng-Keen Wong Andrew W. Moore Martin A. Riedmiller

Many interesting problems, such as power grids, network switches, and traffic flow, that are candidates for solving with reinforcement learning (RL), also have properties that make distributed solutions desirable. We propose an algorithm for distributed reinforcement learning based on distributing the representation of the value function across nodes. Each node in the system only has the ability to...
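In the same spirit (though not the authors' exact algorithm), the sketch below distributes a value function over a chain of nodes: each node stores a single scalar estimate and updates it from an assumed local cost plus its neighbors' current estimates, so no node needs a global view of the system.

```python
# Hedged sketch of distributing a value function across nodes (not the
# authors' exact algorithm): each node keeps only its own scalar value
# estimate and refreshes it from its local cost plus the average of its
# neighbors' estimates, exchanging information only along the chain.
import numpy as np

n_nodes = 5
# chain topology: node i talks only to nodes i-1 and i+1
neighbors = {i: [j for j in (i - 1, i + 1) if 0 <= j < n_nodes]
             for i in range(n_nodes)}
values = np.zeros(n_nodes)
local_cost = np.array([1.0, 0.5, 0.0, 0.5, 1.0])   # assumed per-node cost
alpha, gamma = 0.2, 0.9

for sweep in range(200):
    new_values = values.copy()
    for i in range(n_nodes):
        neigh_avg = np.mean([values[j] for j in neighbors[i]])
        target = local_cost[i] + gamma * neigh_avg   # TD-style local target
        new_values[i] += alpha * (target - new_values[i])
    values = new_values

print("Per-node value estimates:", np.round(values, 2))
```

This echoes the abstract's point that each node acts only on locally available information plus what its neighbors communicate.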

2014
Vivek S. Borkar Adwaitvedant S. Mathkar

Reinforcement learning has gained wide popularity as a technique for simulation-driven approximate dynamic programming. A lesser-known aspect is that the very reasons that make it effective in dynamic programming can also be leveraged for distributed schemes for certain matrix computations involving non-negative matrices. In this spirit, we propose a reinforcement learning algorithm ...
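One way to picture such a scheme, under the assumption that the target computation is the principal (Perron) eigenvector of a non-negative matrix, is an asynchronous, stochastic-approximation flavor of power iteration; the sketch below is illustrative only and does not reproduce the algorithm proposed in the paper.

```python
# Hedged illustration (not the paper's algorithm): an asynchronous,
# stochastic-approximation twist on power iteration for a non-negative
# matrix, refreshing one randomly chosen coordinate of the eigenvector
# estimate per step with a decaying step size.
import numpy as np

rng = np.random.default_rng(3)
A = np.array([[0.9, 0.3, 0.2, 0.0],
              [0.1, 0.6, 0.1, 0.2],
              [0.2, 0.1, 0.5, 0.4],
              [0.2, 0.0, 0.2, 0.6]])     # arbitrary non-negative matrix
x = np.ones(A.shape[0])

for n in range(20_000):
    i = rng.integers(A.shape[0])         # asynchronously update one coordinate
    step = 1.0 / (1.0 + n / 100)
    x[i] += step * (A[i] @ x / np.linalg.norm(x) - x[i])

estimate = x / np.linalg.norm(x)
eigvals, eigvecs = np.linalg.eig(A)
perron = np.abs(eigvecs[:, np.argmax(eigvals.real)].real)
perron /= np.linalg.norm(perron)
print("Stochastic estimate:", np.round(estimate, 3))
print("Reference (eig)    :", np.round(perron, 3))
```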

Journal: CoRR 2015
Arun Nair Praveen Srinivasan Sam Blackwell Cagdas Alcicek Rory Fearon Alessandro De Maria Vedavyas Panneershelvam Mustafa Suleyman Charles Beattie Stig Petersen Shane Legg Volodymyr Mnih Koray Kavukcuoglu David Silver

We present the first massively distributed architecture for deep reinforcement learning. This architecture uses four main components: parallel actors that generate new behaviour; parallel learners that are trained from stored experience; a distributed neural network to represent the value function or behaviour policy; and a distributed store of experience. We used our architecture to implement ...
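The toy script below mimics that four-component layout inside a single Python process: threads stand in for distributed workers, a lock-protected linear Q-function stands in for the distributed network, and a deque stands in for the experience store. It is a hedged sketch of the architecture's shape, with an invented environment, not the system described in the paper.

```python
# Toy, single-process stand-in for the four components named in the abstract
# (parallel actors, parallel learners, a shared value function, a shared
# experience store). Threads simulate distributed workers; the environment
# and all hyperparameters are invented for illustration.
import collections
import random
import threading
import time

import numpy as np

N_FEATURES, N_ACTIONS = 4, 2
params = np.zeros((N_ACTIONS, N_FEATURES))        # shared linear Q-function
param_lock = threading.Lock()
replay = collections.deque(maxlen=10_000)         # shared experience store

def fake_env_step(state, action):
    """Invented environment: reward for matching the action to a feature sign."""
    reward = 1.0 if (state[0] > 0) == (action == 1) else -1.0
    return np.random.randn(N_FEATURES), reward

def actor(steps=2000):
    state = np.random.randn(N_FEATURES)
    for _ in range(steps):
        with param_lock:
            qvals = params @ state
        action = int(qvals.argmax()) if random.random() > 0.1 else random.randrange(N_ACTIONS)
        next_state, reward = fake_env_step(state, action)
        replay.append((state, action, reward))    # actors only generate behaviour
        state = next_state

def learner(steps=2000, lr=0.01):
    done = 0
    while done < steps:
        if not replay:
            time.sleep(0.001)                     # wait for actors to produce data
            continue
        state, action, reward = random.choice(replay)
        with param_lock:
            td_error = reward - params[action] @ state   # one-step target, no bootstrap
            params[action] += lr * td_error * state
        done += 1

workers = [threading.Thread(target=actor) for _ in range(2)] + \
          [threading.Thread(target=learner) for _ in range(2)]
for w in workers:
    w.start()
for w in workers:
    w.join()
print("Learned weights per action:\n", params.round(2))
```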

Journal: International Journal of Advanced Biological and Biomedical Research 2014
Ahmad Ghanbari Yasaman Vaghei Sayyed Mohammad Reza Sayyed Noorani

In recent years, research on reinforcement learning (RL) has focused on bridging the gap between adaptive optimal control and bio-inspired learning techniques. Neural network reinforcement learning (NNRL) is among the most popular algorithms in the RL framework. The use of neural networks enables RL to search for optimal policies more efficiently in several real-life applicat...
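As a minimal picture of a neural network serving as a value-function approximator (illustrative only, not the cited NNRL algorithm), the sketch below trains a one-hidden-layer numpy network with semi-gradient TD(0) on the classic five-state random walk.

```python
# Minimal numpy sketch of a neural value-function approximator (illustrative
# only): a one-hidden-layer network is trained with semi-gradient TD(0) on a
# 5-state random walk that pays 1 for exiting on the right, 0 on the left.
import numpy as np

rng = np.random.default_rng(4)
n_states, n_hidden, gamma, lr = 5, 8, 1.0, 0.05
W1 = rng.normal(scale=0.5, size=(n_hidden, n_states))
w2 = rng.normal(scale=0.5, size=n_hidden)

def value(s):
    x = np.zeros(n_states); x[s] = 1.0            # one-hot state features
    h = np.tanh(W1 @ x)
    return w2 @ h, x, h

for episode in range(3000):
    s = 2                                          # start in the middle state
    while True:
        s_next = s + (1 if rng.random() < 0.5 else -1)
        done = s_next < 0 or s_next >= n_states
        reward = 1.0 if s_next >= n_states else 0.0
        v, x, h = value(s)
        target = reward if done else reward + gamma * value(s_next)[0]
        delta = target - v
        # semi-gradient TD(0): step both layers along the value gradient
        w2 += lr * delta * h
        W1 += lr * delta * np.outer(w2 * (1 - h**2), x)
        if done:
            break
        s = s_next

print("Estimated state values:", [round(float(value(s)[0]), 2) for s in range(n_states)])
```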

Journal: Iranian Journal of Psychiatry and Behavioral Sciences
Ali Akbar Rahmatian, Webster University, Lakeland, Florida, USA

Objective: The purpose of this study was to identify reasons why domestic violence occurs within intimate relationships. Methods: The target group was female victims and male offenders. The offenders group consisted of 25 men from a batterer's intervention group. The victims group was composed of 9 women from a Center Against Spouse Abuse (CASA) intervention group. Results: Domestic violence occurred at ...
