Multi-Agent Deep Reinforcement Learning-Based Interdependent Computing for Mobile Edge Computing-Assisted Robot Teams

Publisher:
Institute of Electrical and Electronics Engineers (IEEE)
Publication Type:
Journal Article
Citation:
IEEE Transactions on Vehicular Technology, 2022, vol. PP, no. 99, pp. 1-12
Issue Date:
2022-01-01
A group of robots can be assigned different roles to collaboratively conduct interdependent tasks. The robots form a multi-robot system (MRS), in which one robot's decision or action relies on the others'. This paper addresses the sequential decision problem of user association and resource allocation in a mobile edge computing (MEC)-enabled, wirelessly connected MRS to maximize the time-averaged completion rate of interdependent computing tasks. The problem is challenging due to the partial observability of the network environment and the stringent delay requirements of interdependent computing tasks. The problem is reformulated as a new decentralized partially observable Markov decision process (Dec-POMDP), in which edge servers act as intelligent agents that make decentralized user-association and resource-management decisions based on their local observations of the network state. By leveraging multi-agent deep deterministic policy gradient (MADDPG) theory, a new cooperative multi-agent deep reinforcement learning (MADRL) model is developed to enable interdependent computing. Simulations show the merits of our approach over existing techniques in terms of task completion rate.
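For readers unfamiliar with MADDPG, the sketch below illustrates the centralized-training, decentralized-execution structure it relies on: each edge-server agent owns an actor that acts on its local observation only, while a per-agent critic conditions on the joint observations and actions of all agents during training. This is a minimal PyTorch sketch of the generic MADDPG network structure, not the paper's implementation; all class names, dimensions, and the number of agents are hypothetical placeholders.

```python
# Minimal sketch of MADDPG's actor/critic structure (not the authors' code).
# Assumption: continuous, bounded actions (e.g., resource-allocation shares);
# all sizes below are illustrative placeholders.
import torch
import torch.nn as nn

class Actor(nn.Module):
    """Decentralized policy: maps one agent's local observation to its action."""
    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, act_dim), nn.Tanh(),  # bounded continuous action
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)

class CentralizedCritic(nn.Module):
    """Q(o_1..o_N, a_1..a_N): during training the critic sees all agents'
    observations and actions, which mitigates the non-stationarity caused
    by partial observability in the multi-agent environment."""
    def __init__(self, n_agents: int, obs_dim: int, act_dim: int, hidden: int = 64):
        super().__init__()
        joint_dim = n_agents * (obs_dim + act_dim)
        self.net = nn.Sequential(
            nn.Linear(joint_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, all_obs: torch.Tensor, all_acts: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([all_obs, all_acts], dim=-1))

# Usage: N edge servers, each with its own actor; one centralized critic per agent.
N, OBS, ACT = 3, 8, 2                                # hypothetical sizes
actors = [Actor(OBS, ACT) for _ in range(N)]
critics = [CentralizedCritic(N, OBS, ACT) for _ in range(N)]

obs = torch.randn(N, OBS)                            # one local observation per agent
acts = torch.stack([actors[i](obs[i]) for i in range(N)])
q_value = critics[0](obs.flatten(), acts.flatten())  # critic takes joint inputs
```

At execution time only the actors are used, each with its own local observation, which matches the Dec-POMDP setting described in the abstract; the centralized critics exist only during training.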