Towards a unified multi-agent reinforcement learning framework

Publication Type:
Thesis
Issue Date:
2024
The field of Multi-Agent Reinforcement Learning (MARL) has evolved rapidly, yet integrating diverse tasks and algorithms into a cohesive system remains a complex challenge. This thesis proposes a unified framework aimed at improving adaptability, scalability, and cooperative dynamics among agents across a variety of tasks and environments. Our research is structured around three primary contributions: a flexible policy architecture, an analytical quantification of agent roles, and an integrative library for MARL.

First, we introduce a novel architectural model that accommodates varying task configurations through a transformer-based approach that decouples policy decisions from input observations. This decoupling improves transfer across tasks and accelerates training, yielding substantial improvements in diverse MARL applications.

Second, we examine the concept of Role Diversity, which quantifies behavioral differences among agents and uses them to optimize policy performance. The analysis shows how understanding these differences can guide key MARL design choices such as parameter sharing, communication mechanisms, and credit assignment, thereby improving overall system efficiency and adaptability.

Finally, we develop a comprehensive MARL library that standardizes environment and algorithm integration, enabling flexible agent-to-policy mapping and streamlined development of multi-agent systems. The library reduces the complexity of deploying diverse learning algorithms and managing multiple tasks, promoting a more systematic approach to MARL.

Together, these contributions offer novel insights and methodologies that advance the unified and efficient implementation of MARL.
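To make the decoupling idea concrete, the following is a minimal PyTorch sketch, not the thesis's actual architecture: each observed entity becomes a token, a transformer encoder processes the token set, and an action head reads out decisions, so the same weights apply to tasks with different entity counts. All names here (EntityTransformerPolicy, entity_dim, n_actions) are illustrative assumptions.

```python
import torch
import torch.nn as nn

class EntityTransformerPolicy(nn.Module):
    """Sketch: a policy that decouples action selection from the observation
    layout by encoding each observed entity as a token. Because the encoder
    is length-agnostic, the same weights can be reused when the number of
    entities in the task changes."""

    def __init__(self, entity_dim: int, d_model: int = 64,
                 n_heads: int = 4, n_layers: int = 2, n_actions: int = 5):
        super().__init__()
        self.embed = nn.Linear(entity_dim, d_model)  # per-entity token embedding
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.action_head = nn.Linear(d_model, n_actions)

    def forward(self, entities: torch.Tensor) -> torch.Tensor:
        # entities: (batch, n_entities, entity_dim); n_entities may vary per task
        tokens = self.encoder(self.embed(entities))
        pooled = tokens.mean(dim=1)      # aggregate over entities
        return self.action_head(pooled)  # action logits

# The same policy runs on tasks with different entity counts:
policy = EntityTransformerPolicy(entity_dim=8)
logits_small = policy(torch.randn(32, 3, 8))   # 3 entities per observation
logits_large = policy(torch.randn(32, 10, 8))  # 10 entities, same weights
```

Because the encoder operates on variable-length token sets, a policy trained on a small task can be fine-tuned on a larger one without re-architecting the network.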
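Role Diversity admits several concrete measures, and the thesis develops its own; the function below is therefore only one plausible, policy-based instantiation: the mean pairwise total-variation distance between agents' action distributions over sampled states. The function name and array layout are assumptions made for illustration.

```python
import numpy as np

def role_diversity(action_probs: np.ndarray) -> float:
    """Illustrative role-diversity score: mean pairwise total-variation
    distance between agents' action distributions, averaged over states.

    action_probs: (n_agents, n_states, n_actions), rows summing to 1.
    Returns a value in [0, 1]; 0 means all agents act identically.
    """
    n_agents = action_probs.shape[0]
    dists = []
    for i in range(n_agents):
        for j in range(i + 1, n_agents):
            # total-variation distance per state, then averaged over states
            tv = 0.5 * np.abs(action_probs[i] - action_probs[j]).sum(axis=-1)
            dists.append(tv.mean())
    return float(np.mean(dists))

# Two agents with similar roles and one with a distinct role:
probs = np.array([
    [[0.90, 0.10], [0.80, 0.20]],  # agent 0
    [[0.88, 0.12], [0.82, 0.18]],  # agent 1 (similar role)
    [[0.10, 0.90], [0.20, 0.80]],  # agent 2 (distinct role)
])
print(role_diversity(probs))  # noticeably above zero
```

A diagnostic built on such a score could, for example, relax full parameter sharing once measured diversity crosses a threshold, in the spirit of the analysis the abstract describes.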
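Finally, a hedged sketch of what a unified environment-and-algorithm interface can look like. None of the names below (ALGO_REGISTRY, register_algo, run_mappo) are taken from the thesis's actual library; they only illustrate the design principle of registering algorithms against a common interface so that the agent-to-policy mapping becomes an independent, configurable axis.

```python
# Hypothetical unified-interface sketch; all names are illustrative,
# not the thesis library's real API.
from typing import Callable, Dict

ALGO_REGISTRY: Dict[str, Callable] = {}

def register_algo(name: str):
    """Decorator: algorithms register under a string key, so any
    registered algorithm can be combined with any environment."""
    def wrap(fn):
        ALGO_REGISTRY[name] = fn
        return fn
    return wrap

@register_algo("ippo")
def run_ippo(env_name: str, policy_mapping: Callable[[str], str]):
    print(f"training IPPO on {env_name}, agent_0 -> {policy_mapping('agent_0')}")

@register_algo("mappo")
def run_mappo(env_name: str, policy_mapping: Callable[[str], str]):
    print(f"training MAPPO on {env_name}, agent_0 -> {policy_mapping('agent_0')}")

# Environment, algorithm, and agent-to-policy mapping (e.g. fully shared
# vs. per-agent policies) are chosen independently of one another:
ALGO_REGISTRY["mappo"]("cooperative_navigation", lambda agent_id: "shared")
ALGO_REGISTRY["ippo"]("cooperative_navigation", lambda agent_id: agent_id)
```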