Fully-Decentralized MADDPG with Networked Agents
Diego Bolliger, Lorenz Zauter, Robert Ziegler
2025-03-09
Multi-agent Reinforcement Learning
Abstract
In this paper, we devise three actor-critic algorithms with decentralized training for multi-agent reinforcement learning in cooperative, adversarial, and mixed settings with continuous action spaces. To this end, we adapt the MADDPG algorithm by applying a networked communication approach between agents. We introduce surrogate policies to decentralize the training while allowing for local communication during training. In empirical tests, the decentralized algorithms achieve results comparable to the original MADDPG while reducing computational cost; the savings become more pronounced as the number of agents grows.
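The networked communication approach described above can be illustrated with a minimal consensus-averaging sketch: agents connected by a communication graph repeatedly mix their local parameter estimates with those of their neighbors, so information spreads without any central coordinator. This is a generic illustration of networked consensus, not the paper's actual algorithm; all names (`mix_step`, the ring topology) are illustrative assumptions.

```python
import numpy as np

def mix_step(params, adjacency):
    """One gossip/averaging step: each agent's parameters move toward the
    mean of its neighbors' parameters (row-normalized mixing weights)."""
    weights = adjacency / adjacency.sum(axis=1, keepdims=True)
    return weights @ params

# 4 agents on a ring graph with self-loops (illustrative topology);
# scalar "parameters" stand in for policy/critic weights.
adjacency = np.array([
    [1, 1, 0, 1],
    [1, 1, 1, 0],
    [0, 1, 1, 1],
    [1, 0, 1, 1],
], dtype=float)

params = np.array([0.0, 1.0, 2.0, 3.0])
for _ in range(50):
    params = mix_step(params, adjacency)

print(np.round(params, 3))  # all agents converge to the mean, 1.5
```

Because the graph is connected and the mixing matrix here is doubly stochastic, repeated mixing drives all agents to the average of their initial values; in a full decentralized actor-critic, such mixing steps would be interleaved with local gradient updates.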