
Monotonic Value Function Factorisation for Deep Multi-Agent Reinforcement Learning

Tabish Rashid, Mikayel Samvelyan, Christian Schroeder de Witt, Gregory Farquhar, Jakob Foerster, Shimon Whiteson

Published: 2020-03-19
Tags: Reinforcement Learning, SMAC+, Multi-agent Reinforcement Learning, Starcraft, SMAC, reinforcement-learning
Links: Paper · PDF · Code (official)

Abstract

In many real-world settings, a team of agents must coordinate its behaviour while acting in a decentralised fashion. At the same time, it is often possible to train the agents in a centralised fashion where global state information is available and communication constraints are lifted. Learning joint action-values conditioned on extra state information is an attractive way to exploit centralised learning, but the best strategy for then extracting decentralised policies is unclear. Our solution is QMIX, a novel value-based method that can train decentralised policies in a centralised end-to-end fashion. QMIX employs a mixing network that estimates joint action-values as a monotonic combination of per-agent values. We structurally enforce that the joint-action value is monotonic in the per-agent values, through the use of non-negative weights in the mixing network, which guarantees consistency between the centralised and decentralised policies. To evaluate the performance of QMIX, we propose the StarCraft Multi-Agent Challenge (SMAC) as a new benchmark for deep multi-agent reinforcement learning. We evaluate QMIX on a challenging set of SMAC scenarios and show that it significantly outperforms existing multi-agent reinforcement learning methods.
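
The structural idea in the abstract can be made concrete in a few lines. Below is a minimal PyTorch sketch, not the authors' released implementation: the class name, embedding width, and hypernetwork layer sizes are illustrative assumptions, though the construction follows the paper's description. State-conditioned hypernetworks generate the mixing weights, and taking their absolute value keeps every weight non-negative, so Q_tot is monotonic in each per-agent value (dQ_tot/dQ_a >= 0) and the decentralised argmax of each agent's Q agrees with the centralised argmax of Q_tot.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class QMixerSketch(nn.Module):
    """Illustrative QMIX-style monotonic mixing network (sizes are assumptions)."""

    def __init__(self, n_agents: int, state_dim: int, embed_dim: int = 32):
        super().__init__()
        self.n_agents = n_agents
        self.embed_dim = embed_dim
        # Hypernetworks: map the global state to the mixing network's parameters.
        self.hyper_w1 = nn.Linear(state_dim, n_agents * embed_dim)
        self.hyper_b1 = nn.Linear(state_dim, embed_dim)
        self.hyper_w2 = nn.Linear(state_dim, embed_dim)
        self.hyper_b2 = nn.Sequential(
            nn.Linear(state_dim, embed_dim), nn.ReLU(),
            nn.Linear(embed_dim, 1),
        )

    def forward(self, agent_qs: torch.Tensor, state: torch.Tensor) -> torch.Tensor:
        # agent_qs: (batch, n_agents) per-agent values; state: (batch, state_dim).
        bs = agent_qs.size(0)
        agent_qs = agent_qs.view(bs, 1, self.n_agents)
        # abs() enforces non-negative mixing weights, which is what makes
        # Q_tot monotonic in every per-agent value. Biases are unconstrained.
        w1 = torch.abs(self.hyper_w1(state)).view(bs, self.n_agents, self.embed_dim)
        b1 = self.hyper_b1(state).view(bs, 1, self.embed_dim)
        hidden = F.elu(torch.bmm(agent_qs, w1) + b1)
        w2 = torch.abs(self.hyper_w2(state)).view(bs, self.embed_dim, 1)
        b2 = self.hyper_b2(state).view(bs, 1, 1)
        return (torch.bmm(hidden, w2) + b2).view(bs, 1)

# Usage with made-up dimensions: 8 agents, 120-dim global state, batch of 4.
mixer = QMixerSketch(n_agents=8, state_dim=120)
q_tot = mixer(torch.randn(4, 8), torch.randn(4, 120))  # shape: (4, 1)
```

Because the weights (but not the biases) are constrained, the mixer can still represent rich nonlinear combinations of the per-agent values while preserving the centralised/decentralised consistency the abstract describes.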

Results

| Task                               | Dataset            | Metric              | Value | Model |
| Multi-agent Reinforcement Learning | SMAC 3s5z_vs_3s6z  | Median Win Rate (%) | 2     | QMIX  |
| Multi-agent Reinforcement Learning | SMAC corridor      | Median Win Rate (%) | 1     | QMIX  |
| Multi-agent Reinforcement Learning | SMAC MMM2          | Median Win Rate (%) | 69    | QMIX  |
| Multi-agent Reinforcement Learning | SMAC 6h_vs_8z      | Median Win Rate (%) | 3     | QMIX  |
| Multi-agent Reinforcement Learning | SMAC 27m_vs_30m    | Median Win Rate (%) | 49    | QMIX  |
| SMAC                               | SMAC 3s5z_vs_3s6z  | Median Win Rate (%) | 2     | QMIX  |
| SMAC                               | SMAC corridor      | Median Win Rate (%) | 1     | QMIX  |
| SMAC                               | SMAC MMM2          | Median Win Rate (%) | 69    | QMIX  |
| SMAC                               | SMAC 6h_vs_8z      | Median Win Rate (%) | 3     | QMIX  |
| SMAC                               | SMAC 27m_vs_30m    | Median Win Rate (%) | 49    | QMIX  |

Related Papers

- One Step is Enough: Multi-Agent Reinforcement Learning based on One-Step Policy Optimization for Order Dispatch on Ride-Sharing Platforms (2025-07-21)
- CUDA-L1: Improving CUDA Optimization via Contrastive Reinforcement Learning (2025-07-18)
- VisionThink: Smart and Efficient Vision Language Model via Reinforcement Learning (2025-07-17)
- Spectral Bellman Method: Unifying Representation and Exploration in RL (2025-07-17)
- Aligning Humans and Robots via Reinforcement Learning from Implicit Human Feedback (2025-07-17)
- VAR-MATH: Probing True Mathematical Reasoning in Large Language Models via Symbolic Multi-Instance Benchmarks (2025-07-17)
- QuestA: Expanding Reasoning Capacity in LLMs via Question Augmentation (2025-07-17)
- Inverse Reinforcement Learning Meets Large Language Model Post-Training: Basics, Advances, and Opportunities (2025-07-17)