Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


QMIX: Monotonic Value Function Factorisation for Deep Multi-Agent Reinforcement Learning

Tabish Rashid, Mikayel Samvelyan, Christian Schroeder de Witt, Gregory Farquhar, Jakob Foerster, Shimon Whiteson

Published: 2018-03-30 · ICML 2018
Tasks: Reinforcement Learning, Multi-agent Reinforcement Learning
Datasets: SMAC+, StarCraft II
Tags: starcraft, reinforcement-learning
Links: Paper · PDF · Code (official implementation, plus community implementations)

Abstract

In many real-world settings, a team of agents must coordinate their behaviour while acting in a decentralised way. At the same time, it is often possible to train the agents in a centralised fashion in a simulated or laboratory setting, where global state information is available and communication constraints are lifted. Learning joint action-values conditioned on extra state information is an attractive way to exploit centralised learning, but the best strategy for then extracting decentralised policies is unclear. Our solution is QMIX, a novel value-based method that can train decentralised policies in a centralised end-to-end fashion. QMIX employs a network that estimates joint action-values as a complex non-linear combination of per-agent values that condition only on local observations. We structurally enforce that the joint-action value is monotonic in the per-agent values, which allows tractable maximisation of the joint action-value in off-policy learning, and guarantees consistency between the centralised and decentralised policies. We evaluate QMIX on a challenging set of StarCraft II micromanagement tasks, and show that QMIX significantly outperforms existing value-based multi-agent reinforcement learning methods.
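The structural constraint described above — that the joint action-value must be monotonic in each per-agent value — is enforced in QMIX by generating the mixing network's weights from the global state via hypernetworks and passing them through an absolute-value function, so every mixing weight is non-negative. The following is a minimal dependency-free sketch of that idea; the dimensions, initialisation, and ReLU activation (the paper uses ELU) are illustrative assumptions, not the paper's actual architecture or hyperparameters:

```python
import random

random.seed(0)

# Hypothetical dimensions chosen for illustration only.
n_agents, state_dim, embed_dim = 3, 4, 5

def rand_mat(rows, cols):
    # Random hypernetwork parameters (stand-ins for learned weights).
    return [[random.gauss(0, 0.1) for _ in range(cols)] for _ in range(rows)]

def matvec(m, v):
    # Computes v @ m for a matrix with len(v) rows.
    return [sum(v[i] * m[i][j] for i in range(len(v))) for j in range(len(m[0]))]

# Hypernetworks: map the global state to the mixing network's parameters.
W1_hyper = rand_mat(state_dim, n_agents * embed_dim)
b1_hyper = rand_mat(state_dim, embed_dim)
W2_hyper = rand_mat(state_dim, embed_dim)

def qmix_forward(agent_qs, state):
    """Mix per-agent Q-values into Q_tot, monotonic in each agent's Q."""
    flat = matvec(W1_hyper, state)
    # abs() forces non-negative mixing weights, which (with a monotone
    # activation) guarantees dQ_tot/dQ_a >= 0 for every agent a.
    w1 = [[abs(flat[a * embed_dim + j]) for j in range(embed_dim)]
          for a in range(n_agents)]
    b1 = matvec(b1_hyper, state)
    hidden = [max(0.0, sum(agent_qs[a] * w1[a][j] for a in range(n_agents)) + b1[j])
              for j in range(embed_dim)]  # ReLU here; the paper uses ELU
    w2 = [abs(x) for x in matvec(W2_hyper, state)]
    return sum(h * w for h, w in zip(hidden, w2))

state = [random.gauss(0, 1) for _ in range(state_dim)]
qs = [random.gauss(0, 1) for _ in range(n_agents)]

# Monotonicity check: raising any single agent's Q never lowers Q_tot,
# so per-agent argmax actions jointly maximise Q_tot.
base = qmix_forward(qs, state)
for a in range(n_agents):
    bumped = list(qs)
    bumped[a] += 1.0
    assert qmix_forward(bumped, state) >= base
```

Because of this monotonicity, each agent greedily maximising its own Q-value yields the joint action that maximises Q_tot, which is what makes decentralised execution consistent with the centralised critic.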

Results

Task                               | Dataset                    | Metric          | Value | Model
-----------------------------------|----------------------------|-----------------|-------|------
Multi-agent Reinforcement Learning | Off_Near_sequential        | Median Win Rate | 90.6  | QMIX
Multi-agent Reinforcement Learning | Off_Complicated_sequential | Median Win Rate | 87.5  | QMIX
Multi-agent Reinforcement Learning | Off_Near_parallel          | Median Win Rate | 95    | QMIX
Multi-agent Reinforcement Learning | Def_Armored_parallel       | Median Win Rate | 75    | QMIX
Multi-agent Reinforcement Learning | Def_Infantry_parallel      | Median Win Rate | 95    | QMIX
Multi-agent Reinforcement Learning | Off_Distant_sequential     | Median Win Rate | 93.8  | QMIX
Multi-agent Reinforcement Learning | Def_Outnumbered_parallel   | Median Win Rate | 30    | QMIX
Multi-agent Reinforcement Learning | Def_Infantry_sequential    | Median Win Rate | 96.9  | QMIX
Multi-agent Reinforcement Learning | Off_Hard_sequential        | Median Win Rate | 96.9  | QMIX
SMAC                               | Off_Near_sequential        | Median Win Rate | 90.6  | QMIX
SMAC                               | Off_Complicated_sequential | Median Win Rate | 87.5  | QMIX
SMAC                               | Off_Near_parallel          | Median Win Rate | 95    | QMIX
SMAC                               | Def_Armored_parallel       | Median Win Rate | 75    | QMIX
SMAC                               | Def_Infantry_parallel      | Median Win Rate | 95    | QMIX
SMAC                               | Off_Distant_sequential     | Median Win Rate | 93.8  | QMIX
SMAC                               | Def_Outnumbered_parallel   | Median Win Rate | 30    | QMIX
SMAC                               | Def_Infantry_sequential    | Median Win Rate | 96.9  | QMIX
SMAC                               | Off_Hard_sequential        | Median Win Rate | 96.9  | QMIX

Related Papers

One Step is Enough: Multi-Agent Reinforcement Learning based on One-Step Policy Optimization for Order Dispatch on Ride-Sharing Platforms (2025-07-21)
CUDA-L1: Improving CUDA Optimization via Contrastive Reinforcement Learning (2025-07-18)
VisionThink: Smart and Efficient Vision Language Model via Reinforcement Learning (2025-07-17)
Spectral Bellman Method: Unifying Representation and Exploration in RL (2025-07-17)
Aligning Humans and Robots via Reinforcement Learning from Implicit Human Feedback (2025-07-17)
VAR-MATH: Probing True Mathematical Reasoning in Large Language Models via Symbolic Multi-Instance Benchmarks (2025-07-17)
QuestA: Expanding Reasoning Capacity in LLMs via Question Augmentation (2025-07-17)
Inverse Reinforcement Learning Meets Large Language Model Post-Training: Basics, Advances, and Opportunities (2025-07-17)