Tabish Rashid, Mikayel Samvelyan, Christian Schroeder de Witt, Gregory Farquhar, Jakob Foerster, Shimon Whiteson
In many real-world settings, a team of agents must coordinate their behaviour while acting in a decentralised way. At the same time, it is often possible to train the agents in a centralised fashion in a simulated or laboratory setting, where global state information is available and communication constraints are lifted. Learning joint action-values conditioned on extra state information is an attractive way to exploit centralised learning, but the best strategy for then extracting decentralised policies is unclear. Our solution is QMIX, a novel value-based method that can train decentralised policies in a centralised end-to-end fashion. QMIX employs a network that estimates joint action-values as a complex non-linear combination of per-agent values that condition only on local observations. We structurally enforce that the joint-action value is monotonic in the per-agent values, which allows tractable maximisation of the joint action-value in off-policy learning, and guarantees consistency between the centralised and decentralised policies. We evaluate QMIX on a challenging set of StarCraft II micromanagement tasks, and show that QMIX significantly outperforms existing value-based multi-agent reinforcement learning methods.
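The monotonicity constraint described above can be illustrated with a minimal NumPy sketch (a toy stand-in for QMIX's hypernetwork-based mixing network, not the paper's implementation): if the joint value is a non-negative-weighted combination of per-agent values, then each agent greedily maximising its own value also maximises the joint action-value, which is what makes decentralised execution consistent with centralised training.

```python
import numpy as np

def monotonic_mix(agent_qs, w, b):
    # Taking the absolute value of the weights enforces
    # dQ_tot / dQ_a >= 0 for every agent a (monotonicity).
    return np.abs(w) @ agent_qs + b

rng = np.random.default_rng(0)
n_agents, n_actions = 3, 4
qs = rng.normal(size=(n_agents, n_actions))  # per-agent action-values
w = rng.normal(size=n_agents)                # illustrative mixing weights
b = 0.5                                      # illustrative bias

# Decentralised greedy actions: each agent maximises its own Q.
greedy_qs = qs.max(axis=1)

# Exhaustive search over all joint actions confirms that the
# per-agent argmaxes also maximise the mixed joint value.
joint_max = max(
    monotonic_mix(qs[np.arange(n_agents), list(a)], w, b)
    for a in np.ndindex(*(n_actions,) * n_agents)
)
assert np.isclose(monotonic_mix(greedy_qs, w, b), joint_max)
```

In the full method the mixing weights are produced by hypernetworks conditioned on the global state; the sketch only demonstrates why the non-negativity constraint makes the joint maximisation decompose per agent.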
| Task | SMAC Scenario | Metric | Value | Model |
|---|---|---|---|---|
| Multi-agent Reinforcement Learning | Off_Near_sequential | Median Win Rate (%) | 90.6 | QMIX |
| Multi-agent Reinforcement Learning | Off_Complicated_sequential | Median Win Rate (%) | 87.5 | QMIX |
| Multi-agent Reinforcement Learning | Off_Near_parallel | Median Win Rate (%) | 95 | QMIX |
| Multi-agent Reinforcement Learning | Def_Armored_parallel | Median Win Rate (%) | 75 | QMIX |
| Multi-agent Reinforcement Learning | Def_Infantry_parallel | Median Win Rate (%) | 95 | QMIX |
| Multi-agent Reinforcement Learning | Off_Distant_sequential | Median Win Rate (%) | 93.8 | QMIX |
| Multi-agent Reinforcement Learning | Def_Outnumbered_parallel | Median Win Rate (%) | 30 | QMIX |
| Multi-agent Reinforcement Learning | Def_Infantry_sequential | Median Win Rate (%) | 96.9 | QMIX |
| Multi-agent Reinforcement Learning | Off_Hard_sequential | Median Win Rate (%) | 96.9 | QMIX |