Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.

Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.

The StarCraft Multi-Agent Challenges+: Learning of Multi-Stage Tasks and Environmental Factors without Precise Reward Functions

Mingyu Kim, Jihwan Oh, Yongsik Lee, Joonkee Kim, SeongHwan Kim, Song Chong, Se-Young Yun

2022-07-05 · SMAC+ · Multi-agent Reinforcement Learning
Paper · PDF · Code (official)

Abstract

In this paper, we propose a novel benchmark called the StarCraft Multi-Agent Challenges+ (SMAC+), where agents learn to perform multi-stage tasks and to use environmental factors without precise reward functions. The previous challenge (SMAC), recognized as a standard benchmark for Multi-Agent Reinforcement Learning (MARL), is mainly concerned with ensuring that all agents cooperatively eliminate approaching adversaries through fine manipulation with obvious reward functions. This challenge, in contrast, is interested in the exploration capability of MARL algorithms: efficiently learning implicit multi-stage tasks and environmental factors as well as micro-control. The study covers both offensive and defensive scenarios. In the offensive scenarios, agents must learn to first find opponents and then eliminate them. The defensive scenarios require agents to use topographic features; for example, agents need to position themselves behind protective structures to make it harder for enemies to attack. We investigate MARL algorithms under SMAC+ and observe that recent approaches work well in settings similar to the previous challenge but misbehave in offensive scenarios. Additionally, we observe that an enhanced exploration approach has a positive effect on performance but is not able to completely solve all scenarios. This study proposes new directions for future research.
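
The baseline reported in the results below is IQL (Independent Q-Learning), in which each agent runs its own Q-learner and treats its teammates as part of the environment. A minimal tabular sketch of that idea is shown here for illustration only; the benchmark's actual agents are deep networks operating on StarCraft II observations, and the class and parameter names are assumptions, not the paper's code:

```python
import random
from collections import defaultdict

class IQLAgent:
    """One independent Q-learner; teammates are treated as part of the environment."""

    def __init__(self, n_actions, alpha=0.1, gamma=0.99, epsilon=0.1):
        # Q-table over local observations only; no joint state, no shared values.
        self.q = defaultdict(lambda: [0.0] * n_actions)
        self.n_actions = n_actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def act(self, obs):
        # Epsilon-greedy over this agent's own Q-values.
        if random.random() < self.epsilon:
            return random.randrange(self.n_actions)
        qs = self.q[obs]
        return qs.index(max(qs))

    def update(self, obs, action, reward, next_obs, done):
        # Standard one-step Q-learning target, computed from local information only.
        target = reward
        if not done:
            target += self.gamma * max(self.q[next_obs])
        self.q[obs][action] += self.alpha * (target - self.q[obs][action])
```

Because each learner ignores its teammates' policies, the environment looks non-stationary from its perspective; the abstract's observation that such approaches "misbehave" on the offensive (exploration-heavy) scenarios is consistent with this limitation.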

Results

| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Multi-agent Reinforcement Learning | Def_Infantry_parallel | Median Win Rate (%) | 40 | IQL |
| Multi-agent Reinforcement Learning | Def_Armored_sequential | Median Win Rate (%) | 9.4 | IQL |
| Multi-agent Reinforcement Learning | Def_Infantry_sequential | Median Win Rate (%) | 93.8 | IQL |
| SMAC | Def_Infantry_parallel | Median Win Rate (%) | 40 | IQL |
| SMAC | Def_Armored_sequential | Median Win Rate (%) | 9.4 | IQL |
| SMAC | Def_Infantry_sequential | Median Win Rate (%) | 93.8 | IQL |
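
The metric in the table is a median win rate across training runs. The exact evaluation protocol is not given on this page, but a common aggregation is to compute each run's test win rate and report the median across runs (seeds); a sketch under that assumption:

```python
from statistics import median

def median_win_rate(runs):
    """Aggregate per-run evaluation outcomes into a median win rate.

    `runs` is a list of runs (e.g. one per training seed); each run is a
    list of per-episode outcomes, True for a win. Returns the median
    across runs of the per-run win rate, in percent.
    """
    per_run = [100.0 * sum(episodes) / len(episodes) for episodes in runs]
    return median(per_run)
```

The median (rather than the mean) is less sensitive to outlier seeds, which matters when some runs fail to learn at all, as the abstract reports for the offensive scenarios.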

Related Papers

- One Step is Enough: Multi-Agent Reinforcement Learning based on One-Step Policy Optimization for Order Dispatch on Ride-Sharing Platforms (2025-07-21)
- A Learning Framework For Cooperative Collision Avoidance of UAV Swarms Leveraging Domain Knowledge (2025-07-15)
- Artificial Generals Intelligence: Mastering Generals.io with Reinforcement Learning (2025-07-09)
- SPIRAL: Self-Play on Zero-Sum Games Incentivizes Reasoning via Multi-Agent Multi-Turn Reinforcement Learning (2025-06-30)
- The Decrypto Benchmark for Multi-Agent Reasoning and Theory of Mind (2025-06-25)
- Learning Bilateral Team Formation in Cooperative Multi-Agent Reinforcement Learning (2025-06-24)
- Center of Gravity-Guided Focusing Influence Mechanism for Multi-Agent Reinforcement Learning (2025-06-24)
- Transformer World Model for Sample Efficient Multi-Agent Reinforcement Learning (2025-06-23)