Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Multi-Task Learning as a Bargaining Game

Aviv Navon, Aviv Shamsian, Idan Achituve, Haggai Maron, Kenji Kawaguchi, Gal Chechik, Ethan Fetaya

2022-02-02 · Multi-Task Learning
Paper · PDF · Code (official)

Abstract

In multi-task learning (MTL), a joint model is trained to simultaneously make predictions for several tasks. Joint training reduces computation costs and improves data efficiency; however, since the gradients of the different tasks may conflict, training a joint model for MTL often yields lower performance than the corresponding single-task models. A common method for alleviating this issue is to combine the per-task gradients into a joint update direction using a particular heuristic. In this paper, we propose viewing the gradient combination step as a bargaining game, where tasks negotiate to reach an agreement on a joint direction of parameter update. Under certain assumptions, the bargaining problem has a unique solution, known as the Nash Bargaining Solution, which we propose to use as a principled approach to multi-task learning. We describe a new MTL optimization procedure, Nash-MTL, and derive theoretical guarantees for its convergence. Empirically, we show that Nash-MTL achieves state-of-the-art results on multiple MTL benchmarks in various domains.
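To make the bargaining view concrete: Nash-MTL seeks per-task weights α > 0 such that each task's gradient g_i satisfies g_iᵀΔ = 1/α_i, where Δ = Σ_j α_j g_j is the joint update direction. Below is a toy sketch of that fixed-point condition using a damped iteration on the Gram matrix M = GGᵀ. This is not the paper's actual solver (the authors use a sequence of convex approximations), and it only behaves well when the toy gradients do not strongly conflict; all variable names here are illustrative.

```python
import numpy as np

def nash_mtl_direction(G, iters=2000, damping=0.5):
    """Toy solver for the Nash bargaining weights.

    G: (k, d) array of per-task gradients g_1..g_k.
    Seeks alpha > 0 with (M @ alpha)_i = 1 / alpha_i, where
    M = G G^T, so that each task i gets g_i^T Delta = 1/alpha_i
    for the joint direction Delta = G^T alpha.

    Solved here by a damped fixed-point iteration; the paper
    instead uses a principled sequential optimization scheme.
    """
    M = G @ G.T
    alpha = np.ones(G.shape[0])
    for _ in range(iters):
        # Damped update toward alpha = 1 / (M alpha), elementwise.
        alpha = (1 - damping) * alpha + damping / (M @ alpha)
    return alpha, G.T @ alpha  # task weights and joint update direction

# Two mildly aligned toy task gradients (hypothetical data).
G = np.array([[1.0, 0.0],
              [0.5, 1.0]])
alpha, delta = nash_mtl_direction(G)
```

At the fixed point, tasks with smaller gradient agreement receive larger weights, since α_i is inversely proportional to how much the joint direction Δ already helps task i.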

Results

Task | Dataset | Metric | Value | Model
Transfer Learning | NYUv2 | Mean IoU | 40.13 | Nash-MTL
Transfer Learning | Cityscapes test | mIoU | 75.41 | Nash-MTL
Transfer Learning | QM9 | ∆m% | 62 | Nash-MTL
Multi-Task Learning | NYUv2 | Mean IoU | 40.13 | Nash-MTL
Multi-Task Learning | Cityscapes test | mIoU | 75.41 | Nash-MTL
Multi-Task Learning | QM9 | ∆m% | 62 | Nash-MTL

Related Papers

SGCL: Unifying Self-Supervised and Supervised Learning for Graph Recommendation (2025-07-17)
Robust-Multi-Task Gradient Boosting (2025-07-15)
SAMO: A Lightweight Sharpness-Aware Approach for Multi-Task Optimization with Joint Global-Local Perturbation (2025-07-10)
Opportunistic Osteoporosis Diagnosis via Texture-Preserving Self-Supervision, Mixture of Experts and Multi-Task Integration (2025-06-25)
AnchorDP3: 3D Affordance Guided Sparse Diffusion Policy for Robotic Manipulation (2025-06-24)
An Audio-centric Multi-task Learning Framework for Streaming Ads Targeting on Spotify (2025-06-23)
SonicVerse: Multi-Task Learning for Music Feature-Informed Captioning (2025-06-18)
Leader360V: The Large-scale, Real-world 360 Video Dataset for Multi-task Learning in Diverse Environment (2025-06-17)