Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Bayesian Uncertainty for Gradient Aggregation in Multi-Task Learning

Idan Achituve, Idit Diamant, Arnon Netzer, Gal Chechik, Ethan Fetaya

2024-02-06 · Bayesian Inference · Multi-Task Learning

Paper · PDF · Code (official)

Abstract

As machine learning becomes more prominent, there is a growing demand to perform several inference tasks in parallel. Running a dedicated model for each task is computationally expensive, so there is great interest in multi-task learning (MTL). MTL aims to learn a single model that solves several tasks efficiently. Optimizing MTL models is often achieved by computing a single gradient per task and aggregating the gradients to obtain a combined update direction. However, these approaches do not consider an important aspect: the sensitivity in the gradient dimensions. Here, we introduce a novel gradient aggregation approach using Bayesian inference. We place a probability distribution over the task-specific parameters, which in turn induces a distribution over the gradients of the tasks. This additional valuable information allows us to quantify the uncertainty in each of the gradient dimensions, which can then be factored in when aggregating them. We empirically demonstrate the benefits of our approach on a variety of datasets, achieving state-of-the-art performance.
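The core idea of uncertainty-aware aggregation can be illustrated with a minimal sketch (this is not the paper's exact BayesAgg-MTL procedure, just a common precision-weighting scheme under assumed inputs): given a per-task gradient mean and variance for every parameter dimension, down-weight the dimensions where a task's gradient is uncertain.

```python
import numpy as np

def precision_weighted_aggregate(grad_means, grad_vars, eps=1e-8):
    """Combine per-task gradients, down-weighting uncertain dimensions.

    grad_means: (T, D) array, mean gradient of each of T tasks over D dims
    grad_vars:  (T, D) array, per-dimension gradient variance of each task

    Illustrative only: weights each task's gradient per dimension by its
    precision (inverse variance), normalized across tasks.
    """
    grad_means = np.asarray(grad_means, dtype=float)
    grad_vars = np.asarray(grad_vars, dtype=float)
    precision = 1.0 / (grad_vars + eps)          # (T, D)
    weights = precision / precision.sum(axis=0)  # normalize over tasks
    return (weights * grad_means).sum(axis=0)    # (D,) combined direction
```

Dimensions where one task is confident (low variance) and another is not are dominated by the confident task, instead of the plain average a standard gradient sum would give.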

Results

Task                | Dataset | Metric  | Value | Model
Transfer Learning   | UTKFace | delta_m | -2.23 | BayesAgg-MTL
Transfer Learning   | QM9     | ∆m%     | 53.7  | BayesAgg-MTL
Multi-Task Learning | UTKFace | delta_m | -2.23 | BayesAgg-MTL
Multi-Task Learning | QM9     | ∆m%     | 53.7  | BayesAgg-MTL

Related Papers

SGCL: Unifying Self-Supervised and Supervised Learning for Graph Recommendation (2025-07-17)
Robust Multi-Task Gradient Boosting (2025-07-15)
A Simple Approximate Bayesian Inference Neural Surrogate for Stochastic Petri Net Models (2025-07-14)
The Bayesian Approach to Continual Learning: An Overview (2025-07-11)
SAMO: A Lightweight Sharpness-Aware Approach for Multi-Task Optimization with Joint Global-Local Perturbation (2025-07-10)
Estimating Interventional Distributions with Uncertain Causal Graphs through Meta-Learning (2025-07-07)
Scalable Bayesian Low-Rank Adaptation of Large Language Models via Stochastic Variational Subspace Inference (2025-06-26)
Opportunistic Osteoporosis Diagnosis via Texture-Preserving Self-Supervision, Mixture of Experts and Multi-Task Integration (2025-06-25)