Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.

Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


An Evolutionary Approach to Dynamic Introduction of Tasks in Large-scale Multitask Learning Systems

Andrea Gesmundo, Jeff Dean

2022-05-25 · Continual Learning · Image Classification · Transfer Learning · Fine-Grained Image Classification

Paper · PDF · Code (official)

Abstract

Multitask learning assumes that models capable of learning from multiple tasks can achieve better quality and efficiency via knowledge transfer, a key feature of human learning. However, state-of-the-art ML models rely on high customization for each task and leverage size and data scale rather than scaling the number of tasks. Moreover, continual learning, which adds a temporal dimension to multitask learning, often focuses on common pitfalls such as catastrophic forgetting rather than being studied at large scale as a critical component of next-generation artificial intelligence. We propose an evolutionary method capable of generating large-scale multitask models that support the dynamic addition of new tasks. The generated multitask models are sparsely activated and integrate task-based routing that guarantees bounded compute cost and fewer added parameters per task as the model expands. The proposed method relies on a knowledge-compartmentalization technique to achieve immunity against catastrophic forgetting and other common pitfalls such as gradient interference and negative transfer. We demonstrate empirically that the proposed method can jointly solve and achieve competitive results on 69 public image classification tasks, for example improving the state of the art on a competitive benchmark such as CIFAR-10 by achieving a 15% relative error reduction compared to the best model trained on public data.
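The abstract's core mechanism, evolving a new task's model by inheriting and mutating a parent task's route through frozen shared components while keeping the active path bounded, can be illustrated with a minimal sketch. All names here (`Component`, `MultitaskModel`, `add_task`, `budget`) are hypothetical illustrations, not the paper's actual µ2Net implementation or API.

```python
# Hedged sketch of task-based routing with bounded per-task compute.
# Each task activates only a small route of components; existing components
# stay frozen, so adding tasks cannot overwrite prior knowledge
# (the "knowledge compartmentalization" idea, simplified).
from dataclasses import dataclass, field


@dataclass
class Component:
    name: str
    frozen: bool = True  # frozen components give immunity to forgetting


@dataclass
class MultitaskModel:
    components: list = field(default_factory=list)
    routes: dict = field(default_factory=dict)  # task -> component indices

    def add_task(self, task, parent_task=None, budget=3):
        """Introduce a new task: inherit (a prefix of) the parent's route,
        then mutate it by appending one fresh trainable component, keeping
        the active path bounded by `budget` components."""
        route = list(self.routes.get(parent_task, []))[: budget - 1]
        fresh = Component(f"{task}/head", frozen=False)
        self.components.append(fresh)
        route.append(len(self.components) - 1)
        self.routes[task] = route  # pre-existing components remain frozen
        return route


model = MultitaskModel()
model.add_task("cifar10")
model.add_task("cifar100", parent_task="cifar10")
```

Because each route is capped at `budget` components and each new task adds exactly one fresh component, per-task compute and added parameters stay bounded no matter how large the overall model grows, which is the property the abstract claims.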

Results

| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Image Classification | KMNIST | Accuracy | 98.68 | µ2Net (ViT-L/16) |
| Image Classification | DTD | Accuracy | 81 | µ2Net (ViT-L/16) |
| Image Classification | CIFAR-10 | Percentage correct | 99.49 | µ2Net (ViT-L/16) |
| Image Classification | EMNIST-Digits | Accuracy (%) | 99.82 | µ2Net (ViT-L/16) |
| Image Classification | CIFAR-100 | Percentage correct | 94.95 | µ2Net (ViT-L/16) |
| Image Classification | MNIST | Accuracy | 99.75 | µ2Net (ViT-L/16) |
| Image Classification | EuroSAT | Accuracy (%) | 99.2 | µ2Net (ViT-L/16) |
| Image Classification | Oxford-IIIT Pets | Accuracy | 95.3 | µ2Net (ViT-L/16) |
| Image Classification | SUN397 | Accuracy | 84.8 | µ2Net (ViT-L/16) |
| Fine-Grained Image Classification | Oxford-IIIT Pets | Accuracy | 95.3 | µ2Net (ViT-L/16) |
| Fine-Grained Image Classification | SUN397 | Accuracy | 84.8 | µ2Net (ViT-L/16) |
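The abstract's "15% relative error reduction" can be made concrete with the CIFAR-10 row: relative error reduction is computed on error rates (1 − accuracy), not on accuracy itself. The `relative_error_reduction` helper below is defined here for illustration, and the 99.40% baseline accuracy is inferred by arithmetic, not a figure stated in the source.

```python
def relative_error_reduction(acc_new, acc_baseline):
    """Relative error reduction (e_base - e_new) / e_base, with error e = 1 - accuracy."""
    e_new, e_base = 1.0 - acc_new, 1.0 - acc_baseline
    return (e_base - e_new) / e_base


# The table reports 99.49% on CIFAR-10, i.e. a 0.51% error rate. A 15% relative
# error reduction then implies the prior best error was about 0.51 / 0.85 ≈ 0.60%
# (accuracy ≈ 99.40%); that baseline figure is inferred, not from the source.
print(relative_error_reduction(0.9949, 0.9940))  # ≈ 0.15
```

Note that a 15% relative error reduction corresponds to only a 0.09-point absolute accuracy gain at this level, which is why relative error is the more informative metric on near-saturated benchmarks.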

Related Papers

- Automatic Classification and Segmentation of Tunnel Cracks Based on Deep Learning and Visual Explanations (2025-07-18)
- RaMen: Multi-Strategy Multi-Modal Learning for Bundle Construction (2025-07-18)
- Adversarial attacks to image classification systems using evolutionary algorithms (2025-07-17)
- Efficient Adaptation of Pre-trained Vision Transformer underpinned by Approximately Orthogonal Fine-Tuning Strategy (2025-07-17)
- Federated Learning for Commercial Image Sources (2025-07-17)
- MUPAX: Multidimensional Problem Agnostic eXplainable AI (2025-07-17)
- Disentangling coincident cell events using deep transfer learning and compressive sensing (2025-07-17)
- RegCL: Continual Adaptation of Segment Anything Model via Model Merging (2025-07-16)