Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


DA-Net: A Disentangled and Adaptive Network for Multi-Source Cross-Lingual Transfer Learning

Ling Ge, Chunming Hu, Guanghui Ma, Jihong Liu, Hong Zhang

2024-03-07 · Retinal Vessel Segmentation · Disentanglement · Cross-Lingual Transfer · Transfer Learning

Paper · PDF

Abstract

Multi-source cross-lingual transfer learning deals with the transfer of task knowledge from multiple labelled source languages to an unlabelled target language under language shift. Existing methods typically focus on weighting the predictions produced by the language-specific classifiers of different sources, which follow a shared encoder. However, because all source languages share and jointly update the same encoder, the extracted representations inevitably mix information from different source languages, which can disturb the learning of the language-specific classifiers. Additionally, due to the language gap, language-specific classifiers trained with source labels are unable to make accurate predictions for the target language. Both facts impair the model's performance. To address these challenges, we propose a Disentangled and Adaptive Network (DA-Net). Firstly, we devise a feedback-guided collaborative disentanglement method that seeks to purify the input representations of the classifiers, thereby mitigating mutual interference between sources. Secondly, we propose a class-aware parallel adaptation method that aligns class-level distributions for each source-target language pair, thereby alleviating the language gap within each pair. Experimental results on three different tasks involving 38 languages validate the effectiveness of our approach.
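The class-aware parallel adaptation idea can be illustrated with a minimal sketch that is not the authors' code: for one source-target language pair, compute a per-class mean feature vector ("prototype") on each side and penalize the distance between matching prototypes. All function names, the use of pseudo-labels for the target, and the squared-distance loss are assumptions for illustration.

```python
# Hedged sketch of class-level distribution alignment for one
# source-target pair. Not the paper's implementation: prototype-based
# alignment with target pseudo-labels is an assumed simplification.
from collections import defaultdict


def class_prototypes(features, labels):
    """Mean feature vector per class label (features are lists of floats)."""
    sums = {}
    counts = defaultdict(int)
    for f, y in zip(features, labels):
        if y not in sums:
            sums[y] = list(f)
        else:
            sums[y] = [a + b for a, b in zip(sums[y], f)]
        counts[y] += 1
    return {y: [v / counts[y] for v in s] for y, s in sums.items()}


def class_alignment_loss(src_feats, src_labels, tgt_feats, tgt_pseudo_labels):
    """Sum of squared prototype distances over classes seen in both languages."""
    src_proto = class_prototypes(src_feats, src_labels)
    tgt_proto = class_prototypes(tgt_feats, tgt_pseudo_labels)
    loss = 0.0
    for y in src_proto.keys() & tgt_proto.keys():
        loss += sum((a - b) ** 2 for a, b in zip(src_proto[y], tgt_proto[y]))
    return loss
```

When the per-class means of source and target features coincide, the loss is zero; minimizing it pulls the class-conditional feature distributions of each pair together, which is the class-level (rather than marginal) alignment the abstract describes.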

Results

Task                         Dataset  Metric       Value   Model
Medical Image Segmentation   DRIVE    AUC          0.9846  DA-Net
Medical Image Segmentation   DRIVE    Accuracy     0.8082  DA-Net
Medical Image Segmentation   DRIVE    F1 score     0.8193  DA-Net
Medical Image Segmentation   DRIVE    Specificity  0.9803  DA-Net
Medical Image Segmentation   DRIVE    Sensitivity  0.8307  DA-Net
Retinal Vessel Segmentation  DRIVE    AUC          0.9846  DA-Net
Retinal Vessel Segmentation  DRIVE    Accuracy     0.8082  DA-Net
Retinal Vessel Segmentation  DRIVE    F1 score     0.8193  DA-Net
Retinal Vessel Segmentation  DRIVE    Specificity  0.9803  DA-Net
Retinal Vessel Segmentation  DRIVE    Sensitivity  0.8307  DA-Net

Related Papers

CSD-VAR: Content-Style Decomposition in Visual Autoregressive Models (2025-07-18)
RaMen: Multi-Strategy Multi-Modal Learning for Bundle Construction (2025-07-18)
Enhancing Cross-task Transfer of Large Language Models via Activation Steering (2025-07-17)
Disentangling coincident cell events using deep transfer learning and compressive sensing (2025-07-17)
Best Practices for Large-Scale, Pixel-Wise Crop Mapping and Transfer Learning Workflows (2025-07-16)
HanjaBridge: Resolving Semantic Ambiguity in Korean LLMs via Hanja-Augmented Pre-Training (2025-07-15)
Robust-Multi-Task Gradient Boosting (2025-07-15)
Calibrated and Robust Foundation Models for Vision-Language and Medical Image Tasks Under Distribution Shift (2025-07-12)