
Distill the Image to Nowhere: Inversion Knowledge Distillation for Multimodal Machine Translation

Ru Peng, Yawen Zeng, Junbo Zhao

2022-10-10 · Machine Translation · Multimodal Machine Translation · NMT · Translation · Knowledge Distillation

Abstract

Past works on multimodal machine translation (MMT) elevate the bilingual setup by incorporating additional aligned visual information. However, the image-must requirement of multimodal datasets largely hinders MMT's development: it demands an aligned triple of [image, source text, target text]. This limitation is especially troublesome during the inference phase, when no aligned image is provided, as in the standard NMT setup. In this work, we therefore introduce IKD-MMT, a novel MMT framework that supports an image-free inference phase via an inversion knowledge distillation scheme. In particular, a multimodal feature generator is trained with a knowledge distillation module to generate the multimodal feature directly from (only) the source text as input. While a few prior works have explored image-free inference for machine translation, their performance has yet to rival that of image-must translation. In our experiments, we identify our method as the first image-free approach to comprehensively rival or even surpass (almost) all image-must frameworks, achieving state-of-the-art results on the widely used Multi30k benchmark. Our code and data are available at: https://github.com/pengr/IKD-mmt/tree/master.
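
To make the distillation idea concrete, here is a minimal, hypothetical PyTorch sketch of an image-free feature generator trained against a frozen vision teacher. All module names, dimensions, and the MSE objective are illustrative assumptions, not the authors' exact design; the official implementation is at the repository linked above.

import torch
import torch.nn as nn

class MultimodalFeatureGenerator(nn.Module):
    """Student: predicts a 'visual' feature from source-text states alone.

    Hypothetical dimensions: d_text for the NMT encoder, d_vision for the
    teacher's image features.
    """
    def __init__(self, d_text: int = 512, d_vision: int = 2048):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(d_text, d_text),
            nn.ReLU(),
            nn.Linear(d_text, d_vision),
        )

    def forward(self, text_states: torch.Tensor) -> torch.Tensor:
        # text_states: (batch, seq_len, d_text) from the NMT text encoder.
        pooled = text_states.mean(dim=1)   # (batch, d_text)
        return self.proj(pooled)           # (batch, d_vision)

def distillation_loss(pred_feat: torch.Tensor, teacher_feat: torch.Tensor) -> torch.Tensor:
    # teacher_feat comes from a frozen vision encoder over the aligned image
    # and is only needed at training time; MSE is one plausible objective.
    return nn.functional.mse_loss(pred_feat, teacher_feat)

if __name__ == "__main__":
    gen = MultimodalFeatureGenerator()
    text_states = torch.randn(4, 20, 512)   # stand-in encoder outputs
    teacher_feat = torch.randn(4, 2048)     # stand-in image features
    loss = distillation_loss(gen(text_states), teacher_feat)
    loss.backward()
    print(loss.item())

At inference time, gen(text_states) supplies the multimodal feature with no image required, which is the essence of the image-free setup described in the abstract.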

Results

Task                           | Dataset  | Metric         | Value | Model
Machine Translation            | Multi30K | BLEU (EN-DE)   | 41.28 | IKD-MMT
Machine Translation            | Multi30K | Meteor (EN-DE) | 58.93 | IKD-MMT
Machine Translation            | Multi30K | Meteor (EN-FR) | 77.2  | IKD-MMT
Multimodal Machine Translation | Multi30K | BLEU (EN-DE)   | 41.28 | IKD-MMT
Multimodal Machine Translation | Multi30K | Meteor (EN-DE) | 58.93 | IKD-MMT
Multimodal Machine Translation | Multi30K | Meteor (EN-FR) | 77.2  | IKD-MMT

Related Papers

Visual-Language Model Knowledge Distillation Method for Image Quality Assessment (2025-07-21)
A Translation of Probabilistic Event Calculus into Markov Decision Processes (2025-07-17)
Uncertainty-Aware Cross-Modal Knowledge Distillation with Prototype Learning for Multimodal Brain-Computer Interfaces (2025-07-17)
DVFL-Net: A Lightweight Distilled Video Focal Modulation Network for Spatio-Temporal Action Recognition (2025-07-16)
Function-to-Style Guidance of LLMs for Code Translation (2025-07-15)
HanjaBridge: Resolving Semantic Ambiguity in Korean LLMs via Hanja-Augmented Pre-Training (2025-07-15)
Feature Distillation is the Better Choice for Model-Heterogeneous Federated Learning (2025-07-14)
KAT-V1: Kwai-AutoThink Technical Report (2025-07-11)