Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


MSVD-Indonesian: A Benchmark for Multimodal Video-Text Tasks in Indonesian

Willy Fitra Hendria

Published: 2023-06-20
Tasks: Video Retrieval · Text Retrieval · Cross-Lingual Transfer · Transfer Learning · Text to Video Retrieval · Video Captioning · Video Description · Retrieval · Video to Text Retrieval
Links: Paper · PDF · Code (official)

Abstract

Multimodal learning on video and text data has been receiving growing attention from many researchers across various research tasks, including text-to-video retrieval, video-to-text retrieval, and video captioning. Although many algorithms have been proposed for these challenging tasks, most of them are developed on English-language datasets. Despite Indonesian being one of the most spoken languages in the world, research progress on multimodal video-text tasks with Indonesian sentences remains under-explored, likely due to the absence of a public benchmark dataset. To address this issue, we construct the first public Indonesian video-text dataset by translating the English sentences from the MSVD dataset into Indonesian. Using our dataset, we then train neural network models originally developed for English video-text datasets on three tasks, i.e., text-to-video retrieval, video-to-text retrieval, and video captioning. Recent neural network-based approaches to video-text tasks often utilize a feature extractor that is primarily pretrained on an English vision-language dataset. Since the availability of pretraining resources with Indonesian sentences is relatively limited, the applicability of those approaches to our dataset is still questionable. To overcome the lack of pretraining resources, we apply cross-lingual transfer learning by utilizing feature extractors pretrained on the English dataset, and we then fine-tune the models on our Indonesian dataset. Our experimental results show that this approach improves performance on all metrics across the three tasks. Finally, we discuss potential future works using our dataset, to inspire further research in Indonesian multimodal video-text tasks. We believe that our dataset and experimental results can provide valuable contributions to the community. Our dataset is available on GitHub.
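The abstract describes fine-tuning English-pretrained retrieval models (e.g. X-CLIP-style architectures) on Indonesian captions. The exact training objective is not stated on this page, but CLIP-style retrieval models are commonly trained with a symmetric InfoNCE (contrastive) loss over a batch of matched video/text embedding pairs. The numpy sketch below illustrates that objective under those assumptions; the function names and temperature value are illustrative, not taken from the paper.

```python
import numpy as np

def cross_entropy_diag(logits):
    """Cross-entropy where the correct class for row i is column i."""
    shifted = logits - logits.max(axis=1, keepdims=True)          # numeric stability
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    n = logits.shape[0]
    return -log_probs[np.arange(n), np.arange(n)].mean()

def clip_style_loss(video_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE over a batch of paired embeddings.

    video_emb, text_emb: (batch, dim) arrays; row i of each is a matched pair.
    """
    # L2-normalize so the dot product is cosine similarity.
    v = video_emb / np.linalg.norm(video_emb, axis=1, keepdims=True)
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    logits = (v @ t.T) / temperature       # (batch, batch) similarity matrix
    # Average the video-to-text and text-to-video directions.
    return 0.5 * (cross_entropy_diag(logits) + cross_entropy_diag(logits.T))
```

In a cross-lingual transfer setup of the kind the abstract describes, the video encoder's English-pretrained weights would be reused as-is, and this loss would drive fine-tuning on Indonesian caption embeddings.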

Results

Task                    | Dataset         | Metric                    | Value  | Model
Video Retrieval         | MSVD-Indonesian | text-to-video Mean Rank   | 17.5   | X-CLIP (Cross-Lingual)
Video Retrieval         | MSVD-Indonesian | text-to-video Median Rank | 3      | X-CLIP (Cross-Lingual)
Video Retrieval         | MSVD-Indonesian | text-to-video R@1         | 32.3   | X-CLIP (Cross-Lingual)
Video Retrieval         | MSVD-Indonesian | text-to-video R@5         | 62.3   | X-CLIP (Cross-Lingual)
Video Retrieval         | MSVD-Indonesian | text-to-video R@10        | 74.9   | X-CLIP (Cross-Lingual)
Video Retrieval         | MSVD-Indonesian | video-to-text Mean Rank   | 6.4    | X-CLIP (Cross-Lingual)
Video Retrieval         | MSVD-Indonesian | video-to-text Median Rank | 2      | X-CLIP (Cross-Lingual)
Video Retrieval         | MSVD-Indonesian | video-to-text R@1         | 44.9   | X-CLIP (Cross-Lingual)
Video Retrieval         | MSVD-Indonesian | video-to-text R@5         | 77.6   | X-CLIP (Cross-Lingual)
Video Retrieval         | MSVD-Indonesian | video-to-text R@10        | 88.8   | X-CLIP (Cross-Lingual)
Video Captioning        | MSVD-Indonesian | BLEU-4                    | 58.68  | VNS-GRU (Cross-Lingual)
Video Captioning        | MSVD-Indonesian | CIDEr                     | 126.65 | VNS-GRU (Cross-Lingual)
Video Captioning        | MSVD-Indonesian | METEOR                    | 40.33  | VNS-GRU (Cross-Lingual)
Video Captioning        | MSVD-Indonesian | ROUGE-L                   | 76.84  | VNS-GRU (Cross-Lingual)
Text to Video Retrieval | MSVD-Indonesian | Mean Rank                 | 17.5   | X-CLIP (Cross-Lingual)
Text to Video Retrieval | MSVD-Indonesian | Median Rank               | 3      | X-CLIP (Cross-Lingual)
Text to Video Retrieval | MSVD-Indonesian | R@1                       | 32.3   | X-CLIP (Cross-Lingual)
Text to Video Retrieval | MSVD-Indonesian | R@5                       | 63.3   | X-CLIP (Cross-Lingual)
Text to Video Retrieval | MSVD-Indonesian | R@10                      | 74.9   | X-CLIP (Cross-Lingual)
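The retrieval rows above use the standard ranking metrics for this task: R@K (percentage of queries whose correct match appears in the top K), plus Median and Mean Rank of the correct match. As a reference for how these numbers are typically computed, here is a minimal numpy sketch, assuming a query-by-candidate similarity matrix in which the ground-truth match for query i is candidate i (the usual evaluation convention; the function name is illustrative).

```python
import numpy as np

def retrieval_metrics(sim):
    """R@1/5/10, Median Rank, and Mean Rank from a (queries x candidates)
    similarity matrix whose ground-truth match for query i is candidate i."""
    # Sort candidates for each query by descending similarity.
    order = np.argsort(-sim, axis=1)
    # Rank of the correct candidate for each query (1 = best).
    ranks = np.argmax(order == np.arange(len(sim))[:, None], axis=1) + 1
    return {
        "R@1": 100.0 * np.mean(ranks <= 1),
        "R@5": 100.0 * np.mean(ranks <= 5),
        "R@10": 100.0 * np.mean(ranks <= 10),
        "MedR": float(np.median(ranks)),
        "MeanR": float(np.mean(ranks)),
    }
```

For example, a perfectly diagonal similarity matrix yields R@1 = 100 and Median/Mean Rank = 1, while the table's text-to-video Mean Rank of 17.5 indicates the correct video sits, on average, near position 17 in the ranked list.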
