Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


OmniDataComposer: A Unified Data Structure for Multimodal Data Fusion and Infinite Data Generation

Dongyang Yu, Shihao Wang, Yuan Fang, Wangpeng An

2023-08-08 · Speech Recognition · Zero-Shot Video Question Answer · Automatic Speech Recognition (ASR) · Question Answering · Video Captioning · Object Tracking · Optical Character Recognition (OCR)
Paper · PDF · Code (official)

Abstract

This paper presents OmniDataComposer, an approach to multimodal data fusion and unlimited data generation that aims to simplify the interplay among diverse data modalities. Its core contribution is a cohesive data structure that processes and merges multimodal inputs, including video, audio, and text. The algorithm combines several operations: video/image caption extraction, dense caption extraction, Automatic Speech Recognition (ASR), Optical Character Recognition (OCR), the Recognize Anything Model (RAM), and object tracking. OmniDataComposer can identify over 6400 categories of objects, substantially broadening the spectrum of visual information it captures. It amalgamates these diverse modalities so that they enhance one another and enable cross-modal data correction. The final output transforms each video input into an elaborate sequential document, effectively turning videos into thorough narratives that are easier for large language models to process. Future prospects include optimizing the datasets for each modality to encourage unlimited data generation. This robust base will offer valuable input to models like ChatGPT, enabling them to create higher-quality datasets for video captioning and to ease question-answering tasks based on video content. OmniDataComposer inaugurates a new stage in multimodal learning, with enormous potential for augmenting AI's understanding and generation of complex, real-world data.
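Since the abstract describes the fused record only in prose, here is a minimal Python sketch of what such a unified structure and its "sequential document" flattening could look like. All class names, field names, and the `to_narrative` helper are hypothetical illustrations under these assumptions, not the authors' published schema or the official code's API.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical sketch of a unified multimodal record in the spirit of
# OmniDataComposer. Names and fields are illustrative assumptions.

@dataclass
class FrameAnnotation:
    """Per-frame visual evidence extracted by the vision models."""
    timestamp: float                                          # seconds from video start
    caption: str                                              # frame/image caption
    dense_captions: List[str] = field(default_factory=list)  # region-level captions
    ocr_text: List[str] = field(default_factory=list)         # OCR hits in the frame
    tracked_objects: List[str] = field(default_factory=list)  # e.g. RAM labels + track IDs

@dataclass
class VideoDocument:
    """Fused multimodal record for one video."""
    video_id: str
    asr_transcript: str                                       # ASR output for the audio track
    frames: List[FrameAnnotation] = field(default_factory=list)

    def to_narrative(self) -> str:
        """Flatten the fused record into a sequential text document,
        roughly matching the paper's 'elaborate sequential document'."""
        lines = [f"Video {self.video_id}", f"Speech: {self.asr_transcript}"]
        for f in sorted(self.frames, key=lambda x: x.timestamp):
            parts = [f"[{f.timestamp:.1f}s] {f.caption}"]
            if f.tracked_objects:
                parts.append("objects: " + ", ".join(f.tracked_objects))
            if f.ocr_text:
                parts.append("on-screen text: " + " ".join(f.ocr_text))
            lines.append("; ".join(parts))
        return "\n".join(lines)
```

In the pipeline the abstract outlines, outputs from the captioning, ASR, OCR, RAM, and tracking models would populate such fields before the record is flattened into a narrative for a large language model to consume.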

Results

| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Question Answering | MSRVTT-QA | Accuracy | 55.3 | Omni-VideoAssistant |
| Question Answering | MSRVTT-QA | Confidence Score | 3.3 | Omni-VideoAssistant |
| Video Question Answering | MSRVTT-QA | Accuracy | 55.3 | Omni-VideoAssistant |
| Video Question Answering | MSRVTT-QA | Confidence Score | 3.3 | Omni-VideoAssistant |

Related Papers

Task-Specific Audio Coding for Machines: Machine-Learned Latent Features Are Codes for That Machine (2025-07-17)
NonverbalTTS: A Public English Corpus of Text-Aligned Nonverbal Vocalizations with Emotion Annotations for Text-to-Speech (2025-07-17)
From Roots to Rewards: Dynamic Tree Reasoning with RL (2025-07-17)
Enter the Mind Palace: Reasoning and Planning for Long-term Active Embodied Question Answering (2025-07-17)
Vision-and-Language Training Helps Deploy Taxonomic Knowledge but Does Not Fundamentally Alter It (2025-07-17)
City-VLM: Towards Multidomain Perception Scene Understanding via Multimodal Incomplete Learning (2025-07-17)
MVA 2025 Small Multi-Object Tracking for Spotting Birds Challenge: Dataset, Methods, and Results (2025-07-17)
VisionThink: Smart and Efficient Vision Language Model via Reinforcement Learning (2025-07-17)