
Lyra: An Efficient and Speech-Centric Framework for Omni-Cognition

Zhisheng Zhong, Chengyao Wang, Yuqi Liu, Senqiao Yang, Longxiang Tang, Yuechen Zhang, Jingyao Li, Tianyuan Qu, Yanwei Li, Yukang Chen, Shaozuo Yu, Sitong Wu, Eric Lo, Shu Liu, Jiaya Jia

2024-12-12 · Visual Question Answering (VQA)
Paper · PDF · Code (official)

Abstract

As Multi-modal Large Language Models (MLLMs) evolve, expanding beyond single-domain capabilities is essential to meet the demands for more versatile and efficient AI. However, previous omni-models have insufficiently explored speech, neglecting its integration with multi-modality. We introduce Lyra, an efficient MLLM that enhances multimodal abilities, including advanced long-speech comprehension, sound understanding, cross-modality efficiency, and seamless speech interaction. To achieve efficiency and speech-centric capabilities, Lyra employs three strategies: (1) leveraging existing open-source large models and a proposed multi-modality LoRA to reduce training costs and data requirements; (2) using a latent multi-modality regularizer and extractor to strengthen the relationship between speech and other modalities, thereby enhancing model performance; and (3) constructing a high-quality, extensive dataset that includes 1.5M multi-modal (language, vision, audio) data samples and 12K long speech samples, enabling Lyra to handle complex long speech inputs and achieve more robust omni-cognition. Compared to other omni-methods, Lyra achieves state-of-the-art performance on various vision-language, vision-speech, and speech-language benchmarks, while also using fewer computational resources and less training data.
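The abstract only names these mechanisms, so a rough sketch may help make strategies (1) and (2) concrete. Below is a minimal PyTorch illustration of what a per-modality LoRA adapter and a latent cross-modality regularizer could look like. Everything here (class names, the rank, the InfoNCE-style loss) is a hypothetical reading of the abstract for illustration only, not Lyra's actual design; the real implementation is in the official code linked above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LoRALinear(nn.Module):
    """Frozen base linear layer plus trainable low-rank updates.

    Hypothetical sketch of a "multi-modality LoRA": one low-rank
    adapter pair per modality, selected at forward time, so the shared
    backbone stays frozen and only the small adapters are trained.
    """

    def __init__(self, base: nn.Linear, rank: int = 16, alpha: float = 32.0,
                 modalities=("text", "vision", "audio")):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # backbone weights stay frozen
        self.scaling = alpha / rank
        # One (down, up) projection pair per modality.
        self.down = nn.ModuleDict(
            {m: nn.Linear(base.in_features, rank, bias=False) for m in modalities})
        self.up = nn.ModuleDict(
            {m: nn.Linear(rank, base.out_features, bias=False) for m in modalities})
        for m in modalities:
            nn.init.zeros_(self.up[m].weight)  # adapters start as a zero update

    def forward(self, x: torch.Tensor, modality: str) -> torch.Tensor:
        return self.base(x) + self.scaling * self.up[modality](self.down[modality](x))


def latent_alignment_loss(speech_latents: torch.Tensor,
                          other_latents: torch.Tensor,
                          temperature: float = 0.07) -> torch.Tensor:
    """Toy stand-in for the paper's latent multi-modality regularizer:
    an InfoNCE-style loss that pulls paired speech / other-modality
    latents together. The pooling and loss form are assumptions, not
    the paper's actual formulation."""
    s = F.normalize(speech_latents.mean(dim=1), dim=-1)  # (B, D) pooled speech
    o = F.normalize(other_latents.mean(dim=1), dim=-1)   # (B, D) pooled vision/text
    logits = s @ o.t() / temperature                     # (B, B) similarity matrix
    targets = torch.arange(s.size(0), device=s.device)   # diagonal pairs are positives
    return F.cross_entropy(logits, targets)


# Example usage (shapes are arbitrary):
layer = LoRALinear(nn.Linear(4096, 4096))
y = layer(torch.randn(2, 8, 4096), modality="audio")
loss = latent_alignment_loss(torch.randn(4, 10, 512), torch.randn(4, 24, 512))
```

The appeal of this shape of design, and plausibly why the abstract claims lower training cost, is that only the rank-16 adapter matrices and the regularizer receive gradients, while the large pretrained backbone is reused as-is.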

Results

Task                            | Dataset   | Metric      | Value | Model
Visual Question Answering (VQA) | MM-Vet    | GPT-4 score | 71.4  | Lyra-Pro
Visual Question Answering (VQA) | MM-Vet    | GPT-4 score | 63.5  | Lyra-Base
Visual Question Answering (VQA) | MM-Vet    | GPT-4 score | 51.2  | Lyra-Mini
Visual Question Answering (VQA) | MME       | Score       | 2485  | Lyra-Pro
Visual Question Answering (VQA) | EgoSchema | Acc         | 75.8  | Lyra-Pro
Visual Question Answering (VQA) | Video-MME | Acc         | 69.9  | Lyra-Pro
Visual Question Answering (VQA) | TextVQA   | Acc         | 83.5  | Lyra-Pro
Visual Question Answering (VQA) | MVBench   | Acc         | 72.3  | Lyra-Pro

Related Papers

VisionThink: Smart and Efficient Vision Language Model via Reinforcement Learning (2025-07-17)
MGFFD-VLM: Multi-Granularity Prompt Learning for Face Forgery Detection with VLM (2025-07-16)
Describe Anything Model for Visual Question Answering on Text-rich Images (2025-07-16)
Evaluating Attribute Confusion in Fashion Text-to-Image Generation (2025-07-09)
LinguaMark: Do Multimodal Models Speak Fairly? A Benchmark-Based Evaluation (2025-07-09)
Barriers in Integrating Medical Visual Question Answering into Radiology Workflows: A Scoping Review and Clinicians' Insights (2025-07-09)
MagiC: Evaluating Multimodal Cognition Toward Grounded Visual Reasoning (2025-07-09)
Enhancing Scientific Visual Question Answering through Multimodal Reasoning and Ensemble Modeling (2025-07-08)