Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Qwen-Audio: Advancing Universal Audio Understanding via Unified Large-Scale Audio-Language Models

Yunfei Chu, Jin Xu, Xiaohuan Zhou, Qian Yang, Shiliang Zhang, Zhijie Yan, Chang Zhou, Jingren Zhou

2023-11-14 · Speech Recognition · Emotion Recognition in Conversation · Instruction Following · Automatic Speech Recognition (ASR) · Audio Classification · Audio Captioning · Acoustic Scene Classification

Paper · PDF · Code (official) · Code

Abstract

Instruction-following audio-language models have recently received broad attention as a means of audio interaction with humans. However, the absence of pre-trained audio models capable of handling diverse audio types and tasks has hindered progress in this field; consequently, most existing works support only a limited range of interaction capabilities. In this paper, we develop the Qwen-Audio model and address this limitation by scaling up audio-language pre-training to cover over 30 tasks and various audio types, such as human speech, natural sounds, music, and songs, to facilitate universal audio understanding. Directly co-training all tasks and datasets, however, can lead to interference, as the textual labels of different datasets vary considerably in task focus, language, annotation granularity, and text structure. To overcome this one-to-many interference, we carefully design a multi-task training framework that conditions the decoder on a sequence of hierarchical tags, encouraging knowledge sharing through shared tags while avoiding interference through task-specific tags. Remarkably, Qwen-Audio achieves impressive performance across diverse benchmark tasks without any task-specific fine-tuning, surpassing its counterparts. Building on these capabilities, we further develop Qwen-Audio-Chat, which accepts diverse audio and text inputs, enabling multi-turn dialogues and supporting various audio-centric scenarios.
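The hierarchical-tag conditioning is the core of the multi-task framework: the decoder's output is prefixed with a short sequence of tags, some shared across datasets (e.g. audio language) and some task-specific, so related tasks share knowledge while incompatible label formats stay separated. A minimal sketch of how such a prefix might be assembled; the tag names and their ordering below are illustrative placeholders, not the paper's exact special tokens:

```python
def build_decoder_prefix(task: str, audio_lang: str, text_lang: str,
                         timestamps: bool) -> list[str]:
    """Compose the hierarchical tag sequence that conditions the decoder."""
    prefix = ["<|startofanalysis|>"]        # hypothetical start-of-output tag
    prefix.append(f"<|{audio_lang}|>")      # shared tag: input audio language
    prefix.append(f"<|{task}|>")            # task-specific tag: isolates label formats
    prefix.append(f"<|{text_lang}|>")       # shared tag: output text language
    prefix.append("<|timestamps|>" if timestamps else "<|notimestamps|>")
    return prefix

# Example: English ASR without word-level timestamps.
print(build_decoder_prefix("transcribe", "en", "en", timestamps=False))
# ['<|startofanalysis|>', '<|en|>', '<|transcribe|>', '<|en|>', '<|notimestamps|>']
```

In the model itself such tags would be special tokens in the decoder vocabulary, so at inference time the chosen prefix steers the model toward the desired task and output format without any task-specific fine-tuning.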

Results

Task                          | Dataset                  | Metric                | Value | Model
Speech Recognition            | AISHELL-2 Test Android   | Word Error Rate (WER) | 3.3   | Qwen-Audio
Speech Recognition            | AISHELL-2 Test iOS       | Word Error Rate (WER) | 3.1   | Qwen-Audio
Speech Recognition            | AISHELL-2 Test Mic       | Word Error Rate (WER) | 3.3   | Qwen-Audio
Speech Recognition            | LibriSpeech test-clean   | Word Error Rate (WER) | 2.0   | Qwen-Audio
Speech Recognition            | LibriSpeech test-other   | Word Error Rate (WER) | 4.2   | Qwen-Audio
Speech Recognition            | AISHELL-1                | Word Error Rate (WER) | 1.29  | Qwen-Audio
Emotion Recognition           | MELD                     | Accuracy              | 55.7  | Qwen-Audio
Audio Classification          | VocalSound               | Accuracy              | 92.89 | Qwen-Audio
Acoustic Scene Classification | TUT Acoustic Scenes 2017 | 1:1 Accuracy          | 0.649 | Qwen-Audio
Acoustic Scene Classification | CochlScene               | 1:1 Accuracy          | 0.795 | Qwen-Audio
Audio Captioning              | Clotho                   | CIDEr                 | 0.441 | Qwen-Audio
Audio Captioning              | Clotho                   | SPICE                 | 0.136 | Qwen-Audio
Audio Captioning              | Clotho                   | SPIDEr                | 0.288 | Qwen-Audio
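Two notes on reading the table. The WER and accuracy figures are percentages (so 2.0 on LibriSpeech test-clean means a 2% word error rate), while 1:1 Accuracy is reported as a fraction. The three Clotho rows are also internally consistent: SPIDEr is defined as the arithmetic mean of SPICE and CIDEr, and (0.136 + 0.441) / 2 = 0.2885, which matches the reported 0.288 up to rounding.

For reference, word error rate is the word-level edit distance between the reference transcript and the hypothesis, normalized by the reference length. A minimal self-contained sketch in plain Python, not tied to any particular evaluation toolkit:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # One-row dynamic-programming edit distance over word sequences.
    d = list(range(len(hyp) + 1))
    for i, rw in enumerate(ref, 1):
        prev, d[0] = d[0], i
        for j, hw in enumerate(hyp, 1):
            cur = d[j]
            d[j] = min(d[j] + 1,           # deletion (ref word missing from hyp)
                       d[j - 1] + 1,       # insertion (extra hyp word)
                       prev + (rw != hw))  # substitution (0 if words match)
            prev = cur
    return d[-1] / len(ref)

print(wer("the cat sat on the mat", "the cat sat mat"))  # 2 deletions / 6 words ≈ 0.333
```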

Related Papers

Long-Short Distance Graph Neural Networks and Improved Curriculum Learning for Emotion Recognition in Conversation (2025-07-21)
Task-Specific Audio Coding for Machines: Machine-Learned Latent Features Are Codes for That Machine (2025-07-17)
NonverbalTTS: A Public English Corpus of Text-Aligned Nonverbal Vocalizations with Emotion Annotations for Text-to-Speech (2025-07-17)
AnyCap Project: A Unified Framework, Dataset, and Benchmark for Controllable Omni-modal Captioning (2025-07-17)
MUPAX: Multidimensional Problem Agnostic eXplainable AI (2025-07-17)
How Many Instructions Can LLMs Follow at Once? (2025-07-15)
DrafterBench: Benchmarking Large Language Models for Tasks Automation in Civil Engineering (2025-07-15)
WhisperKit: On-device Real-time ASR with Billion-Scale Transformers (2025-07-14)