Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


CrowdSpeech and VoxDIY: Benchmark Datasets for Crowdsourced Audio Transcription

Nikita Pavlichenko, Ivan Stelmakh, Dmitry Ustalov

Published: 2021-07-02 · Tasks: Crowdsourced Text Aggregation, Speech Recognition
Paper · PDF · Code (official)

Abstract

Domain-specific data is the crux of the successful transfer of machine learning systems from benchmarks to real life. In simple problems such as image classification, crowdsourcing has become one of the standard tools for cheap and time-efficient data collection, thanks in large part to advances in research on aggregation methods. However, the applicability of crowdsourcing to more complex tasks (e.g., speech recognition) remains limited due to the lack of principled aggregation methods for these modalities. The main obstacle to designing aggregation methods for more advanced applications is the absence of training data, and in this work, we focus on bridging this gap in speech recognition. For this, we collect and release CrowdSpeech -- the first publicly available large-scale dataset of crowdsourced audio transcriptions. Evaluation of existing and novel aggregation methods on our data shows room for improvement, suggesting that our work may entail the design of better algorithms. At a higher level, we also contribute to the more general challenge of developing the methodology for reliable data collection via crowdsourcing. To that end, we design a principled pipeline for constructing datasets of crowdsourced audio transcriptions in any novel domain. We show its applicability on an under-resourced language by constructing VoxDIY -- a counterpart of CrowdSpeech for the Russian language. We also release the code that allows a full replication of our data collection pipeline and share various insights on best practices of data collection via crowdsourcing.
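To illustrate the aggregation problem the abstract describes, the sketch below shows a deliberately simple baseline: given several workers' transcriptions of the same recording, pick the "medoid" -- the transcription with the smallest total word-level edit distance to all the others. This is not the paper's ROVER or RASA method; all function names here are ours, for illustration only.

```python
# A minimal crowdsourced-transcription aggregation baseline (NOT the paper's
# ROVER/RASA methods): select the worker answer closest, by word-level edit
# distance, to every other worker's answer for the same audio clip.

def word_edit_distance(a: str, b: str) -> int:
    """Levenshtein distance between two sentences, counted in words."""
    aw, bw = a.split(), b.split()
    # Rolling single-row dynamic-programming table.
    dp = list(range(len(bw) + 1))
    for i, x in enumerate(aw, 1):
        prev, dp[0] = dp[0], i
        for j, y in enumerate(bw, 1):
            prev, dp[j] = dp[j], min(
                dp[j] + 1,            # deletion
                dp[j - 1] + 1,        # insertion
                prev + (x != y),      # substitution or match
            )
    return dp[-1]

def aggregate_medoid(transcriptions: list[str]) -> str:
    """Return the transcription with minimal total distance to all others."""
    return min(
        transcriptions,
        key=lambda t: sum(word_edit_distance(t, o) for o in transcriptions),
    )

workers = ["the cat sat", "the cat sad", "a cat sat"]
print(aggregate_medoid(workers))  # -> "the cat sat"
```

A medoid baseline like this can only ever output one of the submitted answers; the appeal of methods such as ROVER is that they can splice together a better transcription than any single worker produced.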

Results

Task                          | Dataset                | Metric                | Value | Model
------------------------------|------------------------|-----------------------|-------|--------
Crowdsourced Text Aggregation | CrowdSpeech test-other | Word Error Rate (WER) | 13.41 | ROVER
Crowdsourced Text Aggregation | CrowdSpeech test-other | Word Error Rate (WER) | 15.66 | HRRASA
Crowdsourced Text Aggregation | CrowdSpeech test-other | Word Error Rate (WER) | 15.67 | RASA
Crowdsourced Text Aggregation | CrowdSpeech test-clean | Word Error Rate (WER) |  7.29 | ROVER
Crowdsourced Text Aggregation | CrowdSpeech test-clean | Word Error Rate (WER) |  8.59 | HRRASA
Crowdsourced Text Aggregation | CrowdSpeech test-clean | Word Error Rate (WER) |  8.60 | RASA
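All results above are reported as Word Error Rate (WER): the word-level Levenshtein distance between the aggregated hypothesis and the reference transcription, normalized by the number of reference words (lower is better). A minimal sketch of the computation (the function name `wer` is ours, not taken from the paper's released code):

```python
# Word Error Rate: (substitutions + deletions + insertions) / reference length,
# computed over words via a standard edit-distance dynamic program.

def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(
                dp[i - 1][j] + 1,         # deletion
                dp[i][j - 1] + 1,         # insertion
                dp[i - 1][j - 1] + cost,  # substitution or match
            )
    return dp[-1][-1] / len(ref)

# Two words dropped out of a six-word reference -> WER = 2/6.
print(wer("the cat sat on the mat", "the cat sat mat"))
```

Note that WER is often multiplied by 100, as in the table above, and can exceed 100% when the hypothesis contains many insertions.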

Related Papers

Task-Specific Audio Coding for Machines: Machine-Learned Latent Features Are Codes for That Machine (2025-07-17)
NonverbalTTS: A Public English Corpus of Text-Aligned Nonverbal Vocalizations with Emotion Annotations for Text-to-Speech (2025-07-17)
WhisperKit: On-device Real-time ASR with Billion-Scale Transformers (2025-07-14)
VisualSpeaker: Visually-Guided 3D Avatar Lip Synthesis (2025-07-08)
A Hybrid Machine Learning Framework for Optimizing Crop Selection via Agronomic and Economic Forecasting (2025-07-06)
First Steps Towards Voice Anonymization for Code-Switching Speech (2025-07-02)
MambAttention: Mamba with Multi-Head Attention for Generalizable Single-Channel Speech Enhancement (2025-07-01)
AUTOMATIC PRONUNCIATION MISTAKE DETECTOR PROJECT REPORT (2025-06-25)