Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


From Simulated Mixtures to Simulated Conversations as Training Data for End-to-End Neural Diarization

Federico Landini, Alicia Lozano-Diez, Mireia Diez, Lukáš Burget

2022-04-02 · Speaker Diarization
Paper · PDF · Code (official)

Abstract

End-to-end neural diarization (EEND) is nowadays one of the most prominent research topics in speaker diarization. EEND presents an attractive alternative to standard cascaded diarization systems, since a single model is trained at once to handle the whole diarization problem. Several EEND variants and approaches have been proposed; however, all of these models require large amounts of annotated data for training, and available annotated data are scarce. Thus, EEND works have mostly used simulated mixtures for training. However, simulated mixtures do not resemble real conversations in many aspects. In this work, we present an alternative method for creating synthetic conversations that resemble real ones, using statistics about the distributions of pauses and overlaps estimated on genuine conversations. Furthermore, we analyze the effect of the source of the statistics, of different augmentations, and of the amount of data. We demonstrate that our approach performs substantially better than the original one while reducing the dependence on the fine-tuning stage. Experiments are carried out on 2-speaker telephone conversations from Callhome and DIHARD 3. Together with this publication, we release our implementations of EEND and of the method for creating simulated conversations.
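The core idea of the abstract — placing utterances on a timeline with gaps drawn from pause and overlap statistics estimated on genuine conversations — can be sketched as follows. This is a minimal illustration, not the authors' released implementation: the function name, the fixed overlap probability, and the toy duration statistics are all assumptions; in practice the pause/overlap samples would be estimated from real corpora such as Callhome.

```python
import random

def simulate_conversation(utterances, pause_samples, overlap_samples,
                          p_overlap=0.2, seed=0):
    """Place utterances on a timeline, drawing the gap between consecutive
    turns from empirical pause/overlap samples.

    Hypothetical sketch of the paper's idea; names and parameters are
    assumptions, not the released implementation.
    """
    rng = random.Random(seed)
    timeline = []   # list of (speaker, start, end) in seconds
    cursor = 0.0    # end time of the previous utterance
    for spk, dur in utterances:
        if timeline:
            if rng.random() < p_overlap:
                # next turn starts before the previous one ends (overlap)
                gap = -rng.choice(overlap_samples)
            else:
                # silent pause between turns
                gap = rng.choice(pause_samples)
            start = max(0.0, cursor + gap)
        else:
            start = 0.0
        timeline.append((spk, start, start + dur))
        cursor = start + dur
    return timeline

# Toy example: two speakers alternating; pause/overlap statistics
# (seconds) that would in practice be estimated from real conversations.
utts = [("A", 2.0), ("B", 1.5), ("A", 3.0), ("B", 2.5)]
pauses = [0.2, 0.5, 0.8]
overlaps = [0.3, 0.6]
for spk, s, e in simulate_conversation(utts, pauses, overlaps):
    print(f"{spk}: {s:.2f}-{e:.2f}")
```

Drawing the gap from corpus-level statistics, rather than concatenating fully overlapped mixtures, is what makes the resulting timelines resemble the turn-taking structure of real conversations.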

Related Papers

Efficient and Generalizable Speaker Diarization via Structured Pruning of Self-Supervised Models (2025-06-23)
M3SD: Multi-modal, Multi-scenario and Multi-language Speaker Diarization Dataset (2025-06-17)
Exploring Speaker Diarization with Mixture of Experts (2025-06-17)
Seewo's Submission to MLC-SLM: Lessons learned from Speech Reasoning Language Models (2025-06-16)
SC-SOT: Conditioning the Decoder on Diarized Speaker Information for End-to-End Overlapped Speech Recognition (2025-06-15)
Diarization-Aware Multi-Speaker Automatic Speech Recognition via Large Language Models (2025-06-06)
Improving Neural Diarization through Speaker Attribute Attractors and Local Dependency Modeling (2025-06-05)
Speaker Diarization with Overlapping Community Detection Using Graph Attention Networks and Label Propagation Algorithm (2025-06-03)