Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.

Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.

LAST: Language Model Aware Speech Tokenization

Arnon Turetzky, Yossi Adi

Published: 2024-09-05
Tasks: Speech-to-Text · Quantization · Text-to-Speech · Language Modelling

Abstract

Speech tokenization serves as the foundation of speech language models (LMs), enabling them to perform various tasks such as spoken language modeling, text-to-speech, speech-to-text, etc. Most speech tokenizers are trained independently of the LM training process, relying on separate acoustic models and quantization methods. Such an approach may create a mismatch between the tokenization process and its later usage. In this study, we propose a novel approach to training a speech tokenizer by leveraging objectives from pre-trained textual LMs. We advocate for the integration of this objective into the process of learning discrete speech representations. Our aim is to transform features from a pre-trained speech model into a new feature space that enables better clustering for speech LMs. We empirically investigate the impact of various model design choices, including speech vocabulary size and text LM size. Our results demonstrate that the proposed tokenization method outperforms the evaluated baselines on both spoken language modeling and speech-to-text. More importantly, unlike prior work, the proposed method allows a single pre-trained LM to process both speech and text inputs, setting it apart from conventional tokenization approaches.
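The core idea above — project frozen speech-encoder features into a new space, quantize them against a codebook, and let a pre-trained text LM's next-token objective shape that space — can be illustrated with a toy numpy sketch. All names, shapes, and the stand-in "LM" here are hypothetical, not the paper's actual architecture; a real implementation would backpropagate the LM loss through the projection and codebook.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions: frames, frozen speech-feature dim,
# projected token-space dim, and speech vocabulary size.
T, D_speech, D_tok, K = 20, 32, 16, 8

speech_feats = rng.normal(size=(T, D_speech))    # frozen speech-encoder output
proj = rng.normal(size=(D_speech, D_tok)) * 0.1  # learnable adapter (the "new feature space")
codebook = rng.normal(size=(K, D_tok))           # learnable codebook of K speech tokens

def tokenize(feats, proj, codebook):
    """Project frozen speech features, then assign each frame to its
    nearest codebook entry -- the discrete speech token."""
    z = feats @ proj
    dists = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)  # (T, K)
    return dists.argmin(axis=1)

tokens = tokenize(speech_feats, proj, codebook)

# Stand-in for a frozen text LM: next-token logits from an embedding table.
lm_emb = rng.normal(size=(K, D_tok))

def lm_nll(tokens):
    """Mean negative log-likelihood of the token sequence under the toy LM.
    In the paper's setup, the gradient of this loss (not computed here)
    would train the projection/codebook so tokens suit the LM."""
    logits = lm_emb[tokens[:-1]] @ lm_emb.T                         # (T-1, K)
    logp = logits - np.log(np.exp(logits).sum(-1, keepdims=True))   # log-softmax
    return -logp[np.arange(T - 1), tokens[1:]].mean()

loss = lm_nll(tokens)
```

The sketch only shows the forward pass; the paper's contribution is precisely that this LM loss, rather than a purely acoustic objective, drives how the quantization space is learned.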

Results

Task                Dataset  Metric                           Value  Model
Language Modelling  SALMon   Background (Domain) Consistency  56     LAST 1.3B
Language Modelling  SALMon   Background (Random) Consistency  61     LAST 1.3B
Language Modelling  SALMon   Background Alignment             53     LAST 1.3B
Language Modelling  SALMon   Gender Consistency               68.5   LAST 1.3B
Language Modelling  SALMon   Room Consistency                 62.5   LAST 1.3B
Language Modelling  SALMon   Sentiment Alignment              53.5   LAST 1.3B
Language Modelling  SALMon   Sentiment Consistency            65     LAST 1.3B
Language Modelling  SALMon   Speaker Consistency              64.5   LAST 1.3B
Language Modelling  SALMon   Background (Domain) Consistency  55.5   LAST 350M
Language Modelling  SALMon   Background (Random) Consistency  60.5   LAST 350M
Language Modelling  SALMon   Background Alignment             54.5   LAST 350M
Language Modelling  SALMon   Gender Consistency               70.5   LAST 350M
Language Modelling  SALMon   Room Consistency                 61     LAST 350M
Language Modelling  SALMon   Sentiment Alignment              51.5   LAST 350M
Language Modelling  SALMon   Sentiment Consistency            64     LAST 350M
Language Modelling  SALMon   Speaker Consistency              63     LAST 350M

Related Papers

Efficient Deployment of Spiking Neural Networks on SpiNNaker2 for DVS Gesture Recognition Using Neuromorphic Intermediate Representation (2025-09-04)
Visual-Language Model Knowledge Distillation Method for Image Quality Assessment (2025-07-21)
Hear Your Code Fail, Voice-Assisted Debugging for Python (2025-07-20)
An End-to-End DNN Inference Framework for the SpiNNaker2 Neuromorphic MPSoC (2025-07-18)
Task-Specific Audio Coding for Machines: Machine-Learned Latent Features Are Codes for That Machine (2025-07-17)
Angle Estimation of a Single Source with Massive Uniform Circular Arrays (2025-07-17)
NonverbalTTS: A Public English Corpus of Text-Aligned Nonverbal Vocalizations with Emotion Annotations for Text-to-Speech (2025-07-17)
Making Language Model a Hierarchical Classifier and Generator (2025-07-17)