
Fast Timing-Conditioned Latent Audio Diffusion

Zach Evans, CJ Carr, Josiah Taylor, Scott H. Hawley, Jordi Pons

2024-02-07 · Tasks: Text-to-Music Generation, Audio Generation
Paper · PDF · Code (official)

Abstract

Generating long-form 44.1kHz stereo audio from text prompts can be computationally demanding. Further, most previous works do not tackle that music and sound effects naturally vary in their duration. Our research focuses on the efficient generation of long-form, variable-length stereo music and sounds at 44.1kHz using text prompts with a generative model. Stable Audio is based on latent diffusion, with its latent defined by a fully-convolutional variational autoencoder. It is conditioned on text prompts as well as timing embeddings, allowing for fine control over both the content and length of the generated music and sounds. Stable Audio is capable of rendering stereo signals of up to 95 sec at 44.1kHz in 8 sec on an A100 GPU. Despite its compute efficiency and fast inference, it is one of the best in two public text-to-music and -audio benchmarks and, differently from state-of-the-art models, can generate music with structure and stereo sounds.
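For intuition, below is a minimal sketch of the timing-conditioning idea the abstract describes: the start offset and total length (in seconds) are turned into embeddings and passed to the latent diffusion model alongside the text conditioning. The module name, normalization, and dimensions here are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn as nn

class TimingEmbedder(nn.Module):
    """Sketch of timing conditioning: map (seconds_start, seconds_total)
    to conditioning tokens. Sizes and projection choice are assumptions."""
    def __init__(self, embed_dim: int = 768, max_seconds: float = 95.0):
        super().__init__()
        self.max_seconds = max_seconds
        # Project each normalized scalar to the conditioning dimension.
        self.start_proj = nn.Linear(1, embed_dim)
        self.total_proj = nn.Linear(1, embed_dim)

    def forward(self, seconds_start: torch.Tensor, seconds_total: torch.Tensor) -> torch.Tensor:
        # Normalize timing values to [0, 1] before projecting.
        start = (seconds_start / self.max_seconds).unsqueeze(-1)
        total = (seconds_total / self.max_seconds).unsqueeze(-1)
        start_emb = self.start_proj(start)  # (batch, embed_dim)
        total_emb = self.total_proj(total)  # (batch, embed_dim)
        # Two extra conditioning tokens, used together with the text tokens.
        return torch.stack([start_emb, total_emb], dim=1)  # (batch, 2, embed_dim)

# Usage: generate a 30-second clip starting at offset 0 within the window.
timing = TimingEmbedder()
tokens = timing(torch.tensor([0.0]), torch.tensor([30.0]))
print(tokens.shape)  # torch.Size([1, 2, 768])
```

In this sketch the timing tokens would be concatenated with the text-encoder output that the diffusion model attends to, which is what lets the model control both content and output length.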

Results

Task | Dataset | Metric | Value | Model
Audio Generation | AudioCaps | CLAP_LAION | 0.41 | Stable Audio
Audio Generation | AudioCaps | FD_openl3 | 103.66 | Stable Audio
Audio Generation | AudioCaps | KL_passt | 2.89 | Stable Audio
Text-to-Music Generation | MusicCaps | FD_openl3 | 108.69 | Stable Audio
Text-to-Music Generation | MusicCaps | KL_passt | 0.8 | Stable Audio
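For reference, FD_openl3 is a Fréchet distance computed between OpenL3 embeddings of reference and generated audio (lower is better), while KL_passt and CLAP_LAION use PaSST classifier outputs and LAION-CLAP similarity, respectively. The snippet below is a generic FID-style Fréchet distance over two embedding sets, shown only to illustrate the metric; it is not the benchmark's exact evaluation code.

```python
import numpy as np
from scipy import linalg

def frechet_distance(emb_a: np.ndarray, emb_b: np.ndarray) -> float:
    """Frechet distance between Gaussians fit to two embedding sets,
    the same formula used for FID-style metrics such as FD_openl3."""
    mu_a, mu_b = emb_a.mean(axis=0), emb_b.mean(axis=0)
    cov_a = np.cov(emb_a, rowvar=False)
    cov_b = np.cov(emb_b, rowvar=False)
    covmean, _ = linalg.sqrtm(cov_a @ cov_b, disp=False)
    if np.iscomplexobj(covmean):
        covmean = covmean.real  # drop numerical imaginary parts
    diff = mu_a - mu_b
    return float(diff @ diff + np.trace(cov_a + cov_b - 2.0 * covmean))

# Example with random embeddings standing in for reference/generated sets.
rng = np.random.default_rng(0)
reference = rng.normal(size=(200, 512))
generated = rng.normal(loc=0.1, size=(200, 512))
print(frechet_distance(reference, generated))
```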

Related Papers

FreeAudio: Training-Free Timing Planning for Controllable Long-Form Text-to-Audio Generation (2025-07-11)
ThinkSound: Chain-of-Thought Reasoning in Multimodal Large Language Models for Audio Generation and Editing (2025-06-26)
Step-by-Step Video-to-Audio Synthesis via Negative Audio Guidance (2025-06-26)
Kling-Foley: Multimodal Diffusion Transformer for High-Quality Video-to-Audio Generation (2025-06-24)
MuseControlLite: Multifunctional Music Generation with Lightweight Conditioners (2025-06-23)
Diff-TONE: Timestep Optimization for iNstrument Editing in Text-to-Music Diffusion Models (2025-06-18)
LiLAC: A Lightweight Latent ControlNet for Musical Audio Generation (2025-06-13)
ViSAGe: Video-to-Spatial Audio Generation (2025-06-13)