Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


ETTA: Elucidating the Design Space of Text-to-Audio Models

Sang-gil Lee, Zhifeng Kong, Arushi Goel, Sungwon Kim, Rafael Valle, Bryan Catanzaro

2024-12-26 · Music Generation · Text-to-Music Generation · Audio Generation · Audio Captioning · Language Modelling

Paper · PDF · Code

Abstract

Recent years have seen significant progress in Text-To-Audio (TTA) synthesis, enabling users to enrich their creative workflows with synthetic audio generated from natural language prompts. Despite this progress, the effects of data, model architecture, training objective functions, and sampling strategies on target benchmarks are not well understood. With the purpose of providing a holistic understanding of the design space of TTA models, we set up a large-scale empirical experiment focused on diffusion and flow matching models. Our contributions include: 1) AF-Synthetic, a large dataset of high quality synthetic captions obtained from an audio understanding model; 2) a systematic comparison of different architectural, training, and inference design choices for TTA models; 3) an analysis of sampling methods and their Pareto curves with respect to generation quality and inference speed. We leverage the knowledge obtained from this extensive analysis to propose our best model dubbed Elucidated Text-To-Audio (ETTA). When evaluated on AudioCaps and MusicCaps, ETTA provides improvements over the baselines trained on publicly available data, while being competitive with models trained on proprietary data. Finally, we show ETTA's improved ability to generate creative audio following complex and imaginative captions -- a task that is more challenging than current benchmarks.
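The abstract compares diffusion and flow matching as training objectives. As background only (not the paper's implementation; `zero_model` is a placeholder), a minimal NumPy sketch of the conditional flow-matching loss with a linear interpolation path:

```python
import numpy as np

def flow_matching_loss(model, x1, rng):
    """Conditional flow-matching loss with a linear interpolation path.

    Draws noise x0 ~ N(0, I) and a time t ~ U(0, 1) per sample, forms
    x_t = (1 - t) * x0 + t * x1, and regresses the model's predicted
    velocity onto the constant target (x1 - x0).
    """
    x0 = rng.standard_normal(x1.shape)
    t = rng.uniform(size=(x1.shape[0], 1))        # one t per sample
    xt = (1.0 - t) * x0 + t * x1
    target = x1 - x0
    pred = model(xt, t)
    return float(np.mean((pred - target) ** 2))

# Toy check with a placeholder "model" that always predicts zero velocity.
rng = np.random.default_rng(0)
x1 = rng.standard_normal((64, 16))                # stand-in batch of latents
zero_model = lambda xt, t: np.zeros_like(xt)
loss = flow_matching_loss(zero_model, x1, rng)
```

A trained network would replace `zero_model`; the loss is minimized when the model predicts the velocity field transporting noise to data.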

Results

Task                      | Dataset   | Metric     | Value | Model
--------------------------|-----------|------------|-------|----------------
Audio Generation          | AudioCaps | CLAP_LAION | 0.6   | ETTA-FT-AC-100k
Audio Generation          | AudioCaps | CLAP_MS    | 0.43  | ETTA-FT-AC-100k
Audio Generation          | AudioCaps | FAD        | 2.03  | ETTA-FT-AC-100k
Audio Generation          | AudioCaps | FD         | 10.1  | ETTA-FT-AC-100k
Audio Generation          | AudioCaps | FD_openl3  | 61.79 | ETTA-FT-AC-100k
Audio Generation          | AudioCaps | IS         | 14.29 | ETTA-FT-AC-100k
Audio Generation          | AudioCaps | KL_passt   | 1.13  | ETTA-FT-AC-100k
Audio Generation          | AudioCaps | CLAP_LAION | 0.54  | ETTA
Audio Generation          | AudioCaps | CLAP_MS    | 0.43  | ETTA
Audio Generation          | AudioCaps | FAD        | 2.51  | ETTA
Audio Generation          | AudioCaps | FD         | 13.12 | ETTA
Audio Generation          | AudioCaps | FD_openl3  | 80.13 | ETTA
Audio Generation          | AudioCaps | IS         | 14.36 | ETTA
Audio Generation          | AudioCaps | KL_passt   | 1.22  | ETTA
Text-to-Music Generation  | MusicCaps | CLAP_LAION | 0.51  | ETTA
Text-to-Music Generation  | MusicCaps | CLAP_MS    | 0.53  | ETTA
Text-to-Music Generation  | MusicCaps | FAD        | 1.91  | ETTA
Text-to-Music Generation  | MusicCaps | FD         | 10.06 | ETTA
Text-to-Music Generation  | MusicCaps | FD_openl3  | 92.18 | ETTA
Text-to-Music Generation  | MusicCaps | IS         | 3.32  | ETTA
Text-to-Music Generation  | MusicCaps | KL_passt   | 0.84  | ETTA
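Several of the metrics above (FAD, FD, FD_openl3) are Fréchet distances between Gaussians fit to embeddings of reference and generated audio. As an illustration only, not the paper's evaluation code, a minimal NumPy sketch of that distance (the synthetic arrays stand in for real audio embeddings):

```python
import numpy as np

def frechet_distance(emb_ref, emb_gen):
    """Frechet distance between Gaussians fit to two embedding sets.

    FD = ||mu_r - mu_g||^2 + Tr(S_r + S_g - 2 (S_r S_g)^{1/2}).
    Eigenvalues of a product of PSD covariances are real and
    non-negative, so Tr((S_r S_g)^{1/2}) = sum(sqrt(eigvals)).
    """
    mu_r, mu_g = emb_ref.mean(axis=0), emb_gen.mean(axis=0)
    sig_r = np.cov(emb_ref, rowvar=False)
    sig_g = np.cov(emb_gen, rowvar=False)
    eigs = np.linalg.eigvals(sig_r @ sig_g)
    tr_sqrt = np.sqrt(np.clip(eigs.real, 0.0, None)).sum()
    diff = mu_r - mu_g
    return float(diff @ diff + np.trace(sig_r) + np.trace(sig_g)
                 - 2.0 * tr_sqrt)

# Synthetic stand-ins for embedding sets; the "generated" set has a
# shifted mean, so its distance from the reference should be larger.
rng = np.random.default_rng(0)
ref = rng.normal(0.0, 1.0, size=(4000, 8))
gen = rng.normal(0.3, 1.0, size=(4000, 8))
fd_same = frechet_distance(ref, ref)
fd_diff = frechet_distance(ref, gen)
```

A lower value means the generated distribution is closer to the reference distribution; identical sets score (numerically) zero.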

Related Papers

Visual-Language Model Knowledge Distillation Method for Image Quality Assessment (2025-07-21)
Making Language Model a Hierarchical Classifier and Generator (2025-07-17)
VisionThink: Smart and Efficient Vision Language Model via Reinforcement Learning (2025-07-17)
The Generative Energy Arena (GEA): Incorporating Energy Awareness in Large Language Model (LLM) Human Evaluations (2025-07-17)
Inverse Reinforcement Learning Meets Large Language Model Post-Training: Basics, Advances, and Opportunities (2025-07-17)
Assay2Mol: large language model-based drug design using BioAssay context (2025-07-16)
Describe Anything Model for Visual Question Answering on Text-rich Images (2025-07-16)
InstructFLIP: Exploring Unified Vision-Language Model for Face Anti-spoofing (2025-07-16)