Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


MeLFusion: Synthesizing Music from Image and Language Cues using Diffusion Models

Sanjoy Chowdhury, Sayan Nag, K J Joseph, Balaji Vasan Srinivasan, Dinesh Manocha

2024-06-07 · CVPR 2024 · Text-to-Music Generation
Paper · PDF · Code (official)

Abstract

Music is a universal language that can communicate emotions and feelings. It forms an essential part of the whole spectrum of creative media, ranging from movies to social media posts. Machine learning models that can synthesize music are predominantly conditioned on textual descriptions of it. Inspired by how musicians compose music not just from a movie script, but also through visualizations, we propose MeLFusion, a model that can effectively use cues from a textual description and the corresponding image to synthesize music. MeLFusion is a text-to-music diffusion model with a novel "visual synapse", which effectively infuses the semantics from the visual modality into the generated music. To facilitate research in this area, we introduce a new dataset, MeLBench, and propose a new evaluation metric, IMSM. Our exhaustive experimental evaluation suggests that adding visual information to the music synthesis pipeline significantly improves the quality of generated music, measured both objectively and subjectively, with a relative gain of up to 67.98% on the FAD score. We hope that our work will draw attention to this pragmatic, yet relatively under-explored, research area.

Results

Task                     | Dataset   | Metric   | Value | Model
Text-to-Music Generation | MusicCaps | FAD      | 1.12  | MeLFusion (image-conditioned)
Text-to-Music Generation | MusicCaps | FD       | 22.65 | MeLFusion (image-conditioned)
Text-to-Music Generation | MusicCaps | KL_passt | 0.89  | MeLFusion (image-conditioned)
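The FAD values above are Fréchet Audio Distance scores: the Fréchet distance between Gaussian fits of embedding distributions for reference and generated audio (lower is better). A minimal sketch of the computation, assuming embeddings have already been extracted by some audio encoder (the function name and array shapes here are illustrative, not from the paper's code):

```python
import numpy as np
from scipy import linalg

def frechet_audio_distance(emb_real: np.ndarray, emb_gen: np.ndarray) -> float:
    """Fréchet distance between Gaussians fit to two embedding sets.

    emb_real, emb_gen: arrays of shape (num_clips, embedding_dim).
    """
    # Fit a Gaussian (mean + covariance) to each embedding set
    mu1, mu2 = emb_real.mean(axis=0), emb_gen.mean(axis=0)
    sigma1 = np.cov(emb_real, rowvar=False)
    sigma2 = np.cov(emb_gen, rowvar=False)

    # Matrix square root of the covariance product
    covmean, _ = linalg.sqrtm(sigma1 @ sigma2, disp=False)
    covmean = covmean.real  # discard tiny imaginary parts from numerics

    diff = mu1 - mu2
    # ||mu1 - mu2||^2 + Tr(S1 + S2 - 2 * sqrt(S1 S2))
    return float(diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean))
```

A relative FAD gain such as the 67.98% reported above would then be computed as `(fad_baseline - fad_model) / fad_baseline`.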

Related Papers

MuseControlLite: Multifunctional Music Generation with Lightweight Conditioners (2025-06-23)
Diff-TONE: Timestep Optimization for iNstrument Editing in Text-to-Music Diffusion Models (2025-06-18)
Auto-Regressive vs Flow-Matching: a Comparative Study of Modeling Paradigms for Text-to-Music Generation (2025-06-10)
TokenSynth: A Token-based Neural Synthesizer for Instrument Cloning and Text-to-Instrument (2025-02-13)
Diffusion based Text-to-Music Generation with Global and Local Text based Conditioning (2025-01-24)
ETTA: Elucidating the Design Space of Text-to-Audio Models (2024-12-26)
Long-Form Text-to-Music Generation with Adaptive Prompts: A Case Study in Tabletop Role-Playing Games Soundtracks (2024-11-06)
MusicFlow: Cascaded Flow Matching for Text Guided Music Generation (2024-10-27)