Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Music Transformer

Cheng-Zhi Anna Huang, Ashish Vaswani, Jakob Uszkoreit, Noam Shazeer, Ian Simon, Curtis Hawthorne, Andrew M. Dai, Matthew D. Hoffman, Monica Dinculescu, Douglas Eck

2018-09-12 · ICLR 2019 · Music Modeling · Music Generation
Paper · PDF · Code

Abstract

Music relies heavily on repetition to build structure and meaning. Self-reference occurs on multiple timescales, from motifs to phrases to the reuse of entire sections of music, such as in pieces with ABA structure. The Transformer (Vaswani et al., 2017), a sequence model based on self-attention, has achieved compelling results in many generation tasks that require maintaining long-range coherence. This suggests that self-attention might also be well-suited to modeling music. In musical composition and performance, however, relative timing is critically important. Existing approaches for representing relative positional information in the Transformer modulate attention based on pairwise distance (Shaw et al., 2018). This is impractical for long sequences such as musical compositions, since the memory required for the intermediate relative information is quadratic in the sequence length. We propose an algorithm that reduces this intermediate memory requirement to linear in the sequence length. This enables us to demonstrate that a Transformer with our modified relative attention mechanism can generate minute-long compositions (thousands of steps, four times the length modeled in Oore et al., 2018) with compelling structure, generate continuations that coherently elaborate on a given motif, and, in a seq2seq setup, generate accompaniments conditioned on melodies. We evaluate the Transformer with our relative attention mechanism on two datasets, JSB Chorales and Piano-e-Competition, and obtain state-of-the-art results on the latter.
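The memory reduction described in the abstract comes from a "skewing" operation: instead of materializing a per-pair tensor of relative embeddings, the model multiplies the queries by a single matrix of relative-position embeddings and then pads, reshapes, and slices the result so each logit lands at the right (query, key) position. The following NumPy sketch illustrates the idea; the function name, toy dimensions, and the naive reference implementation are ours, not from the paper.

```python
import numpy as np

def skew(qe):
    # qe: (L, L) matrix Q @ Er.T, where column k holds logits for
    # relative distance k - (L - 1) (so the last column is distance 0).
    L = qe.shape[0]
    padded = np.pad(qe, [(0, 0), (1, 0)])  # prepend a zero column: (L, L+1)
    reshaped = padded.reshape(L + 1, L)    # row-major reshape re-aligns rows
    return reshaped[1:]                    # drop first row: S[i, j] = logit
                                           # for relative distance j - i

rng = np.random.default_rng(0)
L, D = 6, 4
Q = rng.standard_normal((L, D))    # queries
Er = rng.standard_normal((L, D))   # Er[k] embeds relative distance k - (L - 1)

S = skew(Q @ Er.T)  # O(L*D) intermediate memory instead of O(L^2 * D)

# Naive reference: gather the relative embedding for every (i, j) pair.
# Only the lower triangle matters; the upper triangle is masked out
# by causal attention anyway.
S_naive = np.zeros((L, L))
for i in range(L):
    for j in range(i + 1):
        S_naive[i, j] = Q[i] @ Er[j - i + L - 1]

assert np.allclose(np.tril(S), np.tril(S_naive))
```

In the actual model these relative logits are added to the usual Q·Kᵀ scores before the softmax; the point of the trick is that only the (L, L) product and its padded reshape are ever materialized, never an (L, L, D) tensor.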

Results

Task           | Dataset      | Metric | Value | Model
Music Modeling | JSB Chorales | NLL    | 0.335 | Music Transformer

Related Papers

- WildFX: A DAW-Powered Pipeline for In-the-Wild Audio FX Graph Modeling (2025-07-14)
- MusiScene: Leveraging MU-LLaMA for Scene Imagination and Enhanced Video Background Music Generation (2025-07-08)
- TOMI: Transforming and Organizing Music Ideas for Multi-Track Compositions with Full-Song Structure (2025-06-29)
- Exploring Adapter Design Tradeoffs for Low Resource Music Generation (2025-06-26)
- Let Your Video Listen to Your Music! (2025-06-23)
- MuseControlLite: Multifunctional Music Generation with Lightweight Conditioners (2025-06-23)
- Benchmarking Music Generation Models and Metrics via Human Preference Studies (2025-06-23)
- AI-Generated Song Detection via Lyrics Transcripts (2025-06-23)