Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Vision Transformers are Parameter-Efficient Audio-Visual Learners

Yan-Bo Lin, Yi-Lin Sung, Jie Lei, Mohit Bansal, Gedas Bertasius

2022-12-15 · CVPR 2023
Ranked #1 on Audio-Visual Question Answering (MUSIC-AVQA-v2.0)

Paper · PDF · Code (official)

Abstract

Vision transformers (ViTs) have achieved impressive results on various computer vision tasks in the last several years. In this work, we study the capability of frozen ViTs, pretrained only on visual data, to generalize to audio-visual data without finetuning any of their original parameters. To do so, we propose a latent audio-visual hybrid (LAVISH) adapter that adapts pretrained ViTs to audio-visual tasks by injecting a small number of trainable parameters into every layer of a frozen ViT. To efficiently fuse visual and audio cues, our LAVISH adapter uses a small set of latent tokens, which form an attention bottleneck, thus eliminating the quadratic cost of standard cross-attention. Compared to existing modality-specific audio-visual methods, our approach achieves competitive or even better performance on various audio-visual tasks while using fewer tunable parameters and without relying on costly audio pretraining or external audio encoders. Our code is available at https://genjib.github.io/project_page/LAVISH/
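The attention-bottleneck idea in the abstract can be illustrated with a minimal numpy sketch: instead of letting every visual token attend to every audio token (cost proportional to Nv × Na), a small set of L latent tokens first summarizes the audio, and the visual tokens then attend only to those latents (cost proportional to L × Na + Nv × L). This is a simplified, single-head, projection-free illustration of the general technique, not the paper's actual LAVISH adapter, which inserts trainable projections inside each frozen ViT layer; all names here are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # scaled dot-product attention: (nq, d), (nk, d), (nk, d) -> (nq, d)
    scores = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(scores, axis=-1) @ v

def latent_bottleneck_fusion(visual, audio, latents):
    """Fuse audio cues into visual tokens through a latent bottleneck.

    Full cross-attention would cost O(Nv * Na); routing through L latent
    tokens costs O(L * Na + Nv * L), which is much cheaper when L << Nv, Na.
    """
    summarized = attention(latents, audio, audio)      # latents read audio: (L, d)
    fused = attention(visual, summarized, summarized)  # visual reads latents: (Nv, d)
    return visual + fused                              # residual connection

rng = np.random.default_rng(0)
d, Nv, Na, L = 16, 196, 128, 4          # token dim, visual/audio/latent counts
visual = rng.normal(size=(Nv, d))
audio = rng.normal(size=(Na, d))
latents = rng.normal(size=(L, d))       # in LAVISH these would be trainable

out = latent_bottleneck_fusion(visual, audio, latents)
print(out.shape)  # (196, 16)
```

With L = 4 latents, the two attention maps total 4×128 + 196×4 = 1,296 entries versus 196×128 = 25,088 for direct cross-attention, which is where the claimed efficiency comes from.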

Results

Task                              | Dataset         | Metric   | Value | Model
Audio-visual Question Answering  | MUSIC-AVQA      | Accuracy | 77.08 | LAVISH
Audio-visual Question Answering  | MUSIC-AVQA v2.0 | Accuracy | 73.18 | LAVISH

Related Papers

Learning Sparsity for Effective and Efficient Music Performance Question Answering (2025-06-02)
Music's Multimodal Complexity in AVQA: Why We Need More than General Multimodal LLMs (2025-05-27)
FortisAVQA and MAVEN: a Benchmark Dataset and Debiasing Framework for Robust Multimodal Reasoning (2025-04-01)
PAVE: Patching and Adapting Video Large Language Models (2025-03-25)
Question-Aware Gaussian Experts for Audio-Visual Question Answering (2025-03-06)
AVQACL: A Novel Benchmark for Audio-Visual Question Answering Continual Learning (2025-01-01)
Patch-level Sounding Object Tracking for Audio-Visual Question Answering (2024-12-14)
SaSR-Net: Source-Aware Semantic Representation Network for Enhancing Audio-Visual Question Answering (2024-11-07)