Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Learning Tri-modal Embeddings for Zero-Shot Soundscape Mapping

Subash Khanal, Srikumar Sastry, Aayush Dhakal, Nathan Jacobs

2023-09-19 · Cross-Modal Retrieval
Paper · PDF · Code (official)

Abstract

We focus on the task of soundscape mapping, which involves predicting the most probable sounds that could be perceived at a particular geographic location. We utilise recent state-of-the-art models to encode geotagged audio, a textual description of the audio, and an overhead image of its capture location using contrastive pre-training. The end result is a shared embedding space for the three modalities, which enables the construction of soundscape maps for any geographic region from textual or audio queries. Using the SoundingEarth dataset, we find that our approach significantly outperforms the existing SOTA, with an improvement of image-to-audio Recall@100 from 0.256 to 0.450. Our code is available at https://github.com/mvrl/geoclap.
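The abstract describes aligning three encoders (overhead image, audio, text) into a shared space via contrastive pre-training. A minimal NumPy sketch of this idea, using a generic CLIP-style symmetric InfoNCE objective summed over the three modality pairs — the paper's actual GeoCLAP objective and hyperparameters may differ, and the function names here are illustrative:

```python
import numpy as np

def info_nce(a, b, temperature=0.07):
    """Symmetric InfoNCE loss between two batches of paired embeddings.
    Row i of `a` is assumed to match row i of `b`."""
    a = a / np.linalg.norm(a, axis=1, keepdims=True)  # L2-normalize
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    logits = a @ b.T / temperature  # (N, N); true pairs lie on the diagonal

    def xent(l):
        # cross-entropy with the diagonal as the target class
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -np.diag(log_probs).mean()

    # average over both retrieval directions (a->b and b->a)
    return 0.5 * (xent(logits) + xent(logits.T))

def tri_modal_loss(img, audio, text):
    """One plausible tri-modal objective: sum of pairwise contrastive
    losses over the three modality pairs (illustrative, not the paper's
    exact formulation)."""
    return info_nce(img, audio) + info_nce(img, text) + info_nce(audio, text)
```

Because all three encoders are pulled toward a common space, any one modality (e.g. a text query) can then be scored against any other (e.g. overhead-image embeddings of a region) by cosine similarity, which is what enables zero-shot soundscape mapping.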

Results

| Task | Dataset | Metric | Value | Model |
| --- | --- | --- | --- | --- |
| Image Retrieval with Multi-Modal Query | SoundingEarth | Image-to-sound R@100 | 0.434 | GeoCLAP |
| Image Retrieval with Multi-Modal Query | SoundingEarth | Median Rank | 159 | GeoCLAP |
| Image Retrieval with Multi-Modal Query | SoundingEarth | Sound-to-image R@100 | 0.434 | GeoCLAP |
| Cross-Modal Information Retrieval | SoundingEarth | Image-to-sound R@100 | 0.434 | GeoCLAP |
| Cross-Modal Information Retrieval | SoundingEarth | Median Rank | 159 | GeoCLAP |
| Cross-Modal Information Retrieval | SoundingEarth | Sound-to-image R@100 | 0.434 | GeoCLAP |
| Cross-Modal Retrieval | SoundingEarth | Image-to-sound R@100 | 0.434 | GeoCLAP |
| Cross-Modal Retrieval | SoundingEarth | Median Rank | 159 | GeoCLAP |
| Cross-Modal Retrieval | SoundingEarth | Sound-to-image R@100 | 0.434 | GeoCLAP |
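The Recall@100 and Median Rank figures above are standard retrieval metrics computed from a query-to-gallery similarity matrix. A minimal sketch, assuming the i-th query's true match is the i-th gallery item (the usual convention for paired cross-modal datasets like SoundingEarth):

```python
import numpy as np

def retrieval_metrics(sim, k=100):
    """Compute Recall@k and median rank from an (N_query, N_gallery)
    similarity matrix where query i's true match is gallery item i."""
    n = sim.shape[0]
    # sort gallery items by descending similarity for each query
    order = np.argsort(-sim, axis=1)
    # rank of the true match (1 = retrieved first)
    ranks = np.array([np.where(order[i] == i)[0][0] + 1 for i in range(n)])
    recall_at_k = float(np.mean(ranks <= k))   # fraction of queries with match in top k
    median_rank = float(np.median(ranks))      # lower is better
    return recall_at_k, median_rank
```

For example, a perfect retriever (identity similarity matrix) yields Recall@k = 1.0 and median rank 1; the table's Median Rank of 159 means that for half the queries, the true match appears within the top 159 gallery items.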

Related Papers

- An analysis of vision-language models for fabric retrieval (2025-07-07)
- Mask-aware Text-to-Image Retrieval: Referring Expression Segmentation Meets Cross-modal Retrieval (2025-06-28)
- Maximal Matching Matters: Preventing Representation Collapse for Robust Cross-Modal Retrieval (2025-06-26)
- Multimodal Medical Image Binding via Shared Text Embeddings (2025-06-22)
- ContextRefine-CLIP for EPIC-KITCHENS-100 Multi-Instance Retrieval Challenge 2025 (2025-06-12)
- FedNano: Toward Lightweight Federated Tuning for Pretrained Multimodal Large Language Models (2025-06-12)
- SA-Person: Text-Based Person Retrieval with Scene-aware Re-ranking (2025-05-30)
- EmotionRankCLAP: Bridging Natural Language Speaking Styles and Ordinal Speech Emotion via Rank-N-Contrast (2025-05-29)