Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Self-supervised Audiovisual Representation Learning for Remote Sensing Data

Konrad Heidler, Lichao Mou, Di Hu, Pu Jin, Guangyao Li, Chuang Gan, Ji-Rong Wen, Xiao Xiang Zhu

2021-08-02 · Cross-Modal Retrieval · Representation Learning · Transfer Learning
Paper · PDF · Code (official)

Abstract

Many current deep learning approaches make extensive use of backbone networks pre-trained on large datasets like ImageNet, which are then fine-tuned to perform a certain task. In remote sensing, the lack of comparable large annotated datasets and the wide diversity of sensing platforms impede similar developments. In order to contribute towards the availability of pre-trained backbone networks in remote sensing, we devise a self-supervised approach for pre-training deep neural networks. By exploiting the correspondence between geo-tagged audio recordings and remote sensing imagery, this is done in a completely label-free manner, eliminating the need for laborious manual annotation. For this purpose, we introduce the SoundingEarth dataset, which consists of co-located aerial imagery and audio samples from all around the world. Using this dataset, we then pre-train ResNet models to map samples from both modalities into a common embedding space, which encourages the models to understand key properties of a scene that influence both visual and auditory appearance. To validate the usefulness of the proposed approach, we evaluate the transfer learning performance of the pre-trained weights obtained with our approach against weights obtained through other means. By fine-tuning the models on a number of commonly used remote sensing datasets, we show that our approach outperforms existing pre-training strategies for remote sensing imagery. The dataset, code and pre-trained model weights will be available at https://github.com/khdlr/SoundingEarth.
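The pre-training idea in the abstract — mapping co-located imagery and audio into a common embedding space — can be sketched with a symmetric InfoNCE-style contrastive objective. This is a minimal NumPy illustration, not the paper's actual loss or training code (the `temperature` value and the loss formulation are assumptions; the authors' implementation is in the linked repository):

```python
import numpy as np

def symmetric_contrastive_loss(img_emb, aud_emb, temperature=0.07):
    """Symmetric InfoNCE-style loss over a batch of paired
    image/audio embeddings (rows are assumed L2-normalised).
    Matched pairs sit on the diagonal of the similarity matrix."""
    sim = img_emb @ aud_emb.T / temperature  # (i, j): image i vs. audio j

    def log_softmax(x, axis):
        x = x - x.max(axis=axis, keepdims=True)
        return x - np.log(np.exp(x).sum(axis=axis, keepdims=True))

    n = sim.shape[0]
    diag = np.arange(n)
    # image -> audio direction (rows) and audio -> image direction (columns)
    i2a = -log_softmax(sim, axis=1)[diag, diag]
    a2i = -log_softmax(sim, axis=0)[diag, diag]
    return float((i2a + a2i).mean() / 2)

def normalise(x):
    return x / np.linalg.norm(x, axis=1, keepdims=True)

rng = np.random.default_rng(0)
# toy "embeddings": when matched pairs coincide, the loss is near zero
emb = normalise(rng.normal(size=(8, 32)))
low = symmetric_contrastive_loss(emb, emb)
# unrelated pairs: the diagonal is no longer favoured, so the loss rises
high = symmetric_contrastive_loss(emb, normalise(rng.normal(size=(8, 32))))
print(low < high)  # True
```

Pulling the loss in both directions (image-to-audio and audio-to-image) is what makes the shared embedding space usable for the cross-modal retrieval results reported below.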

Results

| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Image Retrieval with Multi-Modal Query | SoundingEarth | Image-to-sound R@100 | 0.291 | ResNet-18 |
| Image Retrieval with Multi-Modal Query | SoundingEarth | Median Rank | 565 | ResNet-18 |
| Image Retrieval with Multi-Modal Query | SoundingEarth | Sound-to-image R@100 | 0.25 | ResNet-18 |
| Cross-Modal Information Retrieval | SoundingEarth | Image-to-sound R@100 | 0.291 | ResNet-18 |
| Cross-Modal Information Retrieval | SoundingEarth | Median Rank | 565 | ResNet-18 |
| Cross-Modal Information Retrieval | SoundingEarth | Sound-to-image R@100 | 0.25 | ResNet-18 |
| Cross-Modal Retrieval | SoundingEarth | Image-to-sound R@100 | 0.291 | ResNet-18 |
| Cross-Modal Retrieval | SoundingEarth | Median Rank | 565 | ResNet-18 |
| Cross-Modal Retrieval | SoundingEarth | Sound-to-image R@100 | 0.25 | ResNet-18 |
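The metrics in the table — R@100 (fraction of queries whose true match ranks in the top 100) and Median Rank (median position of the true match) — are standard retrieval measures. A sketch of how they are typically computed from a similarity matrix; the paper's exact evaluation protocol may differ:

```python
import numpy as np

def retrieval_metrics(sim, k=100):
    """Recall@k and median rank for cross-modal retrieval.
    sim[i, j] = similarity of query i to gallery item j; the
    correct match for query i is assumed to be gallery item i."""
    n = sim.shape[0]
    # sort gallery items by decreasing similarity for each query
    order = np.argsort(-sim, axis=1)
    # rank (1 = best) at which the true match appears for each query
    ranks = np.where(order == np.arange(n)[:, None])[1] + 1
    recall_at_k = float((ranks <= k).mean())
    median_rank = float(np.median(ranks))
    return recall_at_k, median_rank

# toy example: 4 queries where the true match always scores highest
sim = np.eye(4) + 0.1 * np.random.default_rng(1).random((4, 4))
r, mr = retrieval_metrics(sim, k=1)
print(r, mr)  # 1.0 1.0
```

With a large gallery, a median rank of 565 means that for half the queries the true match appears within the top 565 retrieved items.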

Related Papers

Touch in the Wild: Learning Fine-Grained Manipulation with a Portable Visuo-Tactile Gripper (2025-07-20)
RaMen: Multi-Strategy Multi-Modal Learning for Bundle Construction (2025-07-18)
Spectral Bellman Method: Unifying Representation and Exploration in RL (2025-07-17)
Boosting Team Modeling through Tempo-Relational Representation Learning (2025-07-17)
Disentangling coincident cell events using deep transfer learning and compressive sensing (2025-07-17)
Similarity-Guided Diffusion for Contrastive Sequential Recommendation (2025-07-16)
Are encoders able to learn landmarkers for warm-starting of Hyperparameter Optimization? (2025-07-16)
Language-Guided Contrastive Audio-Visual Masked Autoencoder with Automatically Generated Audio-Visual-Text Triplets from Videos (2025-07-16)