Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Analyzing Generalization of Vision and Language Navigation to Unseen Outdoor Areas

Raphael Schumann, Stefan Riezler

2022-03-25 · ACL 2022 · Vision and Language Navigation
Paper · PDF · Code (official)

Abstract

Vision and language navigation (VLN) is a challenging visually-grounded language understanding task. Given a natural language navigation instruction, a visual agent interacts with a graph-based environment equipped with panorama images and tries to follow the described route. Most prior work has been conducted in indoor scenarios where best results were obtained for navigation on routes that are similar to the training routes, with sharp drops in performance when testing on unseen environments. We focus on VLN in outdoor scenarios and find that in contrast to indoor VLN, most of the gain in outdoor VLN on unseen data is due to features like junction type embedding or heading delta that are specific to the respective environment graph, while image information plays a very minor role in generalizing VLN to unseen outdoor areas. These findings show a bias to specifics of graph representations of urban environments, demanding that VLN tasks grow in scale and diversity of geographical environments.
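To make the abstract's two environment-graph features concrete, here is a minimal Python sketch (not the authors' code; names, bucketing scheme, and angle convention are assumptions) of what a junction type feature and a heading delta might compute for a node in a street graph:

```python
import math

# Assumed bucketing: street-graph nodes grouped by degree (number of
# incident edges), capped at 5-way-or-larger intersections.
JUNCTION_BUCKETS = 5

def junction_one_hot(node_degree: int) -> list[float]:
    """One-hot junction type for a node, by (capped) degree.

    A learned embedding lookup could replace the one-hot vector;
    the key point is that the feature depends only on graph topology,
    not on panorama images.
    """
    idx = min(node_degree, JUNCTION_BUCKETS) - 1
    vec = [0.0] * JUNCTION_BUCKETS
    vec[idx] = 1.0
    return vec

def heading_delta(prev_heading: float, next_heading: float) -> float:
    """Signed turn angle (radians) between consecutive edges on the route,
    wrapped to [-pi, pi). Again purely graph-geometric, image-free."""
    return (next_heading - prev_heading + math.pi) % (2 * math.pi) - math.pi
```

For example, a 4-way intersection approached heading north and exited heading east yields `junction_one_hot(4)` and a heading delta of +π/2. Because both features are functions of the environment graph alone, a model leaning on them can generalize within one city's graph conventions without grounding in the visual input, which is the bias the paper identifies.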

Results

Task                           | Dataset           | Metric               | Value | Model
Vision and Language Navigation | Touchdown Dataset | Task Completion (TC) | 29.1  | ORAR + junction type + heading delta
Vision and Language Navigation | Touchdown Dataset | Task Completion (TC) | 24.2  | ORAR
Vision and Language Navigation | map2seq           | Task Completion (TC) | 46.7  | ORAR + junction type + heading delta
Vision and Language Navigation | map2seq           | Task Completion (TC) | 45.1  | ORAR
Vision and Language Navigation | map2seq           | Task Completion (TC) | 17    | Gated Attention
Vision and Language Navigation | map2seq           | Task Completion (TC) | 14.7  | Rconcat

Related Papers

Rethinking the Embodied Gap in Vision-and-Language Navigation: A Holistic Study of Physical and Visual Disparities (2025-07-17)
NavMorph: A Self-Evolving World Model for Vision-and-Language Navigation in Continuous Environments (2025-06-30)
Grounded Vision-Language Navigation for UAVs with Open-Vocabulary Goal Understanding (2025-06-12)
A Navigation Framework Utilizing Vision-Language Models (2025-06-11)
Disrupting Vision-Language Model-Driven Navigation Services via Adversarial Object Fusion (2025-05-29)
Cross from Left to Right Brain: Adaptive Text Dreamer for Vision-and-Language Navigation (2025-05-27)
FlightGPT: Towards Generalizable and Interpretable UAV Vision-and-Language Navigation with Vision-Language Models (2025-05-19)
Dynam3D: Dynamic Layered 3D Tokens Empower VLM for Vision-and-Language Navigation (2025-05-16)