Towards Learning a Generalist Model for Embodied Navigation

Duo Zheng, Shijia Huang, Lin Zhao, Yiwu Zhong, Liwei Wang

2023-12-04 · CVPR 2024
Tasks: Question Answering, Visual Navigation, Navigate, 3D Question Answering (3D-QA), Embodied Question Answering
Paper · PDF · Code (official)

Abstract

Building a generalist agent that can interact with the world is an intriguing goal for AI systems, and has spurred research on embodied navigation, where an agent must navigate according to instructions or respond to queries. Despite major progress, previous works primarily focus on task-specific agents that lack generalizability to unseen scenarios. Recently, LLMs have demonstrated remarkable capabilities across various fields and offer a promising opportunity for embodied navigation. Drawing on this, we propose NaviLLM, the first generalist model for embodied navigation. It adapts LLMs to embodied navigation by introducing schema-based instruction, which flexibly casts various tasks as generation problems and thereby unifies a wide range of tasks. This approach allows us to integrate diverse data sources from various datasets into training, equipping NaviLLM with the wide range of capabilities required for embodied navigation. We conduct extensive experiments to evaluate the performance and generalizability of our model. The results demonstrate that our unified model achieves state-of-the-art performance on CVDN, SOON, and ScanQA; notably, it surpasses the previous state-of-the-art method by a significant margin of 29% in goal progress on CVDN. Moreover, our model demonstrates strong generalizability and achieves impressive results on unseen tasks, e.g., embodied question answering and 3D captioning.
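
The abstract describes schema-based instruction only at a high level. As a minimal sketch of the idea, the snippet below casts heterogeneous embodied tasks into a single text-generation interface; the schema templates, task names, and the build_prompt helper are hypothetical illustrations, not NaviLLM's actual code.

```python
# Hypothetical sketch of schema-based instruction: every embodied task is
# rendered as plain text so the LLM sees one next-token-generation problem.
# The templates, field names, and build_prompt helper below are illustrative
# assumptions, not NaviLLM's actual schemas.

TASK_SCHEMAS = {
    "navigation": (
        "You are navigating an indoor environment.\n"
        "Instruction: {instruction}\n"
        "Observation: {observation}\n"
        "Select the next viewpoint:"
    ),
    "question_answering": (
        "You are an embodied agent in a 3D scene.\n"
        "Observation: {observation}\n"
        "Question: {instruction}\n"
        "Answer:"
    ),
}


def build_prompt(task: str, instruction: str, observation: str) -> str:
    """Render one example as text; tasks from different datasets share
    this single generation interface and can be mixed in one training run."""
    return TASK_SCHEMAS[task].format(instruction=instruction,
                                     observation=observation)


# Examples from different benchmarks become interchangeable training samples:
print(build_prompt("navigation",
                   "Walk past the sofa and stop at the kitchen door.",
                   "<encoded panoramic views>"))
print(build_prompt("question_answering",
                   "What color is the chair next to the desk?",
                   "<encoded scene views>"))
```

Because every task reduces to next-token generation over the same prompt format, examples from different datasets can be mixed freely in one training run, which is what gives the model its multi-task coverage.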

Results

Task | Dataset | Metric | Value | Model
Visual Question Answering (VQA) | ScanQA Test w/ objects | BLEU-1 | 39.73 | NaviLLM
Visual Question Answering (VQA) | ScanQA Test w/ objects | BLEU-4 | 13.9 | NaviLLM
Visual Question Answering (VQA) | ScanQA Test w/ objects | CIDEr | 80.77 | NaviLLM
Visual Question Answering (VQA) | ScanQA Test w/ objects | Exact Match | 26.27 | NaviLLM
Visual Question Answering (VQA) | ScanQA Test w/ objects | METEOR | 16.56 | NaviLLM
Visual Question Answering (VQA) | ScanQA Test w/ objects | ROUGE | 40.23 | NaviLLM
Visual Navigation | Cooperative Vision-and-Dialogue Navigation | dist_to_end_reduction | 7.9 | NaviLLM
Visual Navigation | Cooperative Vision-and-Dialogue Navigation | spl | 0.09 | NaviLLM
Visual Navigation | R2R | spl | 0.6 | NaviLLM
Visual Navigation | SOON Test | Nav-SPL | 26.26 | NaviLLM
Visual Navigation | SOON Test | SR | 35.04 | NaviLLM
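
For reference, the navigation metrics above follow standard embodied-navigation conventions: SR is success rate, dist_to_end_reduction is CVDN's goal-progress metric (the reduction, in meters, of the agent's distance to the goal), and SPL is Success weighted by Path Length from Anderson et al. (2018), reported either as a fraction or a percentage as in the table. A minimal sketch of the SPL computation, with made-up episode data:

```python
# Success weighted by Path Length (Anderson et al., 2018):
#   SPL = (1/N) * sum_i( S_i * l_i / max(p_i, l_i) )
# where S_i is 1 if episode i succeeded (else 0), l_i is the shortest-path
# length to the goal, and p_i is the length of the path the agent took.

def spl(successes, shortest_lengths, path_lengths):
    """Compute SPL over a batch of episodes (inputs are parallel lists)."""
    n = len(successes)
    return sum(
        s * l / max(p, l)
        for s, l, p in zip(successes, shortest_lengths, path_lengths)
    ) / n


# Made-up episodes: two successes (one with a detour), one failure.
print(spl([1, 1, 0], [10.0, 8.0, 12.0], [12.5, 8.0, 30.0]))  # 0.6
```

Note how SPL penalizes the successful episode that took a detour (10.0 m shortest path, 12.5 m actual) relative to plain success rate, which would score 2/3 here.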

Related Papers

From Roots to Rewards: Dynamic Tree Reasoning with RL (2025-07-17)
Enter the Mind Palace: Reasoning and Planning for Long-term Active Embodied Question Answering (2025-07-17)
Vision-and-Language Training Helps Deploy Taxonomic Knowledge but Does Not Fundamentally Alter It (2025-07-17)
City-VLM: Towards Multidomain Perception Scene Understanding via Multimodal Incomplete Learning (2025-07-17)
Describe Anything Model for Visual Question Answering on Text-rich Images (2025-07-16)
Is This Just Fantasy? Language Model Representations Reflect Human Judgments of Event Plausibility (2025-07-16)
Vision-based Perception for Autonomous Vehicles in Obstacle Avoidance Scenarios (2025-07-16)
CogDDN: A Cognitive Demand-Driven Navigation with Decision Optimization and Dual-Process Thinking (2025-07-15)