Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


LaRa: Latents and Rays for Multi-Camera Bird's-Eye-View Semantic Segmentation

Florent Bartoccioni, Éloi Zablocki, Andrei Bursuc, Patrick Pérez, Matthieu Cord, Karteek Alahari

2022-06-27 · Autonomous Driving · Semantic Segmentation · Bird's-Eye View Semantic Segmentation · Depth Estimation · Monocular Depth Estimation

Paper · PDF · Code (official)

Abstract

Recent works in autonomous driving have widely adopted the bird's-eye-view (BEV) semantic map as an intermediate representation of the world. Online prediction of these BEV maps involves non-trivial operations such as multi-camera data extraction, as well as fusion and projection into a common top-view grid. This is usually done with error-prone geometric operations (e.g., homography or back-projection from monocular depth estimation) or expensive direct dense mapping between image pixels and BEV pixels (e.g., with MLPs or attention). In this work, we present 'LaRa', an efficient encoder-decoder, transformer-based model for vehicle semantic segmentation from multiple cameras. Our approach uses a system of cross-attention to aggregate information over multiple sensors into a compact, yet rich, collection of latent representations. These latent representations, after being processed by a series of self-attention blocks, are then reprojected with a second cross-attention into the BEV space. We demonstrate that our model outperforms the best previous transformer-based works on nuScenes. The code and trained models are available at https://github.com/valeoai/LaRa
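The three-stage pipeline the abstract describes (cross-attention from camera features into a fixed set of latents, self-attention over the latents, then cross-attention from BEV grid queries back into the latents) can be sketched in PyTorch. This is a minimal illustration, not the paper's implementation: the class name, layer counts, and all dimensions are assumptions chosen to keep the example small.

```python
import torch
import torch.nn as nn


class LaRaStyleSketch(nn.Module):
    """Illustrative sketch of a LaRa-style BEV model (hypothetical, not the
    official architecture): learned latents attend to multi-camera features,
    are refined by self-attention, then queried by a learned BEV grid."""

    def __init__(self, feat_dim=64, latent_dim=64, num_latents=128,
                 bev_size=16, num_self_blocks=2, heads=4):
        super().__init__()
        self.bev_size = bev_size
        # Compact set of learned latent vectors shared across the batch.
        self.latents = nn.Parameter(torch.randn(num_latents, latent_dim))
        # Stage 1: cross-attention aggregating camera features into latents.
        self.in_xattn = nn.MultiheadAttention(
            latent_dim, heads, kdim=feat_dim, vdim=feat_dim, batch_first=True)
        # Stage 2: self-attention blocks processing the latents.
        self.self_blocks = nn.ModuleList([
            nn.TransformerEncoderLayer(
                latent_dim, heads, dim_feedforward=128, batch_first=True)
            for _ in range(num_self_blocks)])
        # Stage 3: learned BEV queries cross-attend into the latents.
        self.bev_queries = nn.Parameter(torch.randn(bev_size * bev_size, latent_dim))
        self.out_xattn = nn.MultiheadAttention(latent_dim, heads, batch_first=True)
        # Per-cell vehicle segmentation logit.
        self.head = nn.Linear(latent_dim, 1)

    def forward(self, cam_feats):
        # cam_feats: (B, n_cams * H * W, feat_dim), flattened multi-camera features.
        B = cam_feats.shape[0]
        z = self.latents.unsqueeze(0).expand(B, -1, -1)
        z, _ = self.in_xattn(z, cam_feats, cam_feats)   # fuse all cameras into latents
        for blk in self.self_blocks:
            z = blk(z)                                   # refine latents
        q = self.bev_queries.unsqueeze(0).expand(B, -1, -1)
        bev, _ = self.out_xattn(q, z, z)                 # reproject latents onto BEV grid
        logits = self.head(bev)                          # (B, bev_size^2, 1)
        return logits.view(B, 1, self.bev_size, self.bev_size)
```

The key design point the abstract highlights is that the latent set is much smaller than the total number of camera pixels, so attention cost scales with the number of latents rather than a dense pixel-to-BEV mapping; a real system would also inject camera geometry (the "rays") into the input features, which this sketch omits.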

Results

Task | Dataset | Metric | Value | Model
Semantic Segmentation | nuScenes | IoU veh - 224x480 - No vis filter - 100x100 at 0.5 | 35.4 | LaRa
Semantic Segmentation | nuScenes | IoU veh - 224x480 - Vis filter. - 100x100 at 0.5 | 38.9 | LaRa
Bird's-Eye View Semantic Segmentation | nuScenes | IoU veh - 224x480 - No vis filter - 100x100 at 0.5 | 35.4 | LaRa
Bird's-Eye View Semantic Segmentation | nuScenes | IoU veh - 224x480 - Vis filter. - 100x100 at 0.5 | 38.9 | LaRa

Related Papers

SeC: Advancing Complex Video Object Segmentation via Progressive Concept Construction (2025-07-21)
GEMINUS: Dual-aware Global and Scene-Adaptive Mixture-of-Experts for End-to-End Autonomous Driving (2025-07-19)
AGENTS-LLM: Augmentative GENeration of Challenging Traffic Scenarios with an Agentic LLM Framework (2025-07-18)
World Model-Based End-to-End Scene Generation for Accident Anticipation in Autonomous Driving (2025-07-17)
Orbis: Overcoming Challenges of Long-Horizon Prediction in Driving World Models (2025-07-17)
Channel-wise Motion Features for Efficient Motion Segmentation (2025-07-17)
LaViPlan: Language-Guided Visual Path Planning with RLVR (2025-07-17)
DiffOSeg: Omni Medical Image Segmentation via Multi-Expert Collaboration Diffusion Model (2025-07-17)