Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Consistency of Implicit and Explicit Features Matters for Monocular 3D Object Detection

Qian Ye, Ling Jiang, Wang Zhen, Yuyang Du

2022-07-16 · Tasks: Monocular 3D Object Detection, 3D Object Detection, Object Detection, Autonomous Driving

Paper · PDF

Abstract

Low-cost autonomous agents, including autonomous driving vehicles, chiefly adopt monocular 3D object detection to perceive the surrounding environment. This paper studies 3D intermediate representation methods, which generate intermediate 3D features for subsequent tasks. For example, the 3D features can be taken as input not only for detection, but also for end-to-end prediction and/or planning that requires a bird's-eye-view feature representation. In this study, we found that when generating the 3D representation, previous methods do not maintain consistency between the objects' implicit poses in the latent space, especially orientations, and the explicitly observed poses in Euclidean space, which can substantially hurt model performance. To tackle this problem, we present a novel monocular detection method, the first to be aware of the poses and to purposefully guarantee that they are consistent between the implicit and explicit features. Additionally, we introduce a local ray attention mechanism to efficiently transform image features to voxels at accurate 3D locations. Third, we propose a handcrafted Gaussian positional encoding function, which outperforms the sinusoidal encoding function while retaining the benefit of being continuous. Results show that our method improves on the state-of-the-art 3D intermediate representation method by 3.15%. We are ranked 1st among all reported monocular methods on both the 3D and BEV detection benchmarks on the KITTI leaderboard as of the result's submission time.
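The abstract contrasts a handcrafted Gaussian positional encoding with the standard sinusoidal one. The paper's exact function is not reproduced on this page, so the sketch below is only illustrative: it assumes evenly spaced Gaussian centers over a depth range and a shared width, both of which are assumptions rather than the authors' actual parameterization.

```python
import numpy as np

def sinusoidal_encoding(pos, d=8, base=10000.0):
    """Standard sinusoidal positional encoding (Transformer-style):
    interleaved sin/cos at geometrically spaced frequencies."""
    i = np.arange(d // 2)
    freqs = pos / base ** (2 * i / d)
    return np.concatenate([np.sin(freqs), np.cos(freqs)])

def gaussian_encoding(pos, d=8, lo=0.0, hi=70.0, sigma=None):
    """Illustrative Gaussian positional encoding: one radial-basis
    bump per dimension, centered at evenly spaced positions.
    The centers mu and the shared width sigma are assumptions,
    not the paper's exact handcrafted function."""
    mu = np.linspace(lo, hi, d)      # evenly spaced centers (assumed)
    if sigma is None:
        sigma = (hi - lo) / d        # shared width (assumed)
    return np.exp(-((pos - mu) ** 2) / (2 * sigma ** 2))

# Both map a scalar position (e.g. a depth in meters) to a
# d-dimensional vector that varies continuously with position.
enc = gaussian_encoding(35.0)
```

Like the sinusoidal function, the Gaussian form is continuous in the input position, which is the property the abstract highlights; unlike sinusoids, each output dimension responds to a local neighborhood of positions.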

Results

Task | Dataset | Metric | Value | Model
Object Detection | KITTI Cars Easy | AP Easy | 31.55 | CIE
Object Detection | KITTI Cars Moderate | AP Moderate | 20.95 | CIE
Object Detection | KITTI Cars Hard | AP Hard | 17.83 | CIE
3D Object Detection | KITTI Cars Easy | AP Easy | 31.55 | CIE
3D Object Detection | KITTI Cars Moderate | AP Moderate | 20.95 | CIE
3D Object Detection | KITTI Cars Hard | AP Hard | 17.83 | CIE
2D Object Detection | KITTI Cars Easy | AP Easy | 31.55 | CIE
2D Object Detection | KITTI Cars Moderate | AP Moderate | 20.95 | CIE
2D Object Detection | KITTI Cars Hard | AP Hard | 17.83 | CIE

Related Papers

GEMINUS: Dual-aware Global and Scene-Adaptive Mixture-of-Experts for End-to-End Autonomous Driving (2025-07-19)
AGENTS-LLM: Augmentative GENeration of Challenging Traffic Scenarios with an Agentic LLM Framework (2025-07-18)
World Model-Based End-to-End Scene Generation for Accident Anticipation in Autonomous Driving (2025-07-17)
Orbis: Overcoming Challenges of Long-Horizon Prediction in Driving World Models (2025-07-17)
Channel-wise Motion Features for Efficient Motion Segmentation (2025-07-17)
LaViPlan: Language-Guided Visual Path Planning with RLVR (2025-07-17)
A Real-Time System for Egocentric Hand-Object Interaction Detection in Industrial Domains (2025-07-17)
RS-TinyNet: Stage-wise Feature Fusion Network for Detecting Tiny Objects in Remote Sensing Images (2025-07-17)