Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.

Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


TransCG: A Large-Scale Real-World Dataset for Transparent Object Depth Completion and a Grasping Baseline

Hongjie Fang, Hao-Shu Fang, Sheng Xu, Cewu Lu

2022-02-17 · Depth Completion · Robotic Grasping · Transparent Object Depth Estimation · Transparent Objects

Paper · PDF · Code (official)

Abstract

Transparent objects are common in daily life and are frequently handled on automated production lines. Robust vision-based robotic grasping and manipulation of these objects would therefore benefit automation. However, most current grasping algorithms fail on transparent objects because they rely heavily on the depth image, while ordinary depth sensors usually cannot produce accurate depth for transparent surfaces owing to the reflection and refraction of light. In this work, we address this issue by contributing a large-scale real-world dataset for transparent object depth completion, which contains 57,715 RGB-D images from 130 different scenes. Our dataset is the first large-scale, real-world dataset that provides ground-truth depth, surface normals, and transparent masks in diverse and cluttered scenes. Cross-domain experiments show that our dataset is more general and enables better generalization in trained models. Moreover, we propose an end-to-end depth completion network that takes the RGB image and the inaccurate depth map as inputs and outputs a refined depth map. Experiments demonstrate superior efficacy, efficiency, and robustness of our method over previous works, and it can process high-resolution images under limited hardware resources. Real-robot experiments show that our method can also be applied robustly to grasping novel transparent objects. The full dataset and our method are publicly available at www.graspnet.net/transcg
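The interface described in the abstract — RGB image plus an incomplete sensor depth map in, refined depth map out — can be illustrated with a naive stand-in. The sketch below is not the paper's DFNet; it is a hypothetical baseline that fills invalid (zero) depth pixels, such as those on transparent surfaces, with the mean of their valid 4-neighbours, iterating until the hole closes. A learned model like DFNet replaces this heuristic and additionally exploits the RGB image (unused here).

```python
import numpy as np

def complete_depth_naive(rgb, raw_depth, iterations=50):
    """Illustrative stand-in for a learned depth-completion model.
    Raw sensor depth is zero on transparent surfaces; each invalid pixel
    is filled with the mean of its valid 4-neighbours, repeated until
    the hole closes. `rgb` is accepted to mirror the RGB-D interface
    but is ignored by this heuristic."""
    depth = raw_depth.astype(float).copy()
    for _ in range(iterations):
        invalid = depth == 0
        if not invalid.any():
            break
        padded = np.pad(depth, 1)
        # stack the 4-neighbourhood (up, down, left, right) of every pixel
        neigh = np.stack([padded[:-2, 1:-1], padded[2:, 1:-1],
                          padded[1:-1, :-2], padded[1:-1, 2:]])
        valid = neigh > 0
        counts = valid.sum(axis=0)
        fill = np.where(counts > 0,
                        neigh.sum(axis=0) / np.maximum(counts, 1), 0.0)
        depth[invalid] = fill[invalid]
    return depth

# a 5x5 depth map with a hole where a transparent object defeated the sensor
raw = np.full((5, 5), 0.8)
raw[1:4, 1:4] = 0.0
completed = complete_depth_naive(None, raw)
```

Holes shrink from the rim inward, one ring per iteration, which is why the fill is run to a fixed point rather than once.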

Results

Task              Dataset   Metric         Value   Model
Depth Estimation  TransCG   delta < 1.05   83.76   DFNet
Depth Estimation  TransCG   delta < 1.10   95.67   DFNet
Depth Estimation  TransCG   delta < 1.25   99.71   DFNet
Depth Estimation  TransCG   RMSE           0.018   DFNet
Depth Estimation  TransCG   REL            0.027   DFNet
Depth Estimation  TransCG   MAE            0.012   DFNet

(The same DFNet results are also indexed under the "3D" and "3D Depth Estimation" task tags.)
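The metrics in the table above are standard for depth completion: MAE and RMSE are absolute errors in metres, REL is the mean relative error, and each delta threshold is the percentage of pixels whose prediction-to-ground-truth ratio (taken in whichever direction exceeds 1) falls below the threshold. A minimal sketch of how they are computed, assuming evaluation is restricted to pixels with valid ground truth:

```python
import numpy as np

def depth_metrics(pred, gt, mask=None):
    """Standard depth-completion metrics. `pred` and `gt` are depth maps
    in metres; `mask` selects the pixels to evaluate (e.g. a
    transparent-object mask). Delta metrics are reported in percent."""
    if mask is None:
        mask = gt > 0  # evaluate only where ground truth is valid
    p, g = pred[mask], gt[mask]
    ratio = np.maximum(p / g, g / p)   # symmetric accuracy ratio
    return {
        "MAE":   float(np.mean(np.abs(p - g))),
        "RMSE":  float(np.sqrt(np.mean((p - g) ** 2))),
        "REL":   float(np.mean(np.abs(p - g) / g)),
        "d1.05": float(np.mean(ratio < 1.05) * 100),
        "d1.10": float(np.mean(ratio < 1.10) * 100),
        "d1.25": float(np.mean(ratio < 1.25) * 100),
    }

# toy example: ground truth at 0.5 m, prediction off by 1 cm on half the pixels
gt = np.full((4, 4), 0.5)
pred = gt.copy()
pred[:2] += 0.01
m = depth_metrics(pred, gt)
```

In this toy case every pixel ratio stays below 1.05 (0.51 / 0.5 = 1.02), so d1.05 is 100%, while MAE is half of 0.01.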

Related Papers

TRAN-D: 2D Gaussian Splatting-based Sparse-view Transparent Object Depth Reconstruction via Physics Simulation for Scene Update (2025-07-15)
MTF-Grasp: A Multi-tier Federated Learning Approach for Robotic Grasping (2025-07-14)
PacGDC: Label-Efficient Generalizable Depth Completion with Projection Ambiguity and Consistency (2025-07-10)
DidSee: Diffusion-Based Depth Completion for Material-Agnostic Robotic Perception and Manipulation (2025-06-26)
Consensus-Driven Uncertainty for Robotic Grasping based on RGB Perception (2025-06-24)
Monocular One-Shot Metric-Depth Alignment for RGB-Based Robot Grasping (2025-06-20)
JENGA: Object selection and pose estimation for robotic grasping from a stack (2025-06-16)
DCIRNet: Depth Completion with Iterative Refinement for Dexterous Grasping of Transparent and Reflective Objects (2025-06-11)