Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Natural Image Matting via Guided Contextual Attention

Yaoyi Li, Hongtao Lu

2020-01-13 · Image Matting · Transparent objects

Paper · PDF · Code (official)

Abstract

Over the last few years, deep learning based approaches have achieved outstanding improvements in natural image matting. Many of these methods can generate visually plausible alpha estimations, but they typically yield blurry structures or textures in semitransparent areas, owing to the local ambiguity of transparent objects. One possible solution is to leverage far-surrounding information to estimate local opacity. Traditional affinity-based methods often suffer from high computational complexity, making them unsuitable for high-resolution alpha estimation. Inspired by affinity-based methods and the success of contextual attention in inpainting, we develop a novel end-to-end approach for natural image matting with a guided contextual attention module, which is specifically designed for image matting. The guided contextual attention module directly propagates high-level opacity information globally, based on the learned low-level affinity. The proposed method can mimic the information flow of affinity-based methods while simultaneously exploiting the rich features learned by deep neural networks. Experimental results on the Composition-1k test set and the alphamatting.com benchmark demonstrate that our method outperforms state-of-the-art approaches in natural image matting. Code and models are available at https://github.com/Yaoyi-Li/GCA-Matting.
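The core idea described in the abstract, computing an affinity from low-level features and using it as attention weights to propagate high-level opacity information globally, can be illustrated with a minimal NumPy sketch. This is not the paper's actual module (see the official repository for that); the function name, feature shapes, and the cosine-similarity affinity are illustrative assumptions.

```python
import numpy as np

def guided_attention_sketch(low_feat, high_feat, temperature=1.0):
    """Toy sketch of attention-style affinity propagation.

    Assumed (illustrative) shapes:
      low_feat:  (N, C_low)  low-level features used to compute affinity
      high_feat: (N, C_high) high-level opacity features to propagate
    Returns (N, C_high) features after global propagation.
    """
    # Normalize low-level features so the affinity is a cosine similarity.
    low = low_feat / (np.linalg.norm(low_feat, axis=1, keepdims=True) + 1e-8)
    # Pairwise affinity between all spatial positions.
    affinity = low @ low.T  # (N, N)
    # Softmax over source positions turns the affinity into attention weights.
    weights = np.exp(affinity / temperature)
    weights /= weights.sum(axis=1, keepdims=True)
    # Propagate high-level opacity information globally along the affinity.
    return weights @ high_feat
```

Positions with similar low-level features (e.g. similar texture) end up with similar attention rows, so opacity information flows between them even when they are spatially far apart, which is the behavior affinity-based matting methods exploit.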

Results

Task          | Dataset                        | Metric     | Value | Model
Image Matting | Semantic Image Matting Dataset | Conn       | 36.03 | GCA
Image Matting | Semantic Image Matting Dataset | Grad       | 28.7  | GCA
Image Matting | Semantic Image Matting Dataset | MSE (10^3) | 11    | GCA
Image Matting | Semantic Image Matting Dataset | SAD        | 39.28 | GCA

Related Papers

- TRAN-D: 2D Gaussian Splatting-based Sparse-view Transparent Object Depth Reconstruction via Physics Simulation for Scene Update (2025-07-15)
- Monocular One-Shot Metric-Depth Alignment for RGB-Based Robot Grasping (2025-06-20)
- Post-Training Quantization for Video Matting (2025-06-12)
- Multi-Label Stereo Matching for Transparent Scene Depth Estimation (2025-05-20)
- Eye2Eye: A Simple Approach for Monocular-to-Stereo Video Synthesis (2025-04-30)
- TransparentGS: Fast Inverse Rendering of Transparent Objects with Gaussians (2025-04-26)
- MP-Mat: A 3D-and-Instance-Aware Human Matting and Editing Framework with Multiplane Representation (2025-04-20)
- TSGS: Improving Gaussian Splatting for Transparent Surface Reconstruction via Normal and De-lighting Priors (2025-04-17)