Description
Self-supervised Equivariant Attention Mechanism, or SEAM, is an attention mechanism for weakly supervised semantic segmentation. SEAM applies consistency regularization to the class activation maps (CAMs) computed from differently transformed versions of an image, providing self-supervision for network learning. To further improve prediction consistency, SEAM introduces a pixel correlation module (PCM), which captures contextual appearance information for each pixel and revises the original CAMs with learned affinity attention maps. SEAM is implemented as a Siamese network trained with an equivariant cross regularization (ECR) loss, which regularizes the original CAMs and the revised CAMs across the two branches.
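The two core ideas can be illustrated with a minimal NumPy sketch. Everything here is a hypothetical stand-in, not the paper's implementation: `toy_cam` replaces the real CAM network (with a deliberate position-dependent bias so it is not perfectly equivariant), the transform is a simple horizontal flip, and `pcm` approximates the pixel correlation module as ReLU'd cosine affinity normalized over pixels.

```python
import numpy as np

def toy_cam(image):
    """Toy stand-in for a CAM head (hypothetical, not the SEAM backbone).
    The position-dependent bias deliberately breaks equivariance, so the
    consistency loss below is non-zero and gives the regularizer a target."""
    bias = np.linspace(0.0, 1.0, image.shape[1])
    return image + bias  # bias broadcasts across rows

def hflip(x):
    # Horizontal flip: the spatial transform used in this sketch
    return x[:, ::-1]

def pcm(cams, feats):
    """Sketch of the pixel correlation module (PCM): revise CAMs with a
    normalized cosine-affinity attention map over pixels.
    cams:  (num_classes, H*W),  feats: (feat_dim, H*W)."""
    f = feats / (np.linalg.norm(feats, axis=0, keepdims=True) + 1e-8)
    affinity = np.maximum(f.T @ f, 0.0)                   # (H*W, H*W) affinity
    affinity /= affinity.sum(axis=0, keepdims=True) + 1e-8
    return cams @ affinity                                # context-revised CAMs

rng = np.random.default_rng(0)
img = rng.random((8, 8))

# Equivariant consistency: CAM(flip(img)) should equal flip(CAM(img)).
# The gap between the two acts as a simplified consistency loss.
ecr_loss = np.abs(toy_cam(hflip(img)) - hflip(toy_cam(img))).mean()

# PCM applied to flattened toy CAMs and features (shapes illustrative only).
revised = pcm(rng.random((3, 64)), rng.random((16, 64)))
```

In the full method this consistency term is combined with the classification loss, and the ECR loss additionally couples the original CAMs of one branch with the PCM-revised CAMs of the other.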
Papers Using This Method
Self-Destructive Language Model (2025-05-18)
SEAM: A Stochastic Benchmark for Multi-Document Tasks (2024-06-23)
It Takes Two: On the Seamlessness between Reward and Policy Model in RLHF (2024-06-12)
CVFC: Attention-Based Cross-View Feature Consistency for Weakly Supervised Semantic Segmentation of Pathology Images (2023-08-21)
Joint Microseismic Event Detection and Location with a Detection Transformer (2023-07-16)
High-fidelity Pseudo-labels for Boosting Weakly-Supervised Segmentation (2023-04-05)
ContrasInver: Ultra-Sparse Label Semi-supervised Regression for Multi-dimensional Seismic Inversion (2023-02-13)
Selective Amnesia: On Efficient, High-Fidelity and Blind Suppression of Backdoor Effects in Trojaned Machine Learning Models (2022-12-09)
Out-of-Candidate Rectification for Weakly Supervised Semantic Segmentation (2022-11-22)
CONSS: Contrastive Learning Approach for Semi-Supervised Seismic Facies Classification (2022-10-10)
YOLO-FaceV2: A Scale and Occlusion Aware Face Detector (2022-08-03)
Semi-supervised Impedance Inversion by Bayesian Neural Network Based on 2-d CNN Pre-training (2021-11-20)
MovingFashion: a Benchmark for the Video-to-Shop Challenge (2021-10-06)
Self-supervised Equivariant Attention Mechanism for Weakly Supervised Semantic Segmentation (2020-04-09)