Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.

Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Stacked Hybrid-Attention and Group Collaborative Learning for Unbiased Scene Graph Generation

Xingning Dong, Tian Gan, Xuemeng Song, Jianlong Wu, Yuan Cheng, Liqiang Nie

2022-03-18 · CVPR 2022
Tasks: Scene Graph Generation · Graph Generation · Unbiased Scene Graph Generation
Paper · PDF · Code (official)

Abstract

Scene Graph Generation (SGG), which generally follows a regular encoder-decoder pipeline, aims to first encode the visual contents within a given image and then parse them into a compact summary graph. Existing SGG approaches not only neglect the insufficient modality fusion between vision and language, but also fail to provide informative predicates due to biased relationship predictions, leaving SGG far from practical. Towards this end, we first present a novel Stacked Hybrid-Attention network, which facilitates both intra-modal refinement and inter-modal interaction, to serve as the encoder. We then devise an innovative Group Collaborative Learning strategy to optimize the decoder. In particular, based on the observation that a single classifier's recognition capability is limited on an extremely unbalanced dataset, we first deploy a group of classifiers, each expert in distinguishing a different subset of classes, and then cooperatively optimize them from two aspects to promote unbiased SGG. Experiments on the VG and GQA datasets demonstrate that we not only establish a new state of the art on the unbiased metric, but also nearly double the performance of two baselines.
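The core idea behind Group Collaborative Learning is to split the long-tailed predicate vocabulary into groups of comparable frequency, so each classifier in the group faces a more balanced subset. A minimal sketch of one plausible frequency-based grouping scheme (the predicate names, counts, and the equal-size split are invented for illustration; the paper's exact partitioning strategy may differ):

```python
def split_into_groups(class_counts, num_groups):
    """Partition predicate classes into groups of similar frequency.

    Classes are ranked from most to least frequent, then sliced into
    contiguous chunks, so each group's classifier sees classes with
    comparable sample counts (illustrative sketch, not the authors'
    exact scheme).
    """
    ranked = sorted(class_counts, key=class_counts.get, reverse=True)
    size = -(-len(ranked) // num_groups)  # ceiling division
    return [ranked[i:i + size] for i in range(0, len(ranked), size)]

# Hypothetical long-tailed predicate frequencies
counts = {"on": 900, "has": 700, "near": 300,
          "holding": 40, "riding": 25, "eating": 5}
groups = split_into_groups(counts, num_groups=3)
# groups -> [['on', 'has'], ['near', 'holding'], ['riding', 'eating']]
```

Each group would then get its own classifier head, and the heads are optimized cooperatively rather than in isolation, which is where the paper's two collaborative-optimization aspects come in.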

Results

Task | Dataset | Metric | Value | Model
Scene Parsing | Visual Genome | mR@20 | 35.6 | SHA-GCL (MOTIFS-ResNeXt-101-FPN backbone; PredCls mode)
2D Semantic Segmentation | Visual Genome | mR@20 | 35.6 | SHA-GCL (MOTIFS-ResNeXt-101-FPN backbone; PredCls mode)
Scene Graph Generation | Visual Genome | mR@20 | 35.6 | SHA-GCL (MOTIFS-ResNeXt-101-FPN backbone; PredCls mode)
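The reported metric, mR@20 (mean Recall at 20), averages recall per predicate class rather than pooling over all instances, so rare predicates weigh as much as frequent ones — which is why it is preferred as an "unbiased" metric. A minimal sketch of the averaging step, with invented hit counts (the top-K matching of predicted triplets against ground truth is assumed to have happened upstream):

```python
def mean_recall(recalled, total):
    """mR@K: average per-class recall over predicate classes.

    recalled: ground-truth triplets of each class found in the top-K
              predictions (assumed computed upstream).
    total:    ground-truth triplet count per class.
    """
    per_class = [recalled[c] / total[c] for c in total if total[c] > 0]
    return sum(per_class) / len(per_class)

# Hypothetical counts for a frequent and a rare predicate
recalled = {"on": 90, "holding": 2}
total = {"on": 100, "holding": 10}
# Instance-pooled recall would be 92/110 ≈ 0.836, dominated by "on";
# mean recall is (0.9 + 0.2) / 2 = 0.55, exposing the rare-class gap.
```

This gap between pooled and mean recall is exactly what biased SGG models exploit, and what the paper's 35.6 mR@20 is measured against.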

Related Papers

NGTM: Substructure-based Neural Graph Topic Model for Interpretable Graph Generation (2025-07-17)
GNN-CNN: An Efficient Hybrid Model of Convolutional and Graph Neural Networks for Text Representation (2025-07-10)
SPADE: Spatial-Aware Denoising Network for Open-vocabulary Panoptic Scene Graph Generation with Long- and Local-range Context Reasoning (2025-07-08)
GDGB: A Benchmark for Generative Dynamic Text-Attributed Graph Learning (2025-07-04)
CoPa-SG: Dense Scene Graphs with Parametric and Proto-Relations (2025-06-26)
CAT-SG: A Large Dynamic Scene Graph Dataset for Fine-Grained Understanding of Cataract Surgery (2025-06-26)
HOIverse: A Synthetic Scene Graph Dataset With Human Object Interactions (2025-06-24)
DiscoSG: Towards Discourse-Level Text Scene Graph Parsing through Iterative Graph Refinement (2025-06-18)