Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


QPIC: Query-Based Pairwise Human-Object Interaction Detection with Image-Wide Contextual Information

Masato Tamura, Hiroki Ohashi, Tomoaki Yoshinaga

2021-03-09 · CVPR 2021 · Tasks: Human-Object Interaction Detection, Human-Object Interaction Concept Discovery

Paper · PDF · Code (official)

Abstract

We propose a simple, intuitive yet powerful method for human-object interaction (HOI) detection. HOIs are so diverse in their spatial distribution within an image that existing CNN-based methods face three major drawbacks: (1) they cannot leverage image-wide features due to CNNs' locality; (2) they rely on a manually defined location of interest for feature aggregation, which sometimes fails to cover contextually important regions; and (3) they cannot help but mix up the features of multiple HOI instances when those instances are located close together. To overcome these drawbacks, we propose a transformer-based feature extractor in which an attention mechanism and query-based detection play key roles. The attention mechanism is effective at aggregating contextually important information image-wide, while the queries, designed so that each one captures at most one human-object pair, avoid mixing up the features of multiple instances. This transformer-based feature extractor produces such effective embeddings that the subsequent detection heads can be fairly simple and intuitive. Extensive analysis reveals that the proposed method successfully extracts contextually important features, and thus outperforms existing methods by large margins (5.37 mAP on HICO-DET and 5.7 mAP on V-COCO). The source code is available at https://github.com/hitachi-rd-cv/qpic.
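The abstract describes the core idea: a fixed set of learnable queries cross-attends over the image-wide feature map, and each resulting per-query embedding is decoded by simple heads into one human-object pair (human box, object box, object class, action class). The following is a minimal NumPy sketch of that data flow, not the official implementation (which is at the linked repository); all dimensions and the random weights are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, features):
    # queries:  (Q, d) learnable HOI queries, one per candidate human-object pair
    # features: (N, d) flattened image-wide feature map (e.g. H*W backbone locations)
    scores = queries @ features.T / np.sqrt(queries.shape[-1])
    # Each query aggregates contextually relevant information from ALL locations.
    return softmax(scores, axis=-1) @ features

rng = np.random.default_rng(0)
d, num_queries, num_locations = 32, 5, 49
queries = rng.normal(size=(num_queries, d))
features = rng.normal(size=(num_locations, d))

emb = cross_attention(queries, features)  # (Q, d): one embedding per HOI query

# Simple per-query detection heads (random weights, purely illustrative);
# 80 object classes and 117 actions follow the HICO-DET label space.
W_hbox = rng.normal(size=(d, 4))
W_obox = rng.normal(size=(d, 4))
W_cls = rng.normal(size=(d, 80))
W_act = rng.normal(size=(d, 117))

human_boxes = emb @ W_hbox      # (Q, 4)
object_boxes = emb @ W_obox     # (Q, 4)
object_logits = emb @ W_cls     # (Q, 80)
action_logits = emb @ W_act     # (Q, 117)
print(emb.shape, human_boxes.shape, action_logits.shape)
```

Because every query decodes into at most one complete human-object pair, the features of nearby HOI instances stay separated by construction, which is the property the paper contrasts with CNN-based region aggregation.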

Results

| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Human-Object Interaction Detection | V-COCO | AP (S1) | 58.8 | QPIC (ResNet50) |
| Human-Object Interaction Detection | V-COCO | AP (S2) | 61.0 | QPIC (ResNet50) |
| Human-Object Interaction Detection | V-COCO | Time Per Frame (ms) | 46 | QPIC (ResNet50) |
| Human-Object Interaction Detection | V-COCO | AP (S1) | 58.3 | QPIC (ResNet101) |
| Human-Object Interaction Detection | V-COCO | AP (S2) | 60.7 | QPIC (ResNet101) |
| Human-Object Interaction Detection | V-COCO | Time Per Frame (ms) | 63 | QPIC (ResNet101) |
| Human-Object Interaction Detection | HICO-DET | Time Per Frame (ms) | 63 | QPIC (ResNet101) |
| Human-Object Interaction Detection | HICO-DET | mAP | 29.9 | QPIC (ResNet101) |
| Human-Object Interaction Detection | HICO-DET | Time Per Frame (ms) | 46 | QPIC (ResNet50) |
| Human-Object Interaction Detection | HICO-DET | mAP | 29.07 | QPIC (ResNet50) |
| Human-Object Interaction Concept Discovery | HICO-DET | Unknown (AP) | 27.42 | QPIC |

Related Papers

- RoHOI: Robustness Benchmark for Human-Object Interaction Detection (2025-07-12)
- Bilateral Collaboration with Large Vision-Language Models for Open Vocabulary Human-Object Interaction Detection (2025-07-09)
- VolumetricSMPL: A Neural Volumetric Body Model for Efficient Interactions, Contacts, and Collisions (2025-06-29)
- HOIverse: A Synthetic Scene Graph Dataset With Human Object Interactions (2025-06-24)
- On the Robustness of Human-Object Interaction Detection against Distribution Shift (2025-06-22)
- Egocentric Human-Object Interaction Detection: A New Benchmark and Method (2025-06-17)
- InterActHuman: Multi-Concept Human Animation with Layout-Aligned Audio Conditions (2025-06-11)
- HunyuanVideo-HOMA: Generic Human-Object Interaction in Multimodal Driven Human Animation (2025-06-10)