Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


Adaptive Graph Representation Learning for Video Person Re-identification

Yiming Wu, Omar El Farouk Bourahla, Xi Li, Fei Wu, Qi Tian, Xue Zhou

2019-09-05 · Video-Based Person Re-Identification · Graph Representation Learning · Representation Learning · Person Re-Identification

Paper · PDF · Code (official)

Abstract

Recent years have witnessed remarkable progress in applying deep learning models to video person re-identification (Re-ID). A key factor in video person Re-ID is constructing video feature representations that remain discriminative and robust in complicated situations. Part-based approaches employ spatial and temporal attention to extract representative local features, but previous methods ignore the correlations between parts. To leverage the relations between different parts, we propose an adaptive graph representation learning scheme for video person Re-ID that enables contextual interactions between relevant regional features. Specifically, we exploit the pose alignment connection and the feature affinity connection to construct an adaptive structure-aware adjacency graph, which models the intrinsic relations between graph nodes. We perform feature propagation on the adjacency graph to refine regional features iteratively, so that each part representation takes its neighbor nodes' information into account. To learn compact and discriminative representations, we further propose a novel temporal resolution-aware regularization, which enforces consistency across different temporal resolutions for the same identity. Extensive evaluations on four benchmarks, i.e., iLIDS-VID, PRID2011, MARS, and DukeMTMC-VideoReID, show competitive performance and demonstrate the effectiveness of the proposed method. The code is available at https://github.com/weleen/AGRL.pytorch.
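The two core mechanisms the abstract describes — an adjacency graph built from pairwise feature affinity, and iterative feature propagation that mixes each regional feature with its neighbors — can be sketched as follows. This is a minimal illustration in NumPy, not the paper's implementation (which also uses a pose alignment connection and learned propagation weights); the function names, the softmax-normalized affinity, and the residual mixing coefficient `alpha` are all assumptions made for clarity.

```python
import numpy as np

def affinity_adjacency(features: np.ndarray, temperature: float = 1.0) -> np.ndarray:
    """Build a row-stochastic adjacency matrix from pairwise feature affinity.

    features: (num_regions, dim) regional feature vectors.
    Returns an (num_regions, num_regions) matrix whose rows sum to 1,
    so that adjacency @ features is a weighted average over neighbors.
    """
    sim = features @ features.T                      # pairwise dot-product affinity
    sim = (sim - sim.max(axis=1, keepdims=True)) / temperature
    exp = np.exp(sim)                                # softmax per row, numerically stable
    return exp / exp.sum(axis=1, keepdims=True)

def propagate(features: np.ndarray, adjacency: np.ndarray,
              alpha: float = 0.5, steps: int = 2) -> np.ndarray:
    """Iteratively refine regional features with neighbor information.

    Each step blends a feature with the affinity-weighted average of all
    regions (alpha controls how much neighbor information flows in).
    """
    refined = features
    for _ in range(steps):
        refined = (1.0 - alpha) * refined + alpha * (adjacency @ refined)
    return refined
```

A usage sketch: given `x` of shape `(num_regions, dim)`, `propagate(x, affinity_adjacency(x))` returns refined features of the same shape, with each region's vector pulled toward the regions it is most similar to. In the paper the graph is instead learned end-to-end and combined with pose-based edges.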

Results

| Task | Dataset | Metric | Value | Model |
|---|---|---|---|---|
| Person Re-Identification | iLIDS-VID | Rank-1 | 83.7 | AGRL |
| Person Re-Identification | iLIDS-VID | Rank-10 | 95.4 | AGRL |
| Person Re-Identification | iLIDS-VID | Rank-20 | 99.5 | AGRL |
| Person Re-Identification | PRID2011 | Rank-1 | 93.1 | AGRL |
| Person Re-Identification | PRID2011 | Rank-10 | 98.7 | AGRL |
| Person Re-Identification | PRID2011 | Rank-20 | 99.8 | AGRL |
| Person Re-Identification | MARS | Rank-1 | 89.8 | AGRL |
| Person Re-Identification | MARS | Rank-10 | 96.1 | AGRL |
| Person Re-Identification | MARS | Rank-20 | 97.6 | AGRL |
| Person Re-Identification | MARS | mAP | 81.1 | AGRL |

Related Papers

- Touch in the Wild: Learning Fine-Grained Manipulation with a Portable Visuo-Tactile Gripper (2025-07-20)
- SMART: Relation-Aware Learning of Geometric Representations for Knowledge Graphs (2025-07-17)
- Spectral Bellman Method: Unifying Representation and Exploration in RL (2025-07-17)
- Boosting Team Modeling through Tempo-Relational Representation Learning (2025-07-17)
- Weakly Supervised Visible-Infrared Person Re-Identification via Heterogeneous Expert Collaborative Consistency Learning (2025-07-17)
- WhoFi: Deep Person Re-Identification via Wi-Fi Channel Signal Encoding (2025-07-17)
- Similarity-Guided Diffusion for Contrastive Sequential Recommendation (2025-07-16)
- Are encoders able to learn landmarkers for warm-starting of Hyperparameter Optimization? (2025-07-16)