Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


GaitSet: Regarding Gait as a Set for Cross-View Gait Recognition

Hanqing Chao, Yiwei He, Junping Zhang, Jianfeng Feng

2018-11-15 · Multiview Gait Recognition · Gait Recognition

Paper · PDF · Code (official)

Abstract

As a unique biometric feature that can be recognized at a distance, gait has broad applications in crime prevention, forensic identification and social security. To portray a gait, existing gait recognition methods utilize either a gait template, where temporal information is hard to preserve, or a gait sequence, which must keep unnecessary sequential constraints and thus loses the flexibility of gait recognition. In this paper we present a novel perspective, where a gait is regarded as a set consisting of independent frames. We propose a new network named GaitSet to learn identity information from the set. Based on the set perspective, our method is immune to permutation of frames, and can naturally integrate frames from different videos which have been filmed under different scenarios, such as diverse viewing angles, different clothes/carrying conditions. Experiments show that under normal walking conditions, our single-model method achieves an average rank-1 accuracy of 95.0% on the CASIA-B gait dataset and an 87.1% accuracy on the OU-MVLP gait dataset. These results represent new state-of-the-art recognition accuracy. On various complex scenarios, our model exhibits a significant level of robustness. It achieves accuracies of 87.2% and 70.4% on CASIA-B under bag-carrying and coat-wearing walking conditions, respectively. These outperform the existing best methods by a large margin. The method presented can also achieve a satisfactory accuracy with a small number of frames in a test sample, e.g., 82.5% on CASIA-B with only 7 frames. The source code has been released at https://github.com/AbnerHqC/GaitSet.
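The core idea of the set perspective is that frame-level features are aggregated by a permutation-invariant operation, so the embedding does not depend on frame order. The following is a minimal NumPy sketch of that idea; the `frame_features` extractor here is a trivial stand-in for the paper's CNN backbone, and max pooling is used as one example of a permutation-invariant set operation:

```python
import numpy as np

def frame_features(frames):
    # Stand-in per-frame feature extractor: a simple flatten.
    # In GaitSet this role is played by a CNN over each silhouette.
    return frames.reshape(frames.shape[0], -1)

def set_pool(features):
    # Permutation-invariant set pooling: element-wise max over the
    # frame axis, so the result is independent of frame order.
    return features.max(axis=0)

rng = np.random.default_rng(0)
frames = rng.random((7, 4, 4))   # 7 silhouette frames, 4x4 each
feats = frame_features(frames)

pooled = set_pool(feats)
shuffled = set_pool(feats[rng.permutation(7)])
assert np.allclose(pooled, shuffled)   # frame order does not matter
```

Because the pooling is order-independent, frames gathered from different videos of the same subject (different views, clothing, or carrying conditions) can be merged into one set before pooling.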

Results

Task             | Dataset | Metric                     | Value | Model
-----------------|---------|----------------------------|-------|--------
Gait Recognition | OU-MVLP | Averaged rank-1 acc (%)    | 87.1  | GaitSet
Gait Recognition | OU-MVLP | Accuracy (Cross-View)      | 87.1  | GaitSet
Gait Recognition | CASIA-B | Accuracy (Cross-View, Avg) | 84.2  | GaitSet
Gait Recognition | CASIA-B | BG#1-2                     | 87.2  | GaitSet
Gait Recognition | CASIA-B | CL#1-2                     | 70.4  | GaitSet
Gait Recognition | CASIA-B | NM#5-6                     | 95.0  | GaitSet

Related Papers

Mind the Gap: Bridging Occlusion in Gait Recognition via Residual Gap Correction (2025-07-15)
On Denoising Walking Videos for Gait Recognition (2025-05-24)
ExoGait-MS: Learning Periodic Dynamics with Multi-Scale Graph Network for Exoskeleton Gait Recognition (2025-05-23)
BiggerGait: Unlocking Gait Recognition with Layer-wise Representations from Large Vision Models (2025-05-23)
Exploring Generalized Gait Recognition: Reducing Redundancy and Noise within Indoor and Outdoor Datasets (2025-05-21)
OptiGait-LGBM: An Efficient Approach of Gait-based Person Re-identification in Non-Overlapping Regions (2025-05-10)
Database-Agnostic Gait Enrollment using SetTransformers (2025-05-05)
CVVNet: A Cross-Vertical-View Network for Gait Recognition (2025-05-03)