DyGait: Exploiting Dynamic Representations for High-performance Gait Recognition
Date
2023
Authors
Wang, M.
Guo, X.
Lin, B.
Yang, T.
Zhu, Z.
Li, L.
Zhang, S.
Yu, X.
Type
Conference paper
Citation
Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2023, pp. 13378-13387
Statement of Responsibility
Ming Wang, Xianda Guo, Beibei Lin, Tian Yang, Zheng Zhu, Lincheng Li, Shunli Zhang, Xin Yu
Conference Name
IEEE/CVF International Conference on Computer Vision (ICCV) (1-6 Oct 2023, Paris, France)
Abstract
Gait recognition is a biometric technology that identifies humans by their walking patterns. Compared with other biometric technologies, gait recognition is more difficult to disguise and can be applied at long range without the cooperation of subjects. Thus, it has unique potential and wide application in crime prevention and social security. At present, most gait recognition methods extract features directly from video frames to establish representations. However, these architectures learn from different features equally and do not pay enough attention to dynamic features, i.e., representations of the dynamic parts of silhouettes over time (e.g., legs). Since the dynamic parts of the human body are more informative than other parts (e.g., bags) during walking, in this paper we propose a novel, high-performance framework named DyGait. This is the first gait recognition framework designed to focus on the extraction of dynamic features. Specifically, to take full advantage of dynamic information, we propose a Dynamic Augmentation Module (DAM), which automatically establishes spatial-temporal feature representations of the dynamic parts of the human body. Experimental results show that our DyGait network outperforms other state-of-the-art gait recognition methods, achieving an average Rank-1 accuracy of 71.4% on the GREW dataset, 66.3% on the Gait3D dataset, 98.4% on the CASIA-B dataset, and 98.3% on the OU-MVLP dataset.
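
As a rough illustration of the idea behind a dynamic-feature branch such as the abstract's Dynamic Augmentation Module, the sketch below approximates each frame's dynamic content as its deviation from the temporal mean feature map, so static regions (background, carried bags) cancel out while moving regions (legs, arms) are emphasized. The module name comes from the abstract, but the specific operations, layer shapes, and fusion with the original features are illustrative assumptions, not the authors' published implementation.

import torch
import torch.nn as nn

class DynamicAugmentationSketch(nn.Module):
    """Minimal sketch of a dynamic-feature branch for gait silhouettes.

    Assumption (not from the paper text): the "dynamic" part of a
    sequence is approximated as each frame's deviation from the
    temporal mean feature map.
    """

    def __init__(self, channels: int):
        super().__init__()
        # 3D conv to refine the dynamic response; kernel size and
        # padding are illustrative choices, not the published design.
        self.refine = nn.Conv3d(channels, channels, kernel_size=3, padding=1)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, channels, frames, height, width)
        static = feats.mean(dim=2, keepdim=True)    # temporal mean ~ static part
        dynamic = feats - static                    # per-frame deviation ~ dynamic part
        dynamic = torch.relu(self.refine(dynamic))  # refine the dynamic response
        return feats + dynamic                      # augment the original features

if __name__ == "__main__":
    # Toy sequence: 2 clips, 16 channels, 8 frames, 32x22 feature maps.
    x = torch.randn(2, 16, 8, 32, 22)
    out = DynamicAugmentationSketch(channels=16)(x)
    print(out.shape)  # torch.Size([2, 16, 8, 32, 22])

Subtracting the temporal mean is only one simple way to suppress static content; the actual DAM may aggregate and weight dynamic information differently.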
Rights
© 2023 IEEE.