Multi-View Label Prediction for Unsupervised Person Re-identification


Authors

Qingze Yin1, Guanan Wang2, Guodong Ding1, Shaogang Gong3, and Zhenmin Tang1
1Nanjing University of Science and Technology, 2Chinese Academy of Sciences, 3Queen Mary University of London

Abstract

Person re-identification (ReID) aims to match pedestrian images across disjoint cameras. Existing supervised ReID methods utilize deep networks and train them with identity-labeled images, which suffer from limited annotations. Recently, clustering-based unsupervised ReID has attracted increasing attention. It first clusters unlabeled images and assigns the cluster indices as pseudo-identity-labels, then trains a ReID model with these pseudo-identity-labels. However, given the slight inter-class variations and significant intra-class variations, pseudo-identity-labels learned from clustering algorithms are usually noisy and coarse. To alleviate the problems above, besides clustering pseudo-identity-labels, we propose to learn pseudo-patch-labels, which brings two advantages: (1) Patches naturally alleviate the effect of backgrounds, occlusions, and carryings, since these usually occupy small parts of an image, thus overcoming noisy labels. (2) It is plausible that patches from different pedestrians belong to the same pseudo-patch-label; for example, pedestrians have a high probability of wearing either the same shoes or the same pants, but a low probability of wearing both. Experiments demonstrate that our proposed method achieves the best performance by a large margin on both image- and video-based datasets.
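The clustering-and-pseudo-labeling pipeline summarized above can be sketched in a few lines. The sketch below is only illustrative and is not the authors' implementation: the choice of DBSCAN, the cosine metric, the per-stripe clustering of patch features, and all hyperparameters are assumptions made for the example.

# Minimal sketch of clustering-based pseudo-label assignment (illustrative only).
import numpy as np
from sklearn.cluster import DBSCAN


def assign_pseudo_identity_labels(global_feats: np.ndarray, eps: float = 0.6) -> np.ndarray:
    # Cluster image-level features; the cluster index of each image serves as its
    # pseudo-identity-label. Label -1 marks images left un-clustered as outliers.
    # global_feats: (num_images, feat_dim)
    return DBSCAN(eps=eps, min_samples=4, metric="cosine").fit_predict(global_feats)


def assign_pseudo_patch_labels(patch_feats: np.ndarray, eps: float = 0.5) -> np.ndarray:
    # Cluster patch-level features separately for each patch position (assumed
    # horizontal stripes). Patches from different pedestrians may share a cluster
    # (e.g., the same shoes), so patch labels stay separate from identity labels.
    # patch_feats: (num_images, num_patches, feat_dim)
    num_images, num_patches, _ = patch_feats.shape
    patch_labels = np.empty((num_images, num_patches), dtype=np.int64)
    for p in range(num_patches):
        patch_labels[:, p] = DBSCAN(eps=eps, min_samples=4, metric="cosine").fit_predict(patch_feats[:, p])
    return patch_labels

A ReID model would then be trained with the resulting pseudo-identity-labels and pseudo-patch-labels, re-clustering periodically as the features improve.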

Resources

Files: [pdf]

Citation:

@article{yin2021multi,
title={Multi-view label prediction for unsupervised learning person re-identification},
author={Yin, Qingze and Ding, Guodong and Gong, Shaogang and Tang, Zhenmin and others},
journal={IEEE Signal Processing Letters},
volume={28},
pages={1390--1394},
year={2021},
publisher={IEEE}
}