Abstract
Existing supervised visible-infrared person re-identification methods require substantial human effort to manually label data, and they generalize poorly to real, changeable application scenes because they are constrained by the scenes covered by the labeled data. In this paper, an unsupervised cross-modality person re-identification method based on semantic pseudo-labels and dual feature memory banks is proposed. First, a pre-training method based on a contrastive learning framework is proposed, which trains on visible images together with auxiliary grayscale images generated from them. This pre-training yields a semantic feature extraction network that is robust to color changes. Then, semantic pseudo-labels are generated with the density-based spatial clustering of applications with noise (DBSCAN) algorithm. Compared with existing pseudo-label generation methods, the proposed method makes full use of the structural information between cross-modality data during generation, thereby reducing the modality discrepancy caused by color differences between modalities. In addition, an instance-level hard-sample feature memory bank and a centroid-level cluster feature memory bank are constructed, so that hard-sample features and cluster features make the model more robust to noisy pseudo-labels. Experimental results on two cross-modality datasets, SYSU-MM01 and RegDB, demonstrate the effectiveness of the proposed method. © 2022 Journal of Pattern Recognition and Artificial Intelligence.
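The pseudo-label generation step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes L2-normalized deep features, uses cosine distance with scikit-learn's DBSCAN, and the `eps`/`min_samples` values are placeholders rather than the authors' settings.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def generate_pseudo_labels(features, eps=0.5, min_samples=4):
    """Cluster features with DBSCAN over pairwise cosine distance.

    Returns one integer pseudo-label per sample; -1 marks noise
    samples that DBSCAN leaves unclustered.
    """
    # L2-normalize so that dot products are cosine similarities
    feats = features / np.linalg.norm(features, axis=1, keepdims=True)
    # Cosine distance matrix: 1 - cosine similarity
    dist = 1.0 - feats @ feats.T
    np.clip(dist, 0.0, 2.0, out=dist)
    return DBSCAN(eps=eps, min_samples=min_samples,
                  metric="precomputed").fit_predict(dist)

# Toy demo: two well-separated feature clusters (stand-ins for
# cross-modality features of two identities)
rng = np.random.default_rng(0)
a = rng.normal(loc=[5.0, 0.0], scale=0.1, size=(10, 2))
b = rng.normal(loc=[0.0, 5.0], scale=0.1, size=(10, 2))
labels = generate_pseudo_labels(np.vstack([a, b]), eps=0.3, min_samples=3)
```

In the unsupervised pipeline, the resulting cluster IDs serve as training identities, and samples labeled -1 are typically discarded for that epoch before the memory banks are updated.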