Beyond modality alignment: Learning part-level representation for visible-infrared person re-identification
- Publisher:
- ELSEVIER
- Publication Type:
- Journal Article
- Citation:
- Image and Vision Computing, 2021, 108
- Issue Date:
- 2021-04-01
Closed Access
Filename | Description | Size
---|---|---
1-s2.0-S0262885621000238-main.pdf | Published version | 1.65 MB
This item is closed access and not available.
Visible-Infrared person re-IDentification (VI-reID) aims to automatically retrieve a pedestrian of interest captured by sensors of different modalities, e.g., a visible camera vs. an infrared sensor. It must learn representations that are both modality-invariant and discriminant. Unfortunately, existing VI-reID work focuses mainly on tackling the modality difference, while fine-grained discriminant information has not been well investigated, leading to inferior identification performance. To address this problem, we propose a Dual-Alignment Part-aware Representation (DAPR) framework that simultaneously alleviates the modality bias and mines discriminant representations at different levels. In particular, DAPR hierarchically reduces the modality discrepancy of high-level features by back-propagating reversed gradients from a modality classifier, in order to learn a modality-invariant feature space. Meanwhile, multiple classifier heads with an improved part-aware BNNeck are integrated to supervise the network in producing identity-discriminant representations w.r.t. both local details and global structures in the learned modality-invariant space. Trained in an end-to-end manner, the proposed DAPR produces camera-modality-invariant yet discriminant features for matching persons across modalities. Extensive experiments are conducted on two benchmarks, i.e., SYSU-MM01 and RegDB, and the results demonstrate the effectiveness of our proposed method.
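The gradient-reversal mechanism mentioned in the abstract (back-propagating reversed gradients from a modality classifier) can be sketched as follows. This is a minimal plain-Python illustration of the generic technique, not the paper's actual implementation; the function names and the scaling factor `lam` are hypothetical.

```python
# Sketch of a gradient reversal layer (GRL): the forward pass is the
# identity, while the backward pass negates (and optionally scales) the
# gradient flowing back from the modality classifier. This pushes the
# feature extractor to *confuse* the modality classifier, encouraging a
# modality-invariant feature space. Names here are illustrative only.

def grl_forward(features):
    """Forward direction: pass features through unchanged."""
    return features

def grl_backward(grad_from_classifier, lam=1.0):
    """Backward direction: reverse the sign of each gradient component,
    scaled by lam, before it reaches the feature extractor."""
    return [-lam * g for g in grad_from_classifier]
```

In practice such a layer sits between the shared feature extractor and the modality classifier, so the classifier is trained to distinguish visible from infrared features while the extractor receives the opposite signal.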