Image-Image Domain Adaptation with Preserved Self-Similarity and Domain-Dissimilarity for Person Re-identification
- Publication Type:
- Conference Proceeding
- Citation:
- Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2018, pp. 994-1003
- Issue Date:
- 2018-12-14
This item is open access.
© 2018 IEEE. Person re-identification (re-ID) models trained on one domain often fail to generalize well to another. In this work, we present a 'learning via translation' framework. As a baseline, we translate the labeled images from the source to the target domain in an unsupervised manner and then train re-ID models on the translated images with supervised methods. Yet, as an essential part of this framework, unsupervised image-image translation suffers from the loss of source-domain label information during translation. Our motivation is two-fold. First, for each image, the discriminative cues contained in its ID label should be preserved after translation. Second, given that the two domains contain entirely different persons, a translated image should be dissimilar to any image of the target IDs. To this end, we propose to preserve two types of unsupervised similarities: 1) the self-similarity of an image before and after translation, and 2) the domain-dissimilarity of a translated source image and a target image. Both constraints are implemented in the similarity preserving generative adversarial network (SPGAN), which consists of a Siamese network and a CycleGAN. Through domain adaptation experiments, we show that images generated by SPGAN are better suited for domain adaptation and yield consistent and competitive re-ID accuracy on two large-scale datasets.
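As a concrete illustration of the two constraints, the sketch below implements a standard contrastive loss over Siamese embeddings: positive pairs (an image and its own translation) are pulled together to preserve self-similarity, while negative pairs (a translated source image and a real target image) are pushed apart by a margin to enforce domain-dissimilarity. This is a minimal sketch under common assumptions, not the authors' released code; the function names, margin value, and PyTorch framing are illustrative.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(emb_a, emb_b, is_positive, margin=2.0):
    """Contrastive loss over L2-normalized Siamese embeddings.

    is_positive = 1 for pairs that should stay similar
    (e.g., a source image and its translated version);
    is_positive = 0 for pairs that should be dissimilar
    (e.g., a translated source image and a real target image).
    """
    emb_a = F.normalize(emb_a, dim=1)
    emb_b = F.normalize(emb_b, dim=1)
    d = F.pairwise_distance(emb_a, emb_b)
    # Positive pairs: penalize any distance (self-similarity).
    pos = is_positive * d.pow(2)
    # Negative pairs: penalize only if closer than the margin (domain-dissimilarity).
    neg = (1 - is_positive) * torch.clamp(margin - d, min=0).pow(2)
    return (pos + neg).mean()

# Toy usage: 4 pairs of 128-d embeddings; first two positive, last two negative.
emb_a = torch.randn(4, 128)
emb_b = torch.randn(4, 128)
labels = torch.tensor([1.0, 1.0, 0.0, 0.0])
print(contrastive_loss(emb_a, emb_b, labels))
```

In the framework described above, such a loss would be trained jointly with the CycleGAN translation objective, so that the generator learns translations that keep each source identity distinct from all target identities.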