Scalable person re-identification: A benchmark
- Publication Type:
- Conference Proceeding
- Citation:
- Proceedings of the IEEE International Conference on Computer Vision (ICCV 2015), pp. 1116-1124
- Issue Date:
- 2015-02-17
Closed Access
Filename | Description | Size
---|---|---
07410490.pdf | Published version | 763.14 kB
Copyright Clearance Process
This item is closed access and not available.
© 2015 IEEE. This paper contributes a new high-quality dataset for person re-identification, named "Market-1501". Current datasets generally: 1) are limited in scale; 2) consist of hand-drawn bounding boxes, which are unavailable in realistic settings; and 3) have only one ground-truth and one query image per identity (a closed environment). To address these problems, the proposed Market-1501 dataset features three properties. First, it contains over 32,000 annotated bounding boxes, plus a distractor set of over 500K images, making it the largest person re-identification dataset to date. Second, images in the Market-1501 dataset are produced using the Deformable Part Model (DPM) as a pedestrian detector. Third, the dataset is collected in an open system, where each identity has multiple images under each camera. As a minor contribution, inspired by recent advances in large-scale image search, the paper proposes an unsupervised Bag-of-Words descriptor, viewing person re-identification as a special case of image search. Experiments show that the proposed descriptor yields competitive accuracy on the VIPeR, CUHK03, and Market-1501 datasets and scales to the 500K-image distractor set.
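To illustrate the retrieval view of re-identification described in the abstract, the sketch below shows a generic Bag-of-Words pipeline: local features are quantized against a codebook, pooled into a normalized histogram, and gallery images are ranked by similarity to a query. This is a minimal, hypothetical example with random data and hard assignment; it is not the paper's actual descriptor, codebook, or feature extraction.

```python
import numpy as np

def bow_descriptor(local_feats, codebook):
    """Quantize local features against a codebook and return an
    L2-normalized bag-of-words histogram (illustrative sketch only)."""
    # Hard-assign each local feature to its nearest codeword.
    dists = np.linalg.norm(local_feats[:, None, :] - codebook[None, :, :], axis=2)
    words = dists.argmin(axis=1)
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist

# Toy example: random "local features" and a random codebook
# stand in for real descriptors extracted from pedestrian images.
rng = np.random.default_rng(0)
codebook = rng.random((32, 8))          # 32 codewords, 8-D features
query = bow_descriptor(rng.random((50, 8)), codebook)
gallery = [bow_descriptor(rng.random((50, 8)), codebook) for _ in range(5)]

# Rank gallery images by cosine similarity (vectors are unit-normalized,
# so the dot product equals the cosine similarity).
scores = [float(query @ g) for g in gallery]
ranking = np.argsort(scores)[::-1]
```

Treating re-identification as search in this way lets the large 500K distractor set be handled with standard scalable retrieval machinery rather than pairwise metric learning over all identities.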