Robust Feature Rectification of Pretrained Vision Models for Object Recognition
- Publisher:
- Association for the Advancement of Artificial Intelligence (AAAI)
- Publication Type:
- Journal Article
- Citation:
- Proceedings of the 37th AAAI Conference on Artificial Intelligence, AAAI 2023, 2023, 37, (3), pp. 3796-3804
- Issue Date:
- 2023-06-27
Closed Access
Filename | Description | Size
---|---|---
25492-Article Text-29555-1-2-20230626.pdf | Published version | 1.06 MB
This item is closed access and not available.
Pretrained vision models for object recognition often suffer a dramatic performance drop under degradations unseen during training. In this work, we propose a RObust FEature Rectification module (ROFER) to improve the performance of pretrained models against such degradations. Specifically, ROFER first estimates the type and intensity of the degradation that corrupts the image features. It then leverages a Fully Convolutional Network (FCN) to rectify the degraded features by pulling them back toward clear features. ROFER is a general-purpose module that can address various degradations simultaneously, including blur, noise, and low contrast. Moreover, it can be plugged into pretrained models seamlessly to rectify degraded features without retraining the whole model. Furthermore, ROFER can be easily extended to address composite degradations by adopting a beam search algorithm to find the composition order. Evaluations on CIFAR-10 and Tiny-ImageNet demonstrate that the accuracy of ROFER is 5% higher than that of SOTA methods on different degradations. With respect to composite degradations, ROFER improves the accuracy of a pretrained CNN by 10% and 6% on CIFAR-10 and Tiny-ImageNet, respectively.
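The abstract mentions that composite degradations are handled by a beam search over the order in which single-degradation rectifications are composed. A minimal sketch of that idea, under stated assumptions (the `rectifiers` mapping and the `score` function are hypothetical stand-ins, not the authors' code; in the paper the score would come from the downstream classifier):

```python
def beam_search_order(feature, rectifiers, score, beam_width=2):
    """Find a high-scoring order in which to apply single-degradation rectifiers.

    feature:    the (degraded) feature to rectify
    rectifiers: dict mapping degradation name -> rectification function
    score:      function feature -> float, higher is better
                (e.g. a classifier's confidence; hypothetical here)
    """
    beams = [([], feature)]  # each beam: (order applied so far, current feature)
    for _ in range(len(rectifiers)):
        candidates = []
        for order, feat in beams:
            for name, fn in rectifiers.items():
                if name in order:
                    continue  # apply each rectifier at most once
                candidates.append((order + [name], fn(feat)))
        # keep only the top-`beam_width` partial orders by score
        candidates.sort(key=lambda c: score(c[1]), reverse=True)
        beams = candidates[:beam_width]
    return beams[0][0]  # best complete composition order found
```

With toy scalar "features", the search correctly prefers applying a rectifier whose effect compounds when done first, e.g. `beam_search_order(0.0, {"deblur": lambda x: x + 1, "denoise": lambda x: x * 2}, lambda x: x)` returns `["deblur", "denoise"]`.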