GrowCLIP: Data-aware Automatic Model Growing for Large-scale Contrastive Language-Image Pre-training

Publisher:
IEEE
Publication Type:
Conference Proceeding
Citation:
2023 IEEE/CVF International Conference on Computer Vision (ICCV), pp. 22121-22132
Issue Date:
2024-01-15
File:
1700586.pdf (Published version, Adobe PDF, 1.29 MB)
Abstract:
Cross-modal pre-training has shown impressive performance on a wide range of downstream tasks, benefiting from massive image-text pairs collected from the Internet. In practice, online data grow constantly, highlighting the importance of a pre-trained model's ability to learn from continuously growing data. Existing works on cross-modal pre-training mainly focus on training a network with a fixed architecture. However, it is impractical to limit the model capacity when considering the continuously growing nature of pre-training data in real-world applications. On the other hand, it is important to utilize the knowledge in the current model to obtain efficient training and better performance. To address the above issues, in this paper we propose GrowCLIP, a data-driven automatic model-growing algorithm for contrastive language-image pre-training with continuous image-text pairs as input. Specifically, we adopt a dynamic growth space and seek out the optimal architecture at each growth step to adapt to online learning scenarios. A shared encoder is proposed in our growth space to enhance the degree of cross-modal fusion. Besides, we explore the effect of growth in different dimensions, which could provide future references for the design of cross-modal model architectures. Finally, we employ parameter inheriting with momentum (PIM) to maintain the previous knowledge and address the issue of the local minimum dilemma. Compared with existing methods, GrowCLIP improves average top-1 accuracy on zero-shot image classification over 9 downstream tasks by 2.3%. For zero-shot image retrieval, GrowCLIP improves top-1 image-to-text recall on the Flickr30K dataset by 1.2%.
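As a rough illustration of the parameter-inheriting-with-momentum idea mentioned in the abstract, the sketch below blends weights from the previous (smaller) model into the matching parameters of a grown model, so that most of the prior knowledge is kept while the freshly initialized weights leave room to escape the previous local minimum. This is a minimal sketch only: the function name, the blending rule, and the momentum value are assumptions for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

def inherit_with_momentum(old_state, grown_model, momentum=0.9):
    """Blend inherited weights into a grown model (illustrative sketch).

    For every parameter whose name and shape match an entry in the old
    model's state dict, mix the inherited weight with the grown model's
    fresh initialization. The momentum value and blending rule here are
    assumptions, not the paper's exact PIM formulation.
    """
    with torch.no_grad():
        for name, param in grown_model.named_parameters():
            old = old_state.get(name)
            if old is not None and old.shape == param.shape:
                param.copy_(momentum * old + (1.0 - momentum) * param)
    return grown_model

# Hypothetical usage: a projection head in the grown model inherits
# from the corresponding head of the previous, smaller model.
old_head = nn.Linear(512, 512)
grown_head = nn.Linear(512, 512)  # newly initialized part of the grown model
grown_head = inherit_with_momentum(old_head.state_dict(), grown_head)
```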