Unsupervised Domain Adaptation Enhanced by Fuzzy Prompt Learning

Publisher:
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
Publication Type:
Journal Article
Citation:
IEEE Transactions on Fuzzy Systems, 2024, 32(7), pp. 4038-4048
Issue Date:
2024-01-01
File:
1723376.pdf (Published version, Adobe PDF, 1.97 MB)
Abstract:
Unsupervised domain adaptation (UDA) addresses the challenge of distribution shift between a labeled source domain and an unlabeled target domain by utilizing knowledge from the source. Traditional UDA methods mainly focus on single-modal scenarios, either vision or language, and thus do not fully exploit the advantages of multimodal representations. Vision-language models utilize multimodal information, applying prompt learning techniques to address target domain tasks. Motivated by recent advances in pretrained vision-language models, this article expands the UDA framework to incorporate multimodal approaches using fuzzy techniques. The adoption of fuzzy techniques, preferred over conventional domain adaptation methods, rests on two key aspects: 1) the nature of prompt learning is intrinsically linked to fuzzy logic, and 2) fuzzy techniques are better suited to processing soft information and exploiting inherent relationships both within and across domains. To this end, we propose UDA enhanced by fuzzy prompt learning (FUZZLE), a simple and effective method for aligning the source and target domains via domain-specific prompt learning. Specifically, we introduce a novel technique to enhance prompt learning in the target domain: it integrates fuzzy C-means clustering and a novel instance-level fuzzy vector into the prompt learning loss function, minimizing the distance between prompt cluster centers and instance prompts and thereby improving the prompt learning process. In addition, we propose a Kullback-Leibler (KL) divergence-based loss function with a fuzzification factor, designed to minimize the distribution discrepancy in the classification of similar cross-domain data and to align domain-specific prompts during training. We contribute an in-depth analysis to understand the effectiveness of FUZZLE. Extensive experiments demonstrate that our method achieves superior performance on standard UDA benchmarks.
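
The abstract describes two fuzzy loss components: a fuzzy C-means term over instance-level prompts and a KL divergence term with a fuzzification factor. The sketch below is a rough PyTorch illustration only; the function names, the temperature-style use of the fuzzification factor, and hyperparameters such as the fuzzifier m and the number of cluster centers are assumptions for illustration, not the authors' published implementation.

# Minimal, hypothetical sketch of the two loss terms described in the abstract.
# All names and design choices here are illustrative assumptions.
import torch
import torch.nn.functional as F


def fuzzy_cmeans_memberships(prompts: torch.Tensor, centers: torch.Tensor, m: float = 2.0):
    """Soft (fuzzy C-means style) memberships of instance prompts to prompt cluster centers.

    prompts: (N, D) instance-level prompt embeddings
    centers: (K, D) prompt cluster centers
    m:       fuzzifier (> 1); larger m gives softer memberships
    """
    # Pairwise squared distances between prompts and centers: (N, K)
    d2 = torch.cdist(prompts, centers).pow(2).clamp_min(1e-8)
    # Standard fuzzy C-means membership: u_ik proportional to d_ik^{-2/(m-1)}
    inv = d2.pow(-1.0 / (m - 1.0))
    return inv / inv.sum(dim=1, keepdim=True)


def fuzzy_prompt_loss(prompts: torch.Tensor, centers: torch.Tensor, m: float = 2.0):
    """Membership-weighted squared distance between instance prompts and cluster centers."""
    u = fuzzy_cmeans_memberships(prompts, centers, m)   # (N, K)
    d2 = torch.cdist(prompts, centers).pow(2)           # (N, K)
    return (u.pow(m) * d2).sum(dim=1).mean()


def kl_alignment_loss(src_logits: torch.Tensor, tgt_logits: torch.Tensor, fuzzifier: float = 1.5):
    """KL divergence between class distributions of similar cross-domain pairs,
    softened here by treating the fuzzification factor as a temperature (an assumption)."""
    p_src = F.softmax(src_logits / fuzzifier, dim=1)
    log_p_tgt = F.log_softmax(tgt_logits / fuzzifier, dim=1)
    return F.kl_div(log_p_tgt, p_src, reduction="batchmean")

In a training loop, terms like these would presumably be weighted and added to the supervised source-domain objective; the exact combination and weighting are not specified in the abstract.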