An Empirical Study Towards Prompt-Tuning for Graph Contrastive Pre-Training in Recommendations

Publication Type:
Conference Proceeding
Citation:
Advances in Neural Information Processing Systems, 2023, 36
Issue Date:
2023-01-01
Graph contrastive learning (GCL) has emerged as an effective technique for various graph learning tasks. It has been successfully applied in real-world recommender systems, where the contrastive loss and the downstream recommendation objective are combined to form the overall objective function. However, this approach deviates from the original GCL paradigm, which pre-trains graph embeddings without involving downstream training objectives. In this paper, we propose a novel framework, CPTPP, which enhances GCL-based recommender systems via prompt tuning, allowing us to fully exploit the advantages of the original GCL protocol. Specifically, we first summarize user profiles in graph recommender systems to automatically generate personalized user prompts. These prompts are then combined with pre-trained user embeddings for prompt tuning in downstream tasks, helping to bridge the gap between pre-training and downstream objectives. Extensive experiments on three benchmark datasets confirm the effectiveness of CPTPP against state-of-the-art baselines. Additionally, a visualization experiment shows that user embeddings generated by CPTPP have a more uniform distribution, indicating an improved capacity to model user preferences. The implementation code is available online for reproducibility.
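The core mechanism described in the abstract, generating a personalized prompt from a user profile and fusing it with a frozen, GCL pre-trained user embedding, can be illustrated with a minimal sketch. Everything below (the PromptTuner class, the prompt_generator and fuse modules, and all shapes) is a hypothetical PyTorch rendering under assumed interfaces, not the authors' released CPTPP implementation.

```python
import torch
import torch.nn as nn

class PromptTuner(nn.Module):
    """Minimal sketch of prompt tuning over pre-trained user embeddings.

    All names and shapes here are illustrative assumptions, not the
    actual CPTPP architecture.
    """

    def __init__(self, embed_dim: int, profile_dim: int):
        super().__init__()
        # Hypothetical prompt generator: maps a summarized user profile
        # (e.g., aggregated interaction features) to a personalized prompt.
        self.prompt_generator = nn.Sequential(
            nn.Linear(profile_dim, embed_dim),
            nn.ReLU(),
            nn.Linear(embed_dim, embed_dim),
        )
        # Fusion layer combining the prompt with the frozen pre-trained
        # embedding before the downstream recommendation head.
        self.fuse = nn.Linear(2 * embed_dim, embed_dim)

    def forward(self, pretrained_user_emb: torch.Tensor,
                user_profile: torch.Tensor) -> torch.Tensor:
        # Keep the GCL pre-trained embedding fixed; only the prompt
        # modules receive gradients from the downstream objective.
        frozen_emb = pretrained_user_emb.detach()
        prompt = self.prompt_generator(user_profile)
        return self.fuse(torch.cat([frozen_emb, prompt], dim=-1))


# Usage: tune only the prompt modules on the recommendation loss,
# leaving the pre-trained graph embeddings untouched.
tuner = PromptTuner(embed_dim=64, profile_dim=32)
user_emb = torch.randn(128, 64)   # pre-trained via GCL (assumed given)
profile = torch.randn(128, 32)    # summarized user-profile features
tuned_emb = tuner(user_emb, profile)
```

This separation mirrors the paper's stated motivation: the contrastive pre-training stage never sees the downstream objective, and only the lightweight prompt components are optimized afterwards.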