Counterfactual Explainable Conversational Recommendation

Publisher:
Institute of Electrical and Electronics Engineers
Publication Type:
Journal Article
Citation:
IEEE Transactions on Knowledge and Data Engineering, 2024, 36, (6), pp. 2388-2400
Issue Date:
2024-01-01
Conversational Recommender Systems (CRSs) fundamentally differ from traditional recommender systems: they interact with users in a conversational session to accurately capture users' current preferences and provide personalized recommendations. Although current CRSs achieve favorable recommendation performance, their explainability is still in its infancy. Most CRSs provide only coarse-grained explanations and fail to explore how minimal alterations would change the recommendation decisions for items. In this paper, we are the first to incorporate counterfactual techniques into CRS, proposing a Counterfactual Explainable Conversational Recommender (CECR) that enhances the recommendation model from a counterfactual perspective. Counterfactual explanations offer fine-grained reasons that explain users' real-time intentions, while the generated counterfactual samples augment the training dataset and thereby enhance recommendation performance. Specifically, CECR adaptively learns users' preferences from the conversation context and responds effectively to users' real-time feedback over multiple rounds of conversation. Furthermore, CECR actively generates counterfactual samples to augment the training set, leading to continual improvement in recommendation performance. Empirical experiments on three benchmark datasets show that CECR outperforms state-of-the-art CRSs in terms of both recommendation performance and explainability.
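The core idea of counterfactual sample generation can be illustrated with a minimal sketch. This is not the paper's CECR algorithm; it assumes a simple dot-product recommender and a hypothetical gradient-free search (`recommend`, `counterfactual_sample`, and the step size `eps` are illustrative names, not from the paper). It perturbs the recommended item's features just enough to flip the recommendation, yielding a counterfactual sample that could be added to the training set.

```python
import numpy as np

def recommend(user_pref, item_feats):
    # Score each item by the dot product with the user's preference
    # vector and return the index of the top-scoring item.
    scores = item_feats @ user_pref
    return int(np.argmax(scores))

def counterfactual_sample(user_pref, item_feats, eps=0.05, max_steps=200):
    """Search for a minimal perturbation of the recommended item's
    features that flips the recommendation to a different item.
    Illustrative sketch only: a fixed-step search, not the paper's
    learned counterfactual generation."""
    original = recommend(user_pref, item_feats)
    perturbed = item_feats.astype(float).copy()
    for _ in range(max_steps):
        # Nudge the recommended item's features against the user's
        # preference direction, one small step at a time.
        perturbed[original] -= eps * user_pref
        if recommend(user_pref, perturbed) != original:
            # The smallest tried perturbation that changes the decision:
            # a counterfactual sample for data augmentation.
            return perturbed, original
    return None, original

# Toy example: two items in a 2-D feature space.
user_pref = np.array([1.0, 0.0])
items = np.array([[0.9, 0.1],
                  [0.5, 0.5]])
cf_items, flipped_from = counterfactual_sample(user_pref, items)
```

In a training pipeline, the perturbed item row (here `cf_items[flipped_from]`) would be appended to the training data with the new label, augmenting the set with near-decision-boundary examples.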