Faithful and fair generative explainers for graph neural networks
- Publication Type: Thesis
- Issue Date: 2024
This item is open access.
Graph Neural Networks (GNNs) have demonstrated remarkable effectiveness across various real-world applications; however, their underlying mechanisms remain poorly understood. Explaining GNNs is crucial for understanding their decision processes, ensuring application safety, and enhancing model reliability. Research in this domain is broadly categorized into two approaches: factual explanation (FE) and counterfactual explanation (CFE). FE focuses on identifying the key subgraphs or features that contribute to a GNN's decisions, while CFE explores minimal modifications to input graphs that achieve a desired prediction. This thesis investigates FE and CFE explainers, addressing three critical research questions.
To generate FEs, the proposed GAN-GNNExplainer uses a Generative Adversarial Network (GAN) framework to refine explanations through generator-discriminator interactions. While it improves explanation accuracy, GAN-GNNExplainer struggles with reliability and fidelity on real-world data. To address this, the enhanced ACGAN-GNNExplainer incorporates an Auxiliary Classifier GAN, improving fidelity and outperforming existing methods on both synthetic and real-world datasets.
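The core objective such an explainer's generator optimizes can be illustrated in miniature. The sketch below is a simplification under assumed toy components: the "GNN" is a fixed linear scorer over edge features, the generator is a learned sigmoid edge mask, and the adversarial discriminator is replaced by a direct fidelity-plus-sparsity loss. The actual GAN-GNNExplainer and ACGAN-GNNExplainer architectures differ; this only shows how a mask can be refined so the explanatory subgraph reproduces the full graph's prediction.

```python
import numpy as np

# Toy stand-in for a trained GNN: a fixed linear scorer over 6 edge features.
w_gnn = np.array([2.0, -1.0, 0.5, 0.0, 1.5, -0.5])
edge_feats = np.array([1.0, 0.5, -1.0, 2.0, 0.8, 0.3])

def gnn_predict(feats, mask):
    # Prediction obtained from the masked (explanatory) subgraph.
    return float(w_gnn @ (feats * mask))

target = gnn_predict(edge_feats, np.ones(6))  # prediction on the full graph
lam = 0.05                                     # sparsity weight (assumed value)

logits = np.zeros(6)                           # generator's edge-mask logits
for _ in range(500):
    mask = 1 / (1 + np.exp(-logits))
    pred = gnn_predict(edge_feats, mask)
    # Loss: (pred - target)^2 + lam * sum(mask); gradient w.r.t. each mask entry.
    dmask = 2 * (pred - target) * (w_gnn * edge_feats) + lam
    # Chain rule through the sigmoid, then a gradient-descent step.
    logits -= 0.1 * dmask * mask * (1 - mask)

explanation = mask > 0.5  # edges retained in the factual explanation
```

After training, the mask keeps the edges whose contribution matters to the prediction and suppresses the irrelevant one (edge 3, whose weight is zero), while the masked prediction stays close to the original.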
On the CFE front, current methods often require extensive training data and fail to ensure fairness. To overcome these issues, the proposed fairCFE employs a deep decoder conditioned on predetermined predictions and optimizes fairness through a novel loss function. Extensive experiments demonstrate that fairCFE generates high-quality CFEs without requiring additional training data, establishing its superiority over baselines.