Enhancing Graph Neural Networks by a High-quality Aggregation of Beneficial Information.

Publisher: Elsevier BV
Publication Type: Journal Article
Citation: Neural Netw, 2021, 142, pp. 20-33
Issue Date: 2021-10
File: 1-s2.0-S0893608021001623-main.pdf (Published version, Adobe PDF, 963.8 kB)
Graph Neural Networks (GNNs), such as GCN, GraphSAGE, GAT, and SGC, have achieved state-of-the-art performance on a wide range of graph-based tasks. These models all rely on neighborhood aggregation, in which the embedding of each node is updated by aggregating the embeddings of its neighbors. However, not all information aggregated from neighbors is beneficial; in some cases, a portion of it can harm the downstream tasks. To achieve a high-quality aggregation of beneficial information, we propose EGAI (Enhancing Graph neural networks by a high-quality Aggregation of beneficial Information), a flexible method whose core idea is to filter out redundant and harmful information by removing specific edges during each training epoch. The practical and theoretical motivations, considerations, and strategies behind this method are discussed in detail. EGAI is a general method that can be combined with many backbone models (e.g., GCN, GraphSAGE, GAT, and SGC) to enhance their performance on the node classification task. In addition, EGAI slows the convergence toward over-smoothing that occurs as models are deepened. Extensive experiments on three real-world networks demonstrate that EGAI indeed improves the performance of both shallow and deep GNN models and, to some extent, mitigates over-smoothing. The code is available at https://github.com/liucoo/egai.
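The abstract's core mechanism, re-selecting and removing edges in every training epoch before neighborhood aggregation, can be illustrated with a minimal sketch. The edge-scoring rule below (cosine similarity of current node features) and all names (drop_low_benefit_edges, keep_ratio) are illustrative assumptions, not EGAI's actual procedure; consult the paper and the linked repository for the real method.

```python
import torch
import torch.nn.functional as F

def normalize_adj(adj):
    # Symmetric GCN normalization: D^{-1/2} (A + I) D^{-1/2}.
    a_hat = adj + torch.eye(adj.size(0))
    d_inv_sqrt = a_hat.sum(dim=1).pow(-0.5)
    return d_inv_sqrt.unsqueeze(1) * a_hat * d_inv_sqrt.unsqueeze(0)

def drop_low_benefit_edges(adj, h, keep_ratio=0.8):
    # Hypothetical filter: keep the top keep_ratio fraction of edges,
    # scored by cosine similarity of the current node features. This is
    # an assumed proxy for "beneficial" edges, not EGAI's criterion.
    src, dst = adj.nonzero(as_tuple=True)
    hn = F.normalize(h, dim=1)
    sim = hn @ hn.T
    k = max(1, int(keep_ratio * src.numel()))
    keep = sim[src, dst].topk(k).indices
    pruned = torch.zeros_like(adj)
    pruned[src[keep], dst[keep]] = 1.0
    return pruned

# Toy setup: 6 nodes, 8 features, 3 classes, one-layer GCN.
n, d, c = 6, 8, 3
x = torch.randn(n, d)
adj = (torch.rand(n, n) < 0.4).float()
adj = ((adj + adj.T) > 0).float().fill_diagonal_(0)
labels = torch.randint(0, c, (n,))
w = torch.nn.Parameter(torch.randn(d, c) * 0.1)
opt = torch.optim.Adam([w], lr=0.01)

for epoch in range(5):
    # Edges are re-selected every epoch, as the abstract describes.
    pruned = drop_low_benefit_edges(adj, x, keep_ratio=0.8)
    logits = normalize_adj(pruned) @ x @ w   # aggregate over the pruned graph
    loss = F.cross_entropy(logits, labels)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Because the pruned adjacency is redrawn each epoch, every node still sees its full neighborhood over the course of training, while any single update aggregates only over the currently retained edges.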