Hyperspectral Image Classification with Context-Aware Dynamic Graph Convolutional Network

Publisher:
IEEE - Institute of Electrical and Electronics Engineers Inc.
Publication Type:
Journal Article
Citation:
IEEE Transactions on Geoscience and Remote Sensing, 2021, 59, (1), pp. 597-612
Issue Date:
2021-01-01
In hyperspectral image (HSI) classification, spatial context has demonstrated its significance in achieving promising performance. However, conventional spatial-context-based methods simply assume that spatially neighboring pixels should correspond to the same land-cover class, so they often fail to correctly discover the contextual relations among pixels in complex situations, leading to imperfect classification results on irregular or inhomogeneous regions such as class boundaries. To address this deficiency, we develop a new HSI classification method based on the recently proposed graph convolutional network (GCN), which can flexibly encode the relations among arbitrarily structured non-Euclidean data. Different from the traditional GCN, our method adopts two novel strategies to further exploit the contextual relations for accurate HSI classification. First, since the receptive field of the traditional GCN is often limited to a fairly small neighborhood, we propose to capture long-range contextual relations in HSI by performing successive graph convolutions on a learned region-induced graph that is transformed from the original 2-D image grids. Second, we refine the graph edge weights and the connective relationships among image regions simultaneously by learning an improved similarity measurement and an 'edge filter,' so that the graph can be gradually refined to adapt to the representations generated by each graph convolutional layer. Such an updated graph will, in turn, yield more faithful region representations, and vice versa. Experiments carried out on four real-world benchmark data sets demonstrate the effectiveness of the proposed method.
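The sketch below illustrates the general idea described in the abstract: a graph convolution over region-level features whose edge weights are re-estimated from the current layer's representations via a learned similarity, masked by the initial region connectivity. This is not the authors' released implementation; the class name DynamicGraphConv, the learned-metric layer, and the toy adjacency are illustrative assumptions (written in PyTorch).

```python
# Minimal PyTorch sketch of a graph convolution with a dynamically refined
# adjacency. NOT the paper's official code; names and details are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DynamicGraphConv(nn.Module):
    """One graph-convolution layer over region (e.g., superpixel) features.

    Edge weights are re-estimated from the layer's input features with a
    learned similarity, then masked by the initial region adjacency so that
    only spatially connected regions exchange information.
    """

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.proj = nn.Linear(in_dim, out_dim)             # feature transform
        self.sim = nn.Linear(in_dim, in_dim, bias=False)   # learned metric

    def forward(self, x, adj_mask):
        # x: (N, in_dim) region features; adj_mask: (N, N) 0/1 connectivity
        z = self.sim(x)                                    # embed for similarity
        logits = (z @ z.t()) / z.shape[-1] ** 0.5          # pairwise scores
        logits = logits.masked_fill(adj_mask == 0, float('-inf'))
        adj = torch.softmax(logits, dim=-1)                # row-normalized weights
        return F.relu(adj @ self.proj(x))                  # propagate + transform


# Toy usage: 6 regions, 20-dim spectral features, chain connectivity.
if __name__ == "__main__":
    n, d = 6, 20
    x = torch.randn(n, d)
    adj = torch.eye(n)
    idx = torch.arange(n - 1)
    adj[idx, idx + 1] = adj[idx + 1, idx] = 1.0            # neighboring regions
    layer = DynamicGraphConv(d, 16)
    print(layer(x, adj).shape)                             # torch.Size([6, 16])
```

Stacking several such layers lets information propagate across successively larger region neighborhoods, which is one plausible way to realize the long-range context and per-layer graph refinement the abstract describes.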