Abstract:
Graph Neural Networks (GNNs) have shown remarkable capability in processing graph-structured data. Recent studies have found that most GNNs rely on the homophily assumption, leading to unsatisfactory performance on heterophilous graphs. While some methods have been developed to handle heterophilous links, they lack precise estimation of high-order relationships between nodes. As a result, excessive interfering information may be aggregated during message propagation, degrading the representational quality of the learned features. In this work, we propose a Disparity-induced Structural Refinement (DSR) method that enables adaptive and selective message propagation in GNNs, enhancing representation learning on heterophilous graphs. We theoretically analyze the necessity of structural refinement during message passing, grounded in a derived error bound for node classification. To this end, we design a node-level disparity score that combines both feature and structural information, reflecting the connectivity degree of multi-hop neighbor nodes. Based on the disparity score, we adjust the aggregation of neighbor nodes, thereby mitigating the impact of irrelevant information during message passing. Experimental results demonstrate that our method achieves competitive performance, mostly outperforming advanced methods on both homophilous and heterophilous datasets. © 2025 Copyright held by the owner/author(s).
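The abstract describes the core mechanism at a high level: a per-edge disparity score combining feature and structural similarity, which then gates neighbor aggregation during message passing. A minimal NumPy sketch of this general idea follows; the specific scoring function (cosine feature similarity plus Jaccard neighborhood overlap), the equal weighting, and the threshold `tau` are illustrative assumptions, not the paper's actual DSR formulation.

```python
import numpy as np

def disparity_scores(X, A):
    """Toy per-edge score combining feature similarity (cosine) with
    structural similarity (Jaccard overlap of neighborhoods).
    Illustrative stand-in for a disparity score, not the paper's exact form."""
    Xn = X / (np.linalg.norm(X, axis=1, keepdims=True) + 1e-8)
    feat_sim = Xn @ Xn.T                       # pairwise cosine similarity
    deg = A.sum(1)
    common = A @ A.T                           # shared-neighbor counts
    union = deg[:, None] + deg[None, :] - common
    struct_sim = common / np.maximum(union, 1)
    score = 0.5 * feat_sim + 0.5 * struct_sim  # equal weighting (assumed)
    return score * A                           # keep only existing edges

def refined_aggregate(X, A, tau=0.2):
    """One round of selective message passing: neighbors whose score
    falls below the threshold tau are dropped before aggregation."""
    S = disparity_scores(X, A)
    W = S * (S >= tau)                         # prune low-score edges
    W = W / np.maximum(W.sum(1, keepdims=True), 1e-8)  # row-normalize
    return W @ X                               # aggregate kept neighbors
```

With `tau = 0`, all edges are kept and this reduces to similarity-weighted mean aggregation; raising `tau` progressively removes dissimilar (likely heterophilous) neighbors from the message-passing step.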
Year: 2025
Page: 2209-2221
Language: English