Abstract:
Multi-view learning based on graph convolutional networks boosts performance by incorporating diverse perspectives, and has led to significant achievements and successful applications across various academic and practical fields. However, multi-view graph convolutional networks face substantial computational challenges on large-scale graphs. To address this limitation, graph condensation has emerged as a promising direction: it creates a smaller synthetic graph on which networks can be trained efficiently while preserving performance. Moreover, previous studies have demonstrated that graph compression can yield encouraging performance in graph learning. To this end, we introduce graph condensation into multi-view learning to accelerate computation. This approach not only reduces training costs significantly but also achieves sub-linear time complexity and memory consumption during network training. Furthermore, we propose a gradient-flow-induced graph convolutional network derived from partial differential equations, which offers theoretical guarantees and potential new insights for constructing graph-related network architectures with transparent model interpretability. Extensive experiments on seven real-world multi-view datasets demonstrate that the proposed method sharply decreases model training time while maintaining competitive multi-view semi-supervised classification performance.
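This record gives no implementation details of the paper's gradient-flow network, so the snippet below is only a rough, hypothetical sketch of the PDE connection the abstract alludes to (all function names are illustrative, not the authors' API). It discretizes the graph heat equation dX/dt = -LX, which is the gradient flow of the Dirichlet energy E(X) = 1/2 tr(X^T L X); each explicit Euler step smooths node features along edges, acting like a parameter-free GCN propagation. The paper's actual architecture and its condensation procedure may differ.

```python
import torch

def normalized_laplacian(adj: torch.Tensor) -> torch.Tensor:
    # Symmetric normalized Laplacian: L = I - D^{-1/2} A D^{-1/2}
    deg = adj.sum(dim=1)
    d_inv_sqrt = deg.clamp(min=1e-12).pow(-0.5)
    a_norm = d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]
    return torch.eye(adj.size(0)) - a_norm

def heat_flow_propagation(x: torch.Tensor, lap: torch.Tensor,
                          tau: float = 0.1, steps: int = 4) -> torch.Tensor:
    # Explicit-Euler discretization of the graph heat equation
    #   dX/dt = -L X,
    # the gradient flow of the Dirichlet energy E(X) = 0.5 * tr(X^T L X).
    # Each step smooths features over the graph, like one GCN propagation.
    for _ in range(steps):
        x = x - tau * (lap @ x)
    return x

# Toy usage: 5 nodes on a ring with 3-dimensional features.
adj = torch.zeros(5, 5)
for i in range(5):
    adj[i, (i + 1) % 5] = adj[(i + 1) % 5, i] = 1.0
x = torch.randn(5, 3)
x_smooth = heat_flow_propagation(x, normalized_laplacian(adj))
```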
Source:
NEUROCOMPUTING
ISSN: 0925-2312
Year: 2025
Volume: 656
Impact Factor: 5.500 (JCR@2023)
ESI Highly Cited Papers on the List: 0