Abstract:
The sparsity and irregular data distribution in graph convolutional networks (GCNs) challenge efficient inference. Previous FPGA-based GCN accelerators have implemented targeted designs for efficient memory access and matrix computation. However, they often overlook the symmetric sparse matrices (SSM) involved in GCN calculations. Specifically, the adjacency matrices in GCNs lead to numerous symmetric sparse matrix multiplications, presenting significant potential for data reuse in storage format, memory access strategy, and computation method. To address these challenges, this work proposes a novel GCN accelerator that improves both data storage and access efficiency while enhancing computational performance. First, this paper introduces a compression format compatible with both regular and symmetric sparse matrices, called Special Packet-level Column-only Coordinate-list (SPCOO). SPCOO reduces memory consumption while enhancing data reuse. Additionally, this work proposes a specialized processing element (PE) that handles both symmetric sparse matrix multiplication (SSpMM) and regular sparse matrix multiplication (SpMM); all computations are executed on this unified PE to improve computational efficiency. Finally, the proposed accelerator, SSM-GCN, was deployed and validated on an Alveo U50 accelerator card. Experimental results show that the proposed SPCOO format is compatible with symmetric sparse matrices and achieves the lowest storage overhead among the compared compression formats. SSM-GCN demonstrates an average inference speedup of 230.75× over the CPU and 10.37× over the GPU. In terms of energy efficiency, SSM-GCN achieves an average improvement of 2748.27× compared to the GPU. Furthermore, compared to state-of-the-art FPGA-based GCN accelerators, SSM-GCN improves DSP efficiency by 3.81×. © 2025 The Authors
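The abstract does not detail the SPCOO layout, so the following is only a minimal Python sketch of the general data-reuse idea behind SSpMM, not the paper's format: because an undirected graph's adjacency matrix satisfies A = Aᵀ, storing only the lower triangle (plus diagonal) roughly halves memory, and each stored off-diagonal nonzero drives two multiply-accumulates. The function name and the COO-style arrays here are hypothetical illustrations.

```python
# Sketch (assumption, not the paper's SPCOO): SSpMM from the lower
# triangle of a symmetric sparse matrix A. Each off-diagonal entry
# A[r][c] is reused for its symmetric counterpart A[c][r].
import numpy as np

def sspmm_lower_triangle(rows, cols, vals, X):
    """Compute Y = A @ X given only lower-triangle entries of symmetric A.

    rows, cols, vals: COO-style arrays with rows[i] >= cols[i].
    X: dense feature matrix of shape (n, f).
    """
    Y = np.zeros_like(X)
    for r, c, v in zip(rows, cols, vals):
        Y[r] += v * X[c]          # contribution of A[r][c]
        if r != c:                # symmetric counterpart A[c][r]
            Y[c] += v * X[r]      # reuses the same stored nonzero
    return Y

# Tiny usage example: 3-node undirected graph, symmetric adjacency
#     A = [[0, 1, 1],
#          [1, 0, 0],
#          [1, 0, 2]]
rows = np.array([1, 2, 2])        # lower-triangle coordinates only
cols = np.array([0, 0, 2])
vals = np.array([1.0, 1.0, 2.0])
X = np.arange(6, dtype=float).reshape(3, 2)

A = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 2]], dtype=float)
assert np.allclose(sspmm_lower_triangle(rows, cols, vals, X), A @ X)
```

On hardware, this pattern means each fetched nonzero can feed two PE operations, which is the kind of storage and reuse benefit the abstract attributes to exploiting symmetric sparse matrices.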
Source:
Journal of Engineering Research (Kuwait)
ISSN: 2307-1877
Year: 2025
Impact Factor: 0.900 (JCR@2023)