
Author:

Zhuang, Jianzhi [1] | Li, Jin [2] | Shi, Chenjunhao [3] | Lin, Xinyi [4] | Fu, Yang-Geng [5]

Indexed by:

EI, Scopus, SCIE

Abstract:

Graph representation learning is a crucial area in machine learning, with widespread applications in social networks, recommendation systems, and traffic flow prediction. Recently, Graph Transformers have emerged as powerful tools for this purpose, garnering significant attention. In this work, we observe a fundamental issue in previous Graph Transformers: they overlook the scale-related information gap and often employ an identical attention computation method for node interactions at different scales, leading to suboptimal model performance. To address this, we propose a Multi-Scale Attention Graph Transformer (MSA-GT) that enables each node to conduct adaptive interactions conditioned on different scales from both local and global perspectives. Specifically, MSA-GT guides several attention mechanisms to focus on individual scales and then performs customized combinations via an attention-based fusion module, thereby obtaining much more semantically fine-grained node representations. Despite the potential of the above design, we still observe over-fitting to some extent, which is an atypical challenge for training Graph Transformers. We propose two additional technical components to prevent over-fitting and further improve performance. We first introduce a path-based pruning strategy to reduce ineffective attention interactions, facilitating more accurate relevant-node selection. Additionally, we propose a Heterophilous Curriculum Augmentation (HCA) module, which gradually increases the training difficulty, forming a weak-to-strong regularization schema and thereby enhancing the model's generalization ability step by step. Extensive experiments show that our method outperforms many state-of-the-art methods on eight public graph benchmarks, proving its effectiveness.
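The core idea the abstract describes (one attention mechanism per scale, then an attention-based fusion of the per-scale outputs) can be sketched in code. The following is a minimal, hypothetical PyTorch sketch of that idea only; it is not the authors' MSA-GT implementation, and all class names, mask conventions, and tensor shapes are illustrative assumptions (the path-based pruning strategy and the HCA module are omitted).

```python
# Hypothetical sketch of multi-scale attention with attention-based fusion,
# loosely following the abstract's description. NOT the authors' MSA-GT code;
# names, shapes, and the mask convention are illustrative assumptions.
import torch
import torch.nn as nn

class MultiScaleAttentionSketch(nn.Module):
    def __init__(self, dim: int, num_scales: int = 3, num_heads: int = 4):
        super().__init__()
        # One attention module per scale (e.g., 1-hop, 2-hop, global).
        self.scale_attns = nn.ModuleList(
            nn.MultiheadAttention(dim, num_heads, batch_first=True)
            for _ in range(num_scales)
        )
        # Attention-based fusion: score each scale's output per node.
        self.fusion_score = nn.Linear(dim, 1)

    def forward(self, x, scale_masks):
        # x: (batch, num_nodes, dim)
        # scale_masks[s]: (num_nodes, num_nodes) boolean mask for scale s,
        # True = pair blocked (e.g., outside the k-hop neighborhood).
        outs = []
        for attn, mask in zip(self.scale_attns, scale_masks):
            h, _ = attn(x, x, x, attn_mask=mask)
            outs.append(h)
        h_scales = torch.stack(outs, dim=2)             # (B, N, S, D)
        w = self.fusion_score(h_scales).softmax(dim=2)  # per-node scale weights
        return (w * h_scales).sum(dim=2)                # fused representations

if __name__ == "__main__":
    B, N, D, S = 2, 8, 16, 3
    x = torch.randn(B, N, D)
    # Toy masks: allow all interactions at every scale for this demo.
    masks = [torch.zeros(N, N, dtype=torch.bool) for _ in range(S)]
    out = MultiScaleAttentionSketch(D, num_scales=S)(x, masks)
    print(out.shape)  # torch.Size([2, 8, 16])
```

The per-node softmax over scale outputs is one plausible reading of the "customized combinations via an attention-based fusion module" mentioned above: each node weights local and global views independently rather than using a fixed mixture.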

Keyword:

Curriculum learning; Graph Transformer; Multi-scale attention; Node classification; Representation learning

Community:

  • [ 1 ] [Zhuang, Jianzhi]Fuzhou Univ, Coll Comp & Data Sci, Fuzhou 350108, Peoples R China
  • [ 2 ] [Li, Jin]Hong Kong Univ Sci & Technol Guangzhou, AI Thrust, Informat Hub, Guangzhou, Peoples R China
  • [ 3 ] [Shi, Chenjunhao]Fuzhou Univ, Coll Comp & Data Sci, Fuzhou 350108, Peoples R China
  • [ 4 ] [Lin, Xinyi]Fuzhou Univ, Coll Comp & Data Sci, Fuzhou 350108, Peoples R China
  • [ 5 ] [Fu, Yang-Geng]Fuzhou Univ, Coll Comp & Data Sci, Fuzhou 350108, Peoples R China

Reprint Author's Address:

  • [Fu, Yang-Geng]Fuzhou Univ, Coll Comp & Data Sci, Fuzhou 350108, Peoples R China


Source:

KNOWLEDGE-BASED SYSTEMS

ISSN: 0950-7051

Year: 2025

Volume: 309

Impact Factor: 7.200 (JCR@2023)


ESI Highly Cited Papers on the List: 0

