
Author:

Zhang, Chun-Yang (Scholars: 张春阳) [1] | Fang, Wu-Peng [2] | Cai, Hai-Chun [3] | Chen, C. L. Philip [4] | Lin, Yue-Na [5]

Indexed by:

EI; Scopus; SCIE

Abstract:

Information aggregation and propagation over networks via graph neural networks (GNNs) play an important role in node and graph representation learning. However, current GNNs rely on computation with a fixed adjacency matrix, suffer from the over-smoothing problem, and struggle to stack multiple layers for high-level representations. In contrast, the Transformer computes an importance score for each node to learn its embedding via the attention mechanism and has achieved great success in many natural language processing (NLP) and computer vision (CV) tasks. However, the Transformer is difficult to extend to graphs, as its input and output must have the same dimension, and allocating attention over a large-scale graph becomes intractable due to distractions. Moreover, most graph Transformers are trained in a supervised manner, which consumes additional resources to annotate samples with potentially wrong labels and limits the generalization of the learned representations. Therefore, this article builds a new Sparse Graph Transformer with Contrastive learning for graph representation learning, called SGTC. Specifically, we first employ centrality measures to remove redundant topological information from the input graph according to the influence of nodes and edges, then perturb the pruned graph to obtain two different augmentation views, and learn node representations in a contrastive manner. In addition, a novel sparse attention mechanism is proposed to capture the structural features of graphs, which effectively saves memory and training time. SGTC produces low-dimensional, high-order node representations that generalize well across multiple tasks. The proposed model is evaluated on three downstream tasks over six networks, and the experimental results confirm its superior performance against state-of-the-art baselines.
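
A minimal sketch of the pruning-and-augmentation idea described in the abstract: score edges with a centrality measure, keep the most influential ones, then perturb the pruned graph to produce two contrastive views. The function names, the choice of degree centrality, the keep ratio, and the random edge-dropping augmentation are illustrative assumptions, not the authors' actual SGTC implementation.

    # Illustrative sketch only; not the authors' SGTC code.
    import random
    import networkx as nx

    def prune_by_centrality(graph: nx.Graph, keep_ratio: float = 0.8) -> nx.Graph:
        """Keep the highest-scoring edges, ranked by endpoint degree centrality."""
        node_score = nx.degree_centrality(graph)
        ranked = sorted(
            graph.edges(),
            key=lambda e: node_score[e[0]] + node_score[e[1]],
            reverse=True,
        )
        kept = ranked[: int(len(ranked) * keep_ratio)]
        pruned = nx.Graph()
        pruned.add_nodes_from(graph.nodes(data=True))
        pruned.add_edges_from(kept)
        return pruned

    def make_views(graph: nx.Graph, drop_prob: float = 0.1):
        """Create two stochastic augmentation views by random edge dropping."""
        def perturb() -> nx.Graph:
            view = graph.copy()
            dropped = [e for e in view.edges() if random.random() < drop_prob]
            view.remove_edges_from(dropped)
            return view
        return perturb(), perturb()

    if __name__ == "__main__":
        g = nx.karate_club_graph()
        pruned = prune_by_centrality(g, keep_ratio=0.8)
        view_a, view_b = make_views(pruned, drop_prob=0.1)
        print(len(g.edges()), len(pruned.edges()), len(view_a.edges()), len(view_b.edges()))

In the paper, the two views would then feed an encoder trained with a contrastive objective; the sketch stops at view construction.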

Keyword:

Graph pruning; graph representation learning; graph Transformer; sparse attention; unsupervised learning

Community:

  • [ 1 ] [Zhang, Chun-Yang]Fuzhou Univ, Coll Comp & Data Sci, Fuzhou 350002, Peoples R China
  • [ 2 ] [Fang, Wu-Peng]Fuzhou Univ, Coll Comp & Data Sci, Fuzhou 350002, Peoples R China
  • [ 3 ] [Cai, Hai-Chun]Fuzhou Univ, Coll Comp & Data Sci, Fuzhou 350002, Peoples R China
  • [ 4 ] [Lin, Yue-Na]Fuzhou Univ, Coll Comp & Data Sci, Fuzhou 350002, Peoples R China
  • [ 5 ] [Chen, C. L. Philip]South China Univ Technol, Sch Comp Sci & Engn, Guangzhou 510006, Guangdong, Peoples R China

Reprint Author's Address:

  • [Fang, Wu-Peng] Fuzhou Univ, Coll Comp & Data Sci, Fuzhou 350002, Peoples R China
  • [Cai, Hai-Chun] Fuzhou Univ, Coll Comp & Data Sci, Fuzhou 350002, Peoples R China


Source:

IEEE TRANSACTIONS ON COMPUTATIONAL SOCIAL SYSTEMS

ISSN: 2329-924X

Year: 2022

Issue: 1

Volume: 11

Page: 892-904

Impact Factor: 5.0 (JCR@2022); 4.500 (JCR@2023)

ESI Discipline: COMPUTER SCIENCE

ESI HC Threshold: 61

JCR Journal Grade: 1

CAS Journal Grade: 2

Cited Count:

WoS CC Cited Count: 1

SCOPUS Cited Count: 2

ESI Highly Cited Papers on the List: 0

