Publication Search

Query: Scholar name: 傅仰耿 (Fu Yang-Geng)

DWSSA: Alleviating over-smoothness for deep Graph Neural Networks SCIE
Journal article | 2024, 174 | NEURAL NETWORKS

Abstract :

Graph Neural Networks (GNNs) have demonstrated great potential for achieving outstanding performance in various graph-related tasks, e.g., graph classification and link prediction. However, most of them suffer from the following issue: shallow networks capture very limited knowledge. Prior works design deep GNNs with more layers to solve the issue, which however introduces a new challenge, i.e., the infamous over-smoothness. Over-emphasis on node features in graph representations, combined with reliance on a static graph structure with uniform weights, is a key cause of the over-smoothness issue. To alleviate the issue, this paper proposes a Dynamic Weighting Strategy (DWS) for addressing over-smoothness. We first employ Fuzzy C-Means (FCM) to cluster all nodes into several groups and obtain each node's fuzzy assignment, based on which a novel metric function is devised for dynamically adjusting the aggregation weights. This dynamic weighting strategy enables not only intra-cluster interactions but also inter-cluster aggregations, which well addresses the undifferentiated aggregation caused by uniform weights. Based on DWS, we further design a Structure Augmentation (SA) step for addressing the underutilization of the graph structure, where some potentially meaningful connections (i.e., edges) are added to the original graph structure via a parallelizable KNN algorithm. Overall, the optimized Dynamic Weighting Strategy with Structure Augmentation (DWSSA) alleviates over-smoothness by reducing noisy aggregations and utilizing topological knowledge. Extensive experiments on eleven homophilous or heterophilous graph benchmarks demonstrate the effectiveness of our proposed method DWSSA in alleviating over-smoothness and enhancing deep GNNs' performance.
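The two components described above can be pictured in a few lines: cluster nodes with fuzzy C-means, then weight each edge by the overlap of its endpoints' fuzzy assignments, so intra-cluster edges aggregate strongly and inter-cluster edges weakly. This is only a minimal sketch under stated assumptions (a plain FCM loop and a dot-product overlap metric); the paper's actual metric function and parallel KNN augmentation are not reproduced, and all function names here are invented for illustration.

```python
import numpy as np

def fuzzy_cmeans(X, c=2, m=2.0, iters=50, seed=0):
    # Plain fuzzy C-means; returns the membership matrix U with shape (n, c),
    # where each row is a node's fuzzy assignment over the c clusters.
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]              # (c, d)
        dist = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-9
        U = dist ** (-2.0 / (m - 1))
        U /= U.sum(axis=1, keepdims=True)
    return U

def dynamic_edge_weights(U, edges):
    # Weight an edge by how much the endpoints' fuzzy assignments overlap:
    # large for intra-cluster edges, small for inter-cluster ones.
    return np.array([float(U[i] @ U[j]) for i, j in edges])
```

On two well-separated groups of points, an edge inside a group receives a much larger weight than an edge across groups, which is the differentiated aggregation the abstract argues for.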

Keyword :

Clustering; Deep graph neural networks; Node classification; Over-smoothness; Structure augmentation

Cite:

GB/T 7714: Zhang, Qirong, Li, Jin, Ye, Qingqing, et al. DWSSA: Alleviating over-smoothness for deep Graph Neural Networks [J]. NEURAL NETWORKS, 2024, 174.
MLA: Zhang, Qirong, et al. "DWSSA: Alleviating over-smoothness for deep Graph Neural Networks". NEURAL NETWORKS 174 (2024).
APA: Zhang, Qirong, Li, Jin, Ye, Qingqing, Lin, Yuxi, Chen, Xinlong, Fu, Yang-Geng. DWSSA: Alleviating over-smoothness for deep Graph Neural Networks. NEURAL NETWORKS, 2024, 174.

Curriculum-Enhanced Residual Soft An-Isotropic Normalization for Over-Smoothness in Deep GNNs EI
Conference paper | 2024, 38 (12), 13528-13536 | 38th AAAI Conference on Artificial Intelligence, AAAI 2024

Abstract :

Despite graph neural networks' significant performance gains over many classic techniques in various graph-related downstream tasks, their successes are restricted to shallow models due to over-smoothness and the difficulty of optimization, among other issues. In this paper, to alleviate the over-smoothing issue, we propose a soft graph normalization method to preserve the diversity of node embeddings and prevent indiscrimination due to possible over-closeness. Combined with residual connections, we analyze why the method can effectively capture the knowledge in both input graph structures and node features even in deep networks. Additionally, inspired by curriculum learning, which learns easy examples before hard ones, we propose a novel label-smoothing-based learning framework to enhance the optimization of deep GNNs. The framework iteratively smooths labels in an auxiliary graph and constructs many gradually less-smooth tasks for extracting increasingly complex knowledge and gradually discriminating nodes from coarse to fine. The method arguably reduces the risk of overfitting and generalizes better. Finally, extensive experiments demonstrate the effectiveness and potential of the proposed model and learning framework through comparison with twelve existing baselines, including state-of-the-art methods, on twelve real-world node classification benchmarks.
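The idea of a soft (partial) normalization combined with a residual connection can be sketched as follows. The blend coefficients `s` and `alpha`, and the choice of column-wise standardization, are assumptions of this illustration rather than the paper's formulation.

```python
import numpy as np

def soft_residual_norm(H, H_in, s=0.5, alpha=0.5, eps=1e-9):
    # Soft normalization: blend each embedding with its centered,
    # variance-normalized version, so node diversity is regularized
    # without forcing a fully isotropic distribution; then mix in a
    # residual connection back to the layer input H_in.
    Hn = (H - H.mean(axis=0)) / (H.std(axis=0) + eps)
    return alpha * (s * Hn + (1 - s) * H) + (1 - alpha) * H_in
```

With `s=0` the layer output passes through untouched; with `s=1` it is fully standardized; intermediate values interpolate, which is the "softness" that preserves embedding diversity.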

Keyword :

Curricula; Graphic methods; Graph neural networks; Graph structures; Graph theory; Iterative methods; Learning systems

Cite:

GB/T 7714: Li, Jin, Zhang, Qirong, Xu, Shuling, et al. Curriculum-Enhanced Residual Soft An-Isotropic Normalization for Over-Smoothness in Deep GNNs [C]. 2024: 13528-13536.
MLA: Li, Jin, et al. "Curriculum-Enhanced Residual Soft An-Isotropic Normalization for Over-Smoothness in Deep GNNs". (2024): 13528-13536.
APA: Li, Jin, Zhang, Qirong, Xu, Shuling, Chen, Xinlong, Guo, Longkun, Fu, Yang-Geng. Curriculum-Enhanced Residual Soft An-Isotropic Normalization for Over-Smoothness in Deep GNNs. (2024): 13528-13536.

A novel extended rule-based system based on K-Nearest Neighbor graph SCIE
Journal article | 2024, 662 | INFORMATION SCIENCES
WoS CC Cited Count: 1

Abstract :

The Belief Rule-Based (BRB) system faces the rule combination explosion issue, making it challenging to construct the rule base efficiently. The Extended Belief Rule-Based (EBRB) system offers a solution to this problem by using data-driven methods. However, using the EBRB system requires traversing the entire rule base, which can be time-consuming and can activate many irrelevant rules, leading to incorrect decisions. Existing search optimization methods can somewhat alleviate this issue, but they have limitations. Moreover, the calculation of the rule activation weight only considers the similarity between the input data and a single rule, ignoring the influence of rule linkage. To address these problems, we propose a new EBRB system based on the K-Nearest Neighbor graph index (Graph-EBRB). We introduce the Hierarchical Navigable Small World (HNSW) algorithm to create the K-Nearest Neighbor graph index of the EBRB system. This index allows us to efficiently search and activate a set of key rules. We also propose a new activation weight calculation method based on the Graph Convolutional Neural Network (GCN), and we optimize system performance using a parameter learning strategy. We conduct comprehensive experiments on 14 commonly used public data sets, and the results show that the Graph-EBRB system significantly improves the reasoning efficiency and accuracy of the EBRB system. Finally, we apply the Graph-EBRB system to tree disease identification and achieve excellent classification performance, identifying over 90% of the diseased trees in the complete dataset.
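The core efficiency idea, activating only a few retrieved rules instead of traversing the whole rule base, can be sketched as below. Brute-force nearest-neighbor search stands in for the HNSW index, and the similarity-to-weight mapping is a placeholder assumption; the paper's GCN-based activation weights are not reproduced here.

```python
import numpy as np

def activate_rules(x, antecedents, k=3):
    # Retrieve only the k nearest rules (a brute-force stand-in for an
    # HNSW graph index) and turn their distances into normalized
    # activation weights, instead of scoring every rule in the base.
    dist = np.linalg.norm(antecedents - x, axis=1)
    idx = np.argsort(dist)[:k]          # indices of the k key rules
    sim = 1.0 / (1.0 + dist[idx])       # placeholder similarity measure
    return idx, sim / sim.sum()
```

A production system would replace the `argsort` scan with an approximate index (e.g., an HNSW library) so retrieval stays sub-linear in the number of rules.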

Keyword :

Extended belief rule-based system; Graph convolution neural network; Hierarchical navigable small world graph; K-Nearest Neighbor graph

Cite:

GB/T 7714: Fu, Yang-Geng, Lin, Xin-Yi, Fang, Geng-Chao, et al. A novel extended rule-based system based on K-Nearest Neighbor graph [J]. INFORMATION SCIENCES, 2024, 662.
MLA: Fu, Yang-Geng, et al. "A novel extended rule-based system based on K-Nearest Neighbor graph". INFORMATION SCIENCES 662 (2024).
APA: Fu, Yang-Geng, Lin, Xin-Yi, Fang, Geng-Chao, Li, Jin, Cai, Hong-Yi, Gong, Xiao-Ting, et al. A novel extended rule-based system based on K-Nearest Neighbor graph. INFORMATION SCIENCES, 2024, 662.

Exploiting negative correlation for unsupervised anomaly detection in contaminated time series SCIE
Journal article | 2024, 249 | EXPERT SYSTEMS WITH APPLICATIONS

Abstract :

Anomaly detection in time series data is crucial for many fields such as healthcare, meteorology, and industrial fault detection. However, traditional unsupervised time series anomaly detection methods suffer from biased anomaly measurement under contaminated training data. Most existing methods employ hard strategies for contamination calibration by assigning pseudo-labels to training data. These hard strategies rely on threshold selection and result in suboptimal performance. To address this problem, we propose a novel unsupervised anomaly detection framework for contaminated time series (NegCo), which builds an effective soft contamination calibration strategy by exploiting the negative correlation between semantic representation and anomaly detection inherent in the autoencoder framework. We innovatively redefine anomaly detection in data contamination scenarios as an optimization problem rooted in this negative correlation. To model it, we introduce a dual construct: morphological similarity captures semantic distinctions relevant to normality, while reconstruction consistency quantifies deviations indicative of anomalies. First, the morphological similarity is measured against representative normal samples generated from the center of the learned Gaussian distribution. Then, an anomaly measurement calibration loss function is designed based on the negative correlation between morphological similarity and reconstruction consistency, to calibrate the biased anomaly measurement caused by contaminated samples. Extensive experiments on various time series datasets show that the proposed NegCo outperforms state-of-the-art baselines, achieving an improvement of 6.2% to 26.8% in Area Under the Receiver Operating Characteristic (AUROC) scores, particularly in scenarios with heavily contaminated training data.
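The soft calibration strategy can be loosely illustrated: rather than thresholding reconstruction error into hard pseudo-labels, each training sample is weighted smoothly so that likely-contaminated samples contribute less. The score form (similarity minus scaled error) and the temperature `tau` are assumptions of this sketch, not NegCo's actual loss.

```python
import numpy as np

def soft_calibration_weights(recon_err, morph_sim, tau=1.0):
    # Soft contamination calibration: samples whose reconstruction error is
    # high relative to their morphological similarity to representative
    # normal samples get small weights -- no threshold is ever chosen.
    score = morph_sim - recon_err / tau
    w = np.exp(score - score.max())     # numerically stable softmax
    return w / w.sum()
```

Compared with a hard pseudo-label, this keeps borderline samples partially in the training signal instead of discarding them outright.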

Keyword :

Anomaly detection; Data contamination; Negative correlation; Time series

Cite:

GB/T 7714: Lin, Xiaohui, Li, Zuoyong, Fan, Haoyi, et al. Exploiting negative correlation for unsupervised anomaly detection in contaminated time series [J]. EXPERT SYSTEMS WITH APPLICATIONS, 2024, 249.
MLA: Lin, Xiaohui, et al. "Exploiting negative correlation for unsupervised anomaly detection in contaminated time series". EXPERT SYSTEMS WITH APPLICATIONS 249 (2024).
APA: Lin, Xiaohui, Li, Zuoyong, Fan, Haoyi, Fu, Yanggeng, Chen, Xinwei. Exploiting negative correlation for unsupervised anomaly detection in contaminated time series. EXPERT SYSTEMS WITH APPLICATIONS, 2024, 249.

TRAINING GRAPH TRANSFORMERS VIA CURRICULUM-ENHANCED ATTENTION DISTILLATION Scopus
Other | 2024

Abstract :

Recent studies have shown that Graph Transformers (GTs) can be effective for specific graph-level tasks. However, when it comes to node classification, training GTs remains challenging, especially in semi-supervised settings with a severe scarcity of labeled data. Our paper aims to address this research gap by focusing on semi-supervised node classification. To accomplish this, we develop a curriculum-enhanced attention distillation method that utilizes a Local GT teacher and a Global GT student. Additionally, we introduce the concepts of in-class and out-of-class, and then propose two improvements, out-of-class entropy and top-k pruning, to facilitate the student's out-of-class exploration under the teacher's in-class guidance. Taking inspiration from human learning, our method involves a curriculum mechanism for distillation that initially provides strict guidance to the student and gradually allows for more out-of-class exploration through a dynamic balance. Extensive experiments show that our method outperforms many state-of-the-art methods on seven public graph benchmarks, proving its effectiveness.

Cite:

GB/T 7714: Huang, Y., Li, J., Chen, X., et al. TRAINING GRAPH TRANSFORMERS VIA CURRICULUM-ENHANCED ATTENTION DISTILLATION [venue unknown].
MLA: Huang, Y., et al. "TRAINING GRAPH TRANSFORMERS VIA CURRICULUM-ENHANCED ATTENTION DISTILLATION" [venue unknown].
APA: Huang, Y., Li, J., Chen, X., Fu, Y.-G. TRAINING GRAPH TRANSFORMERS VIA CURRICULUM-ENHANCED ATTENTION DISTILLATION [venue unknown].

TRAINING GRAPH TRANSFORMERS VIA CURRICULUM-ENHANCED ATTENTION DISTILLATION EI
Conference paper | 2024 | 12th International Conference on Learning Representations, ICLR 2024

Abstract :

Recent studies have shown that Graph Transformers (GTs) can be effective for specific graph-level tasks. However, when it comes to node classification, training GTs remains challenging, especially in semi-supervised settings with a severe scarcity of labeled data. Our paper aims to address this research gap by focusing on semi-supervised node classification. To accomplish this, we develop a curriculum-enhanced attention distillation method that utilizes a Local GT teacher and a Global GT student. Additionally, we introduce the concepts of in-class and out-of-class, and then propose two improvements, out-of-class entropy and top-k pruning, to facilitate the student's out-of-class exploration under the teacher's in-class guidance. Taking inspiration from human learning, our method involves a curriculum mechanism for distillation that initially provides strict guidance to the student and gradually allows for more out-of-class exploration through a dynamic balance. Extensive experiments show that our method outperforms many state-of-the-art methods on seven public graph benchmarks, proving its effectiveness.
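One way to picture top-k pruning with a curriculum balance: keep only the teacher's k strongest attention entries per node, renormalize them, and blend with a uniform "exploration" term whose share grows as `lam` is annealed toward 0 over training. The uniform exploration term and the exact blending are assumptions of this sketch, not the paper's method.

```python
import numpy as np

def distill_targets(teacher_attn, k=2, lam=0.5):
    # Top-k pruning keeps the teacher's k strongest attention entries per
    # node (in-class guidance) and renormalizes them; the uniform term
    # leaves room for out-of-class exploration. Annealing lam from 1
    # (strict guidance) toward 0 (free exploration) is the curriculum.
    n = teacher_attn.shape[1]
    thresh = np.sort(teacher_attn, axis=1)[:, [-k]]
    pruned = np.where(teacher_attn >= thresh, teacher_attn, 0.0)
    pruned = pruned / pruned.sum(axis=1, keepdims=True)
    return lam * pruned + (1 - lam) / n
```

Entries the teacher prunes away still receive the small uniform mass `(1 - lam) / n`, which is what lets the student explore outside the teacher's in-class guidance.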

Keyword :

Curricula; Distillation; Students

Cite:

GB/T 7714: Huang, Yisong, Li, Jin, Chen, Xinlong, et al. TRAINING GRAPH TRANSFORMERS VIA CURRICULUM-ENHANCED ATTENTION DISTILLATION [C]. 2024.
MLA: Huang, Yisong, et al. "TRAINING GRAPH TRANSFORMERS VIA CURRICULUM-ENHANCED ATTENTION DISTILLATION". (2024).
APA: Huang, Yisong, Li, Jin, Chen, Xinlong, Fu, Yang-Geng. TRAINING GRAPH TRANSFORMERS VIA CURRICULUM-ENHANCED ATTENTION DISTILLATION. (2024).

Non-Uniform Point Cloud Masked Autoencoder Based on Curvature Graph Convolution (基于曲率图卷积的非均匀点云掩码自编码器) PKU
Journal article | 2024, 52 (01), 1-6 | Journal of Fuzhou University (Natural Science Edition) (福州大学学报(自然科学版))

Abstract :

We propose a non-uniform grouping and masking strategy based on curvature graph convolution to optimize masked autoencoders. First, curvature graph convolution is introduced to avoid the inductive bias caused by fixed neighborhoods. Second, a graph pooling layer is added after the curvature graph convolution, which pools and groups points according to local point cloud features. Finally, a masking probability is learned for each group from the pooling layer's output features to avoid redundancy. Experimental results show that the method effectively improves the generalization of point cloud masked autoencoders on downstream tasks, reaching 93.7% classification accuracy on ModelNet40 and a completion error of 5.08 on Completion3Dv2, both better than current mainstream methods.
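The per-group masking probabilities can be illustrated with a toy scheme: scale each group's score so the expected fraction of masked groups stays near a target ratio, while higher-score (e.g., more redundant) groups are masked more often. The score-to-probability mapping here is an invented stand-in for the learned probabilities in the paper.

```python
import numpy as np

def group_mask_probs(group_scores, ratio=0.6):
    # Non-uniform masking: each pooled group gets its own mask probability,
    # proportional to its score but scaled so that the average probability
    # (hence the expected masked fraction) stays near `ratio`.
    p = ratio * len(group_scores) * group_scores / group_scores.sum()
    return np.clip(p, 0.0, 1.0)
```

Uniform scores recover plain uniform masking; skewed scores shift masking toward the groups the model deems redundant.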

Keyword :

Graph convolutional neural networks; Point clouds; Self-supervised learning; Autoencoders; Pre-training

Cite:

GB/T 7714: 黄敏明, 傅仰耿. 基于曲率图卷积的非均匀点云掩码自编码器 [J]. 福州大学学报(自然科学版), 2024, 52 (01): 1-6.
MLA: 黄敏明, et al. "基于曲率图卷积的非均匀点云掩码自编码器". 福州大学学报(自然科学版) 52.01 (2024): 1-6.
APA: 黄敏明, 傅仰耿. 基于曲率图卷积的非均匀点云掩码自编码器. 福州大学学报(自然科学版), 2024, 52 (01), 1-6.

HopMAE: Self-supervised Graph Masked Auto-Encoders from a Hop Perspective Scopus
Other | 2024, 14876 LNAI, 343-355

Abstract :

With increasing popularity and broader real-world applicability, graph self-supervised learning (GSSL) can significantly reduce labeling costs by extracting implicit supervision from the input. As a promising example, graph masked auto-encoders (GMAE) can encode rich node knowledge by recovering masked input components, e.g., features or edges. Despite their competitiveness, existing GMAEs focus only on reconstructing neighboring information, which entirely ignores distant multi-hop semantics and thus fails to capture global knowledge. Furthermore, many GMAEs cannot scale to large graphs, since unavoidable full-batch training creates memory bottlenecks. To address these challenges and facilitate "high-level" discriminative semantics, we propose a simple yet effective framework (i.e., HopMAE) that encourages hop-perspective semantic interactions by adopting multi-hop, input-rich reconstruction while supporting mini-batch training. Despite the rationale of the above designs, we still observe some limitations (e.g., sub-optimal generalizability and training instability), potentially due to the implicit gap between the task-triviality and input-richness of reconstruction. Therefore, to alleviate task-triviality and fully unleash the potential of our framework, we further propose a combined fine-grained loss function, which generalizes existing ones and significantly increases the difficulty of the reconstruction tasks, naturally alleviating over-fitting. Extensive experiments on eight benchmarks demonstrate that our method comprehensively outperforms many state-of-the-art counterparts.
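The multi-hop reconstruction targets can be pictured as successive feature propagations over a normalized adjacency matrix; this sketch assumes plain `A @ X` propagation and omits the masking, the encoder, and the combined fine-grained loss that the abstract describes.

```python
import numpy as np

def hop_targets(A_norm, X, K=3):
    # Multi-hop reconstruction targets: propagating features k hops gives
    # the decoder distant semantics to recover, not just the 1-hop
    # neighborhood that existing GMAEs reconstruct.
    targets, H = [], X
    for _ in range(K):
        H = A_norm @ H
        targets.append(H)
    return targets
```

Because each target depends only on a node's k-hop receptive field, such targets can in principle be computed per mini-batch, which is the scalability angle the abstract emphasizes.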

Keyword :

Graph Masked Auto-Encoders; Graph Neural Networks; Graph Representation Learning; Self-Supervised Learning

Cite:

GB/T 7714: Shi, C., Li, J., Zhuang, J., et al. HopMAE: Self-supervised Graph Masked Auto-Encoders from a Hop Perspective [venue unknown].
MLA: Shi, C., et al. "HopMAE: Self-supervised Graph Masked Auto-Encoders from a Hop Perspective" [venue unknown].
APA: Shi, C., Li, J., Zhuang, J., Yao, X., Huang, Y., Fu, Y.-G. HopMAE: Self-supervised Graph Masked Auto-Encoders from a Hop Perspective [venue unknown].

Curriculum-guided dynamic division strategy for graph contrastive learning EI
Journal article | 2024, 300 | Knowledge-Based Systems

Abstract :

Contrastive learning is a commonly used framework in graph self-supervised learning, where models are trained by pulling positive samples closer together and pushing negative samples apart. Most existing graph contrastive learning models divide all nodes into positive and negative samples, which leads to the selection of some meaningless samples and reduces model performance. Additionally, there is a significant disparity in the ratio of positive to negative samples, and the excessive number of negative samples introduces noise. Therefore, we propose a novel dynamic sampling strategy that selects more meaningful samples from the perspectives of structure and features, and we incorporate an iteration-based sample selection process into model training to enhance performance. Furthermore, we introduce a curriculum learning training method based on the easy-to-difficult principle. Sample training for each iteration is treated as a task, enabling the rapid capture of relevant and meaningful sample information. Extensive experiments validate the superior performance of our model across nine real-world datasets.
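The easy-to-hard schedule can be sketched as revealing a growing prefix of candidate samples ranked by an "ease" score (e.g., combined structural and feature agreement). Both the score itself and the linear stage sizes are assumptions of this illustration, not the paper's exact strategy.

```python
import numpy as np

def curriculum_stages(ease_scores, n_stages=3):
    # Easy-to-hard schedule: rank samples by ease score (highest = easiest)
    # and reveal a growing prefix of the ranking at each training stage,
    # so each iteration's sample set becomes its own training task.
    order = np.argsort(-ease_scores)
    n = len(ease_scores)
    return [order[: int(np.ceil(n * (s + 1) / n_stages))]
            for s in range(n_stages)]
```

Early stages thus contain only high-confidence samples, and harder, noisier candidates enter training only once the model has a stable starting point.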

Keyword :

Curricula; Graph neural networks; Iterative methods; Learning systems; Nearest neighbor search

Cite:

GB/T 7714: Lin, Yu-Xi, Zhang, Qi-Rong, Li, Jin, et al. Curriculum-guided dynamic division strategy for graph contrastive learning [J]. Knowledge-Based Systems, 2024, 300.
MLA: Lin, Yu-Xi, et al. "Curriculum-guided dynamic division strategy for graph contrastive learning". Knowledge-Based Systems 300 (2024).
APA: Lin, Yu-Xi, Zhang, Qi-Rong, Li, Jin, Gong, Xiao-Ting, Fu, Yang-Geng. Curriculum-guided dynamic division strategy for graph contrastive learning. Knowledge-Based Systems, 2024, 300.

Another Perspective of Over-Smoothing: Alleviating Semantic Over-Smoothing in Deep GNNs SCIE
Journal article | 2024 | IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS

Abstract :

Graph neural networks (GNNs) are widely used for analyzing graph-structured data and solving graph-related tasks due to their powerful expressiveness. However, existing off-the-shelf GNN-based models usually consist of no more than three layers. Deeper GNNs usually suffer from severe performance degradation due to several issues, including the infamous "over-smoothing" issue, which restricts the further development of GNNs. In this article, we investigate the over-smoothing issue in deep GNNs. We discover that over-smoothing not only results in indistinguishable embeddings of graph nodes, but also alters and even corrupts their semantic structures, dubbed semantic over-smoothing. Existing techniques, e.g., graph normalization, aim at handling the former concern but neglect the importance of preserving semantic structures in the spatial domain, which hinders further improvement of model performance. To alleviate the concern, we propose a cluster-keeping sparse aggregation strategy to preserve the semantic structure of embeddings in deep GNNs (especially for spatial GNNs). In particular, our strategy heuristically redistributes the extent of aggregation for all the nodes across layers, instead of aggregating them equally, so that it enables aggregating concise yet meaningful information in deep layers. Without any bells and whistles, it can be easily implemented as a plug-and-play structure of GNNs via weighted residual connections. Lastly, we analyze the over-smoothing issue on GNNs with weighted residual structures and conduct experiments to demonstrate performance comparable to the state of the art.
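The weighted residual connection described above admits a one-line sketch, where `alpha` plays the role of the per-layer aggregation extent; shrinking it in deeper layers keeps embeddings from collapsing. This illustrates the plug-and-play structure only, not the paper's exact redistribution heuristic, and the averaging adjacency in the usage below is a toy.

```python
import numpy as np

def weighted_residual_layer(H, A_norm, alpha):
    # Plug-and-play weighted residual: alpha controls how much of the
    # aggregated signal A_norm @ H enters the output; the residual term
    # (1 - alpha) * H preserves the nodes' existing (semantic) structure.
    return alpha * (A_norm @ H) + (1.0 - alpha) * H
```

With `alpha = 1` and a fully averaging adjacency, one layer already collapses all rows to the mean; any `alpha < 1` keeps distinct nodes distinguishable, which is the cluster-keeping effect in miniature.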

Keyword :

Aggregates; Brain modeling; Clustering; Convolution; Deep graph neural networks (GNNs); Degradation; Node classification; Numerical models; Over-smoothing; Semantics; Sparse aggregation strategy; Task analysis

Cite:

GB/T 7714: Li, Jin, Zhang, Qirong, Liu, Wenxi, et al. Another Perspective of Over-Smoothing: Alleviating Semantic Over-Smoothing in Deep GNNs [J]. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024.
MLA: Li, Jin, et al. "Another Perspective of Over-Smoothing: Alleviating Semantic Over-Smoothing in Deep GNNs". IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS (2024).
APA: Li, Jin, Zhang, Qirong, Liu, Wenxi, Chan, Antoni B., Fu, Yang-Geng. Another Perspective of Over-Smoothing: Alleviating Semantic Over-Smoothing in Deep GNNs. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024.

Address: FZU Library (No. 2 Xuyuan Road, Fuzhou, Fujian, PRC; Post Code: 350116). Contact: 0591-22865326.
Copyright: FZU Library. Technical support: Beijing Aegean Software Co., Ltd. 闽ICP备05005463号-1