Query:
Scholar name: Fu Yang-Geng (傅仰耿)
Abstract :
Graph neural networks have been successfully applied to a wide range of graph-related tasks. Training a graph neural network in a supervised manner requires a large number of labels, yet in the real world labels are expensive to obtain, and they become even scarcer in few-shot or semi-supervised learning scenarios. To overcome this problem, many methods estimate labels through label propagation, but they are usually constrained by connectivity and homophily assumptions on the graph and easily generate noisy pseudo-labels. To address these limitations, this paper proposes a new method called the Graph Hyperspherical Prototype Network (GHPN) for semi-supervised few-shot node classification. To reduce the influence of the graph structure on prediction results, GHPN models class representations in a hyperspherical representation space and propagates label information in the semantic space through class-level representations. Furthermore, to exploit the supervision contained in unlabeled nodes, this paper designs a negative-learning framework based on the prototype network's predictions, which supplements the supervision signal and adjusts the distances between class prototypes. Experiments on five real-world datasets show that, compared with ten state-of-the-art methods, the proposed method effectively improves performance and achieves the best average rank on four of the datasets.
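A minimal sketch of the two ideas named in the abstract, hyperspherical (cosine-based) prototype classification and a negative-learning penalty; all tensor shapes, names, and the temperature below are illustrative assumptions, not the authors' implementation:

```python
# Illustrative only: classify nodes by cosine similarity to class prototypes on the
# unit hypersphere, and add a negative-learning term that pushes predictions away
# from "complementary" (assumed-wrong) classes.
import torch
import torch.nn.functional as F

def hyperspherical_logits(node_emb, prototypes, temperature=0.1):
    """Scaled cosine similarity between L2-normalized node embeddings and class prototypes."""
    z = F.normalize(node_emb, dim=-1)      # [N, d], projected onto the unit sphere
    p = F.normalize(prototypes, dim=-1)    # [C, d]
    return z @ p.t() / temperature         # [N, C] cosine logits

def negative_learning_loss(logits, complementary_labels):
    """Penalize probability mass on classes a node is assumed NOT to belong to."""
    probs = torch.softmax(logits, dim=-1)
    picked = probs.gather(1, complementary_labels.unsqueeze(1)).squeeze(1)
    return -torch.log(1.0 - picked + 1e-8).mean()

# toy usage with random tensors
emb = torch.randn(8, 16)                   # 8 nodes, 16-dim embeddings
protos = torch.randn(3, 16)                # 3 class prototypes
logits = hyperspherical_logits(emb, protos)
neg_labels = torch.randint(0, 3, (8,))     # hypothetical complementary labels
loss = negative_learning_loss(logits, neg_labels)
```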
Keyword :
Semi-supervised learning; Prototype network; Graph representation learning; Few-shot learning; Negative learning
Cite:
GB/T 7714: 徐祖豪, 陈鑫龙, 李进, et al. GHPN:面向半监督小样本节点分类的图超球面原型网络[J]. 小型微型计算机系统, 2025, 46(3): 542-551.
MLA: 徐祖豪, et al. "GHPN:面向半监督小样本节点分类的图超球面原型网络." 小型微型计算机系统 46.3 (2025): 542-551.
APA: 徐祖豪, 陈鑫龙, 李进, 黄益颂, 傅仰耿. GHPN:面向半监督小样本节点分类的图超球面原型网络. 小型微型计算机系统, 2025, 46(3), 542-551.
Abstract :
To address the problems and challenges of Chinese few-shot named entity recognition (NER), a BERT optimization method for Chinese few-shot NER is proposed. The method contains two optimizations. First, to address the problem that an insufficient number of training samples limits the semantic perception ability of the pre-trained language model BERT, ProConBERT, a BERT pre-training strategy based on prompt learning and contrastive learning, is proposed. In the prompt-learning stage, a masked filling template is designed to train BERT to predict the Chinese label word corresponding to each token. In the contrastive-learning stage, a guiding template is used to train BERT to learn the similarities and differences between each token and the label words. Second, to address the complexity and difficulty caused by the lack of explicit word boundaries in Chinese, the first Transformer layer of the BERT model is modified and a feature-fusion module with a hybrid weight guider is designed to integrate lexicon information into the bottom layers of BERT. Finally, experimental results verify the effectiveness and superiority of the proposed method on Chinese few-shot NER tasks. Combined with BERT and a conditional random field (CRF) structure, the method achieves the best performance on four sampled Chinese NER datasets. In particular, in three few-shot settings on the Weibo dataset, the model reaches F1 scores of 63.78%, 66.27%, and 70.90%, improving the average F1 by 16.28%, 14.30%, and 11.20% respectively compared with other methods. Moreover, applying ProConBERT to several BERT-based Chinese NER models further improves entity recognition performance.
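For illustration only, one way a masked filling template for label-word prediction might be constructed; the template wording and label words below are assumptions, not the paper's exact design:

```python
# Hypothetical prompt construction for token-level Chinese NER: wrap the sentence in a
# template that asks BERT to fill in the label word for one character.
LABEL_WORDS = {"PER": "人", "LOC": "地", "ORG": "组", "O": "无"}  # assumed label words

def build_masked_prompt(sentence: str, char_index: int, mask_token: str = "[MASK]") -> str:
    """Build a masked filling template asking which label word fits one character."""
    char = sentence[char_index]
    return f"{sentence} 其中“{char}”属于{mask_token}类。"

def target_label_word(tag: str) -> str:
    """Map a BIO-style tag (e.g. 'B-PER') to the Chinese label word BERT should predict."""
    return LABEL_WORDS.get(tag.split("-")[-1], LABEL_WORDS["O"])

print(build_masked_prompt("张三在福州大学工作", 0))  # -> 张三在福州大学工作 其中“张”属于[MASK]类。
print(target_label_word("B-PER"))                    # -> 人
```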
Keyword :
BERT model; Chinese few-shot named entity recognition; Contrastive learning; Prompt learning; Feature fusion; Pre-training
Cite:
GB/T 7714: 杨三和, 赖沛超, 傅仰耿, et al. 面向中文小样本命名实体识别的BERT优化方法[J]. 小型微型计算机系统, 2025, 46(3): 602-611.
MLA: 杨三和, et al. "面向中文小样本命名实体识别的BERT优化方法." 小型微型计算机系统 46.3 (2025): 602-611.
APA: 杨三和, 赖沛超, 傅仰耿, 王一蕾, 叶飞扬, 张林. 面向中文小样本命名实体识别的BERT优化方法. 小型微型计算机系统, 2025, 46(3), 602-611.
Abstract :
To address the shortcomings of heterogeneous graph neural network models that rely on meta-paths and complex aggregation operations, namely restricted meta-paths and high cost, a heterogeneous graph neural network based on an attention fusion mechanism and topological relation mining (FTHGNN) is proposed. The model first uses a lightweight attention fusion mechanism that fuses global relation information with local node information, achieving more effective message aggregation at a low cost in time and space. It then replaces the meta-path approach with a topological relation mining method that requires no prior knowledge to mine high-order neighbor relations on the graph, and introduces contrastive learning to capture high-order semantic information on the graph. Finally, extensive experiments on four widely used real-world heterogeneous graph datasets verify that FTHGNN is simple yet efficient and surpasses most existing models in classification accuracy.
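A rough sketch of an attention fusion step that weighs a local node-level message against a global relation-level summary, as one possible reading of the abstract; module names and shapes are assumptions, not the FTHGNN implementation:

```python
# Illustrative fusion: score each information source per node and take a weighted sum.
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, 1, bias=False)  # scores each information source

    def forward(self, local_msg, global_msg):
        # local_msg, global_msg: [N, d] node-level and relation-level summaries
        stacked = torch.stack([local_msg, global_msg], dim=1)   # [N, 2, d]
        weights = torch.softmax(self.score(stacked), dim=1)     # [N, 2, 1] per-source attention
        return (weights * stacked).sum(dim=1)                   # [N, d] fused message

fused = AttentionFusion(32)(torch.randn(10, 32), torch.randn(10, 32))
```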
Keyword :
Graph neural networks; Contrastive learning; Heterogeneous graphs; Attention mechanism
Cite:
GB/T 7714: 陈金杰, 王一蕾, 傅仰耿. 注意力融合机制和拓扑关系挖掘的异构图神经网络[J]. 福州大学学报(自然科学版), 2025, 53(1): 1-9.
MLA: 陈金杰, et al. "注意力融合机制和拓扑关系挖掘的异构图神经网络." 福州大学学报(自然科学版) 53.1 (2025): 1-9.
APA: 陈金杰, 王一蕾, 傅仰耿. 注意力融合机制和拓扑关系挖掘的异构图神经网络. 福州大学学报(自然科学版), 2025, 53(1), 1-9.
Abstract :
Graph representation learning is a crucial area in machine learning, with widespread applications in social networks, recommendation systems, and traffic flow prediction. Recently, Graph Transformers have emerged as powerful tools for this purpose, garnering significant attention. In this work, we observe a fundamental issue with previous Graph Transformers: they overlook the scale-related information gap and often employ an identical attention computation method for different-scale node interactions, leading to suboptimal model performance. To address this, we propose a Multi-Scale Attention Graph Transformer (MSA-GT) that enables each node to conduct adaptive interactions conditioned on different scales from both local and global perspectives. Specifically, MSA-GT guides several attention mechanisms to focus on individual scales and then performs customized combinations via an attention-based fusion module, thereby obtaining much more semantically fine-grained node representations. Despite the potential of the above design, we still observe over-fitting to some extent, which is a typical challenge for training Graph Transformers. We propose two additional technical components to prevent over-fitting and improve the performance further. We first introduce a path-based pruning strategy to reduce ineffective attention interactions, facilitating more accurate relevant node selection. Additionally, we propose a Heterophilous Curriculum Augmentation (HCA) module, which gradually increases the training difficulty, forming a weak-to-strong regularization schema and therefore enhancing the model's generalization ability step-by-step. Extensive experiments show that our method outperforms many state-of-the-art methods on eight public graph benchmarks, proving its effectiveness.
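A hedged sketch of the multi-scale idea: run separate attention modules under different scale masks and fuse them with a learned, attention-style gate. Everything below (mask construction, head counts, gating) is an illustrative assumption rather than the MSA-GT implementation:

```python
# Each "scale" gets its own attention module restricted by a boolean mask
# (e.g. local k-hop neighbours vs. all nodes); a gate then mixes the per-scale outputs.
import torch
import torch.nn as nn

class MultiScaleAttention(nn.Module):
    def __init__(self, dim, num_scales=2):
        super().__init__()
        self.attns = nn.ModuleList(
            nn.MultiheadAttention(dim, num_heads=4, batch_first=True) for _ in range(num_scales)
        )
        self.gate = nn.Linear(dim, num_scales)   # per-node scale weights

    def forward(self, x, scale_masks):
        # x: [B, N, d]; scale_masks: list of [N, N] bool masks (True = attention blocked)
        outs = [attn(x, x, x, attn_mask=m)[0] for attn, m in zip(self.attns, scale_masks)]
        outs = torch.stack(outs, dim=-2)                        # [B, N, S, d]
        w = torch.softmax(self.gate(x), dim=-1).unsqueeze(-1)   # [B, N, S, 1]
        return (w * outs).sum(dim=-2)                           # [B, N, d] fused representation

# toy usage: two dummy masks (nothing blocked) just to show the shapes
msa = MultiScaleAttention(16)
out = msa(torch.randn(1, 5, 16), [torch.zeros(5, 5, dtype=torch.bool)] * 2)
```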
Keyword :
Curriculum learning; Graph Transformer; Multi-scale attention; Node classification; Representation learning
Cite:
GB/T 7714: Zhuang, Jianzhi, Li, Jin, Shi, Chenjunhao, et al. Enhanced Graph Transformer: Multi-scale attention with Heterophilous Curriculum Augmentation[J]. KNOWLEDGE-BASED SYSTEMS, 2025, 309.
MLA: Zhuang, Jianzhi, et al. "Enhanced Graph Transformer: Multi-scale attention with Heterophilous Curriculum Augmentation." KNOWLEDGE-BASED SYSTEMS 309 (2025).
APA: Zhuang, Jianzhi, Li, Jin, Shi, Chenjunhao, Lin, Xinyi, Fu, Yang-Geng. Enhanced Graph Transformer: Multi-scale attention with Heterophilous Curriculum Augmentation. KNOWLEDGE-BASED SYSTEMS, 2025, 309.
Abstract :
Recent studies have shown that Graph Transformers (GTs) can be effective for specific graph-level tasks. However, when it comes to node classification, training GTs remains challenging, especially in semi-supervised settings with a severe scarcity of labeled data. Our paper aims to address this research gap by focusing on semi-supervised node classification. To accomplish this, we develop a curriculum-enhanced attention distillation method that involves utilizing a Local GT teacher and a Global GT student. Additionally, we introduce the concepts of in-class and out-of-class and then propose two improvements, out-of-class entropy and top-k pruning, to facilitate the student's out-of-class exploration under the teacher's in-class guidance. Taking inspiration from human learning, our method involves a curriculum mechanism for distillation that initially provides strict guidance to the student and gradually allows for more out-of-class exploration by a dynamic balance. Extensive experiments show that our method outperforms many state-of-the-art methods on seven public graph benchmarks, proving its effectiveness. © 2024 12th International Conference on Learning Representations, ICLR 2024. All rights reserved.
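As a sketch of the general teacher-student recipe described here (not the paper's exact losses), one could blend a KL-based distillation term with the student's own supervised loss and let the teacher's weight decay over training, mimicking strict early guidance and freer later exploration; the schedule and temperature below are assumptions:

```python
# Curriculum-weighted distillation: heavy teacher guidance early, more self-reliance late.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, epoch, total_epochs, tau=2.0):
    # KL divergence to the teacher's softened distribution
    kd = F.kl_div(
        F.log_softmax(student_logits / tau, dim=-1),
        F.softmax(teacher_logits / tau, dim=-1),
        reduction="batchmean",
    ) * tau * tau
    ce = F.cross_entropy(student_logits, labels)   # student's own supervised loss
    alpha = 1.0 - epoch / total_epochs             # hypothetical curriculum schedule
    return alpha * kd + (1.0 - alpha) * ce

loss = distillation_loss(torch.randn(6, 4), torch.randn(6, 4),
                         torch.randint(0, 4, (6,)), epoch=3, total_epochs=10)
```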
Keyword :
Curricula; Distillation; Students
Cite:
GB/T 7714: Huang, Yisong, Li, Jin, Chen, Xinlong, et al. TRAINING GRAPH TRANSFORMERS VIA CURRICULUM-ENHANCED ATTENTION DISTILLATION[C]. 2024.
MLA: Huang, Yisong, et al. "TRAINING GRAPH TRANSFORMERS VIA CURRICULUM-ENHANCED ATTENTION DISTILLATION." (2024).
APA: Huang, Yisong, Li, Jin, Chen, Xinlong, Fu, Yang-Geng. TRAINING GRAPH TRANSFORMERS VIA CURRICULUM-ENHANCED ATTENTION DISTILLATION. (2024).
Abstract :
Contrastive learning is a commonly used framework in the field of graph self-supervised learning, where models are trained by bringing positive samples closer together and pushing negative samples apart. Most existing graph contrastive learning models divide all nodes into positive and negative samples, which leads to the selection of some meaningless samples and reduces the model's performance. Additionally, there is a significant disparity in the ratio between positive and negative samples, with an excessive number of negative samples introducing noise. Therefore, we propose a novel dynamic sampling strategy that selects more meaningful samples from the perspectives of structure and features and we incorporate an iteration-based sample selection process into the model training to enhance its performance. Furthermore, we introduce a curriculum learning training method based on the principle of starting from easy to difficult. Sample training for each iteration is treated as a task, enabling the rapid capture of relevant and meaningful sample information. Extensive experiments have been conducted to validate the superior performance of our model across nine real-world datasets.
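A small illustrative sketch of a dynamic sampling step that scores candidates from both the feature and structure perspectives and shrinks the selection as training proceeds; the scoring weights and schedule are assumptions, not the paper's strategy:

```python
# Hypothetical dynamic division: combine feature similarity with adjacency, then pick
# top/bottom candidates, with the number of picks shrinking over the curriculum.
import torch
import torch.nn.functional as F

def select_samples(feats, adj, anchor, step, total_steps, k=5):
    feat_sim = F.cosine_similarity(feats[anchor].unsqueeze(0), feats, dim=-1)  # [N]
    struct_sim = adj[anchor].float()                        # 1.0 for graph neighbours
    score = 0.5 * feat_sim + 0.5 * struct_sim
    keep = torch.arange(feats.size(0)) != anchor            # never sample the anchor itself
    idx, sc = torch.arange(feats.size(0))[keep], score[keep]
    k_now = max(1, int(k * (1 - step / total_steps)))       # hypothetical shrinking schedule
    pos = idx[torch.topk(sc, k_now).indices]                # highest-scoring candidates as positives
    neg = idx[torch.topk(-sc, k_now).indices]               # lowest-scoring candidates as negatives
    return pos, neg

feats = torch.randn(20, 8)
adj = torch.rand(20, 20) > 0.8                              # toy adjacency matrix
pos, neg = select_samples(feats, adj, anchor=0, step=2, total_steps=10)
```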
Keyword :
Curriculum learning; Graph contrastive learning; Graph neural networks; K-nearest neighbors; Self-supervised
Cite:
GB/T 7714: Lin, Yu-Xi, Zhang, Qi-Rong, Li, Jin, et al. Curriculum-guided dynamic division strategy for graph contrastive learning[J]. KNOWLEDGE-BASED SYSTEMS, 2024, 300.
MLA: Lin, Yu-Xi, et al. "Curriculum-guided dynamic division strategy for graph contrastive learning." KNOWLEDGE-BASED SYSTEMS 300 (2024).
APA: Lin, Yu-Xi, Zhang, Qi-Rong, Li, Jin, Gong, Xiao-Ting, Fu, Yang-Geng. Curriculum-guided dynamic division strategy for graph contrastive learning. KNOWLEDGE-BASED SYSTEMS, 2024, 300.
Abstract :
Graph neural networks (GNNs) have achieved excellent performances in many graph-related tasks. However, they need appropriate pooling operations to deal with graph classification tasks, and thus they may suffer from some limitations such as information loss and ignorance of the part-whole relationships. CapsGNN was proposed to solve the above-mentioned issues, but suffers from high time and space complexities leading to its poor scalability. In this paper, we propose a novel, effective and efficient graph capsule network called LightCapsGNN. First, we devise a fast voting mechanism (called LightVoting) implemented via linear combinations of K shared transformation matrices to reduce the number of trainable parameters in the voting procedure. Second, an improved reconstruction layer is proposed to encourage our model to capture more informative and essential knowledge of the input graph. Third, other improvements are combined to further accelerate our model, e.g., matrix capsules and a trainable routing mechanism. Finally, extensive experiments are conducted on popular real-world graph benchmarks for graph classification, and the proposed model achieves competitive or even better performance compared to ten baselines or state-of-the-art models. Furthermore, compared to other CapsGNNs, the proposed model reduces almost 99% of the learnable parameters and 31.1% of the running time.
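A minimal sketch of the "K shared transformation matrices" idea behind LightVoting as described in the abstract; shapes, the mixing network, and the initialization are illustrative assumptions:

```python
# Each capsule's vote uses a mixture of K shared matrices rather than one matrix per
# capsule pair, which keeps the number of trainable parameters small.
import torch
import torch.nn as nn

class LightVoting(nn.Module):
    def __init__(self, in_dim, out_dim, num_shared=4):
        super().__init__()
        self.shared = nn.Parameter(torch.randn(num_shared, in_dim, out_dim) * 0.1)  # K shared matrices
        self.coeff = nn.Linear(in_dim, num_shared)   # per-capsule mixing coefficients

    def forward(self, caps):                          # caps: [N, in_dim] input capsules
        mix = torch.softmax(self.coeff(caps), -1)     # [N, K]
        w = torch.einsum("nk,kio->nio", mix, self.shared)  # per-capsule transform [N, in_dim, out_dim]
        return torch.einsum("ni,nio->no", caps, w)         # [N, out_dim] votes

votes = LightVoting(8, 16)(torch.randn(5, 8))
```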
Keyword :
Capsule networks; Graph neural networks; Routing
Cite:
GB/T 7714: Yan, Yucheng, Li, Jin, Xu, Shuling, et al. LightCapsGNN: light capsule graph neural network for graph classification[J]. KNOWLEDGE AND INFORMATION SYSTEMS, 2024, 66(10): 6363-6386.
MLA: Yan, Yucheng, et al. "LightCapsGNN: light capsule graph neural network for graph classification." KNOWLEDGE AND INFORMATION SYSTEMS 66.10 (2024): 6363-6386.
APA: Yan, Yucheng, Li, Jin, Xu, Shuling, Chen, Xinlong, Liu, Genggeng, Fu, Yang-Geng. LightCapsGNN: light capsule graph neural network for graph classification. KNOWLEDGE AND INFORMATION SYSTEMS, 2024, 66(10), 6363-6386.
Abstract :
Despite Graph neural networks' significant performance gain over many classic techniques in various graph-related downstream tasks, their successes are restricted in shallow models due to over-smoothness and the difficulties of optimizations among many other issues. In this paper, to alleviate the over-smoothing issue, we propose a soft graph normalization method to preserve the diversities of node embeddings and prevent indiscrimination due to possible over-closeness. Combined with residual connections, we analyze the reason why the method can effectively capture the knowledge in both input graph structures and node features even with deep networks. Additionally, inspired by Curriculum Learning that learns easy examples before the hard ones, we propose a novel label-smoothing-based learning framework to enhance the optimization of deep GNNs, which iteratively smooths labels in an auxiliary graph and constructs many gradual non-smooth tasks for extracting increasingly complex knowledge and gradually discriminating nodes from coarse to fine. The method arguably reduces the risk of overfitting and generalizes better results. Finally, extensive experiments are carried out to demonstrate the effectiveness and potential of the proposed model and learning framework through comparison with twelve existing baselines including the state-of-the-art methods on twelve real-world node classification benchmarks.
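One possible reading of a "soft" normalization, sketched under assumptions (this is not the paper's formula): only partially center and rescale node embeddings, so deep layers keep some diversity instead of collapsing toward indistinguishable representations:

```python
# Blend each embedding with its fully normalized version, controlled by a softness knob.
import torch

def soft_normalize(h, softness=0.5, eps=1e-6):
    # h: [N, d] node embeddings after a GNN layer
    mean = h.mean(dim=0, keepdim=True)
    std = h.std(dim=0, keepdim=True) + eps
    fully_normalized = (h - mean) / std
    return softness * fully_normalized + (1.0 - softness) * h  # partial normalization

out = soft_normalize(torch.randn(10, 16), softness=0.3)
```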
Cite:
GB/T 7714: Li, Jin, Zhang, Qirong, Xu, Shuling, et al. Curriculum-Enhanced Residual Soft An-Isotropic Normalization for Over-Smoothness in Deep GNNs[J]. THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 12, 2024: 13528-13536.
MLA: Li, Jin, et al. "Curriculum-Enhanced Residual Soft An-Isotropic Normalization for Over-Smoothness in Deep GNNs." THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 12 (2024): 13528-13536.
APA: Li, Jin, Zhang, Qirong, Xu, Shuling, Chen, Xinlong, Guo, Longkun, Fu, Yang-Geng. Curriculum-Enhanced Residual Soft An-Isotropic Normalization for Over-Smoothness in Deep GNNs. THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 12, 2024, 13528-13536.