Results Search

Query:

Scholar Name: 傅仰耿

面向中文小样本命名实体识别的BERT优化方法 (A BERT Optimization Method for Chinese Few-Shot Named Entity Recognition)
Journal Article | 2025, 46 (3), 602-611 | 小型微型计算机系统

Abstract:

To address the problems and challenges of Chinese few-shot named entity recognition (NER), this paper proposes a BERT optimization method for Chinese few-shot NER with two improvements. First, since the limited number of training samples restricts the semantic-awareness capability of the pre-trained language model BERT, we propose ProConBERT, a BERT pre-training strategy based on prompt learning and contrastive learning. In the prompt-learning stage, masked-filling templates are designed to train BERT to predict the Chinese label word corresponding to each token. In the contrastive-learning stage, guiding templates are used to train BERT to learn the similarities and differences between each token and the label words. Second, to cope with the complexity caused by the lack of explicit word boundaries in Chinese, we modify the first Transformer layer of BERT and design a feature fusion module with a hybrid weight guider that integrates lexicon information into BERT's bottom layers. Finally, experimental results verify the effectiveness and superiority of the proposed method on Chinese few-shot NER. Combining BERT with a conditional random field (CRF) structure, the method achieves the best performance on four sampled Chinese NER datasets. In particular, under three few-shot settings of the Weibo dataset, the model reaches F1 scores of 63.78%, 66.27%, and 70.90%, improving the average F1 over other methods by 16.28%, 14.30%, and 11.20%, respectively. Moreover, applying ProConBERT to several BERT-based Chinese NER models further improves entity recognition performance.
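The two ProConBERT stages lend themselves to a compact sketch. The fragment below is a minimal PyTorch illustration under our own naming (build_prompt and label_word_contrastive_loss are hypothetical; the paper's actual templates and loss may differ): a masked-filling template for the prompt stage, and an InfoNCE-style objective that pulls each token's representation toward the embedding of its Chinese label word for the contrastive stage.

```python
import torch
import torch.nn.functional as F

def build_prompt(sentence: str, token: str) -> str:
    # Hypothetical masked-filling template: BERT is trained to fill [MASK]
    # with the Chinese label word for `token` (e.g. 人名 for person names).
    return f"{sentence}。{token}是[MASK]。"

def label_word_contrastive_loss(token_emb, label_word_emb, labels, tau=0.1):
    # token_emb:      (N, d) token representations from BERT
    # label_word_emb: (C, d) embeddings of the C label words
    # labels:         (N,)   gold class index per token
    sim = F.cosine_similarity(token_emb.unsqueeze(1),       # (N, 1, d)
                              label_word_emb.unsqueeze(0),  # (1, C, d)
                              dim=-1) / tau                 # (N, C)
    return F.cross_entropy(sim, labels)  # pull each token toward its label word

# Toy check: 5 tokens, 4 entity classes, 768-dim features.
loss = label_word_contrastive_loss(torch.randn(5, 768), torch.randn(4, 768),
                                   torch.tensor([0, 1, 2, 3, 0]))
```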

Keyword:

BERT model; Chinese few-shot named entity recognition; contrastive learning; prompt learning; feature fusion; pre-training

Cite:

GB/T 7714: 杨三和, 赖沛超, 傅仰耿, et al. 面向中文小样本命名实体识别的BERT优化方法 [J]. 小型微型计算机系统, 2025, 46(3): 602-611.
MLA: 杨三和, et al. "面向中文小样本命名实体识别的BERT优化方法." 小型微型计算机系统 46.3 (2025): 602-611.
APA: 杨三和, 赖沛超, 傅仰耿, 王一蕾, 叶飞扬, 张林. 面向中文小样本命名实体识别的BERT优化方法. 小型微型计算机系统, 2025, 46(3), 602-611.

注意力融合机制和拓扑关系挖掘的异构图神经网络 (A Heterogeneous Graph Neural Network with Attention Fusion and Topological Relation Mining)
Journal Article | 2025, 53 (1), 1-9 | 福州大学学报(自然科学版)

Abstract:

To overcome the reliance of heterogeneous graph neural network models on meta-paths and costly aggregation operations, this paper proposes FTHGNN, a heterogeneous graph neural network based on an attention fusion mechanism and topological relation mining. The model first applies a lightweight attention fusion mechanism that merges global relation information with local node information, achieving more effective message aggregation at low space-time cost. It then replaces meta-paths with a topological relation mining method that requires no prior knowledge to discover high-order neighbor relations on the graph, and introduces contrastive learning to capture high-order semantic information. Extensive experiments on four widely used real-world heterogeneous graph datasets verify that FTHGNN is simple yet efficient, surpassing the vast majority of existing models in classification accuracy.
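As a rough illustration of the lightweight attention fusion described above, the PyTorch module below scores a node's local feature view against a global relation-level summary and mixes them with softmax weights; the module and its interface are our assumption, not FTHGNN's published code.

```python
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    """Hypothetical sketch: fuse a local node view and a global relation view
    with two learned gate scores instead of full multi-head attention."""
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, 1)  # one scalar relevance score per view

    def forward(self, local, global_):                   # both (N, d)
        views = torch.stack([local, global_], dim=1)     # (N, 2, d)
        alpha = torch.softmax(self.score(views), dim=1)  # (N, 2, 1) weights
        return (alpha * views).sum(dim=1)                # (N, d) fused embedding

fused = AttentionFusion(64)(torch.randn(10, 64), torch.randn(10, 64))
```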

Keyword:

graph neural networks; contrastive learning; heterogeneous graphs; attention mechanism

Cite:

GB/T 7714: 陈金杰, 王一蕾, 傅仰耿. 注意力融合机制和拓扑关系挖掘的异构图神经网络 [J]. 福州大学学报(自然科学版), 2025, 53(1): 1-9.
MLA: 陈金杰, et al. "注意力融合机制和拓扑关系挖掘的异构图神经网络." 福州大学学报(自然科学版) 53.1 (2025): 1-9.
APA: 陈金杰, 王一蕾, 傅仰耿. 注意力融合机制和拓扑关系挖掘的异构图神经网络. 福州大学学报(自然科学版), 2025, 53(1), 1-9.

GHPN:面向半监督小样本节点分类的图超球面原型网络 (GHPN: A Graph Hyperspherical Prototype Network for Semi-Supervised Few-Shot Node Classification)
Journal Article | 2025, 46 (3), 542-551 | 小型微型计算机系统

Abstract:

Graph neural networks have been successfully applied to a variety of graph-related tasks. Training a graph neural network in a supervised manner requires abundant labels, yet cost constraints in the real world make large-scale labeling difficult, and labels are even scarcer in few-shot or semi-supervised scenarios. To overcome this, many methods estimate labels via label propagation, but they are typically limited by connectivity and homophily assumptions on the graph and tend to produce noisy pseudo-labels. To address these limitations, this paper proposes GHPN, a graph hyperspherical prototype network for semi-supervised few-shot node classification. To reduce the influence of graph structure on predictions, GHPN models class representations in a hyperspherical representation space and propagates label information in the semantic space through class-level representations. In addition, to exploit the supervision available from unlabeled nodes, we design a negative learning framework based on the prototype network's predictions, which supplements the supervision signal and adjusts the distances between class prototypes. Experiments on five real-world datasets show that, compared with ten state-of-the-art methods, our method effectively improves performance and achieves the best average rank on four datasets.
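Two ingredients of GHPN admit short sketches: unit-hypersphere class prototypes and a negative-learning (complementary-label) loss. The PyTorch fragment below is a minimal rendering under assumed shapes; the paper's exact prototype update and negative-label selection may differ.

```python
import torch
import torch.nn.functional as F

def hyperspherical_prototypes(emb, labels, num_classes):
    # Mean direction of each class's L2-normalized embeddings,
    # re-normalized so every prototype lies on the unit hypersphere.
    emb = F.normalize(emb, dim=-1)
    protos = torch.zeros(num_classes, emb.size(1)).index_add_(0, labels, emb)
    return F.normalize(protos, dim=-1)

def negative_learning_loss(logits, neg_labels):
    # Complementary labels: for an unlabeled node we may only trust a class it
    # is NOT (e.g. the farthest prototype); push that probability toward zero.
    p = F.softmax(logits, dim=-1)
    return -torch.log(1 - p.gather(1, neg_labels.unsqueeze(1)) + 1e-8).mean()

emb, y = torch.randn(6, 16), torch.tensor([0, 0, 1, 1, 2, 2])
protos = hyperspherical_prototypes(emb, y, num_classes=3)              # (3, 16)
loss = negative_learning_loss(torch.randn(4, 3), torch.tensor([2, 0, 1, 2]))
```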

Keyword:

semi-supervised learning; prototype networks; graph representation learning; few-shot learning; negative learning

Cite:

GB/T 7714: 徐祖豪, 陈鑫龙, 李进, et al. GHPN:面向半监督小样本节点分类的图超球面原型网络 [J]. 小型微型计算机系统, 2025, 46(3): 542-551.
MLA: 徐祖豪, et al. "GHPN:面向半监督小样本节点分类的图超球面原型网络." 小型微型计算机系统 46.3 (2025): 542-551.
APA: 徐祖豪, 陈鑫龙, 李进, 黄益颂, 傅仰耿. GHPN:面向半监督小样本节点分类的图超球面原型网络. 小型微型计算机系统, 2025, 46(3), 542-551.

Enhanced Graph Transformer: Multi-scale attention with Heterophilous Curriculum Augmentation SCIE
Journal Article | 2025, 309 | KNOWLEDGE-BASED SYSTEMS

Abstract:

Graph representation learning is a crucial area in machine learning, with widespread applications in social networks, recommendation systems, and traffic flow prediction. Recently, Graph Transformers have emerged as powerful tools for this purpose, garnering significant attention. In this work, we observe a fundamental issue with previous Graph Transformers: they overlook the scale-related information gap and often employ an identical attention computation for node interactions at different scales, leading to suboptimal model performance. To address this, we propose a Multi-Scale Attention Graph Transformer (MSA-GT) that enables each node to conduct adaptive interactions conditioned on different scales, from both local and global perspectives. Specifically, MSA-GT guides several attention mechanisms to focus on individual scales and then performs customized combinations via an attention-based fusion module, thereby obtaining semantically fine-grained node representations. Despite the potential of this design, we still observe some over-fitting, a typical challenge when training Graph Transformers. We therefore propose two additional technical components to prevent over-fitting and further improve performance. We first introduce a path-based pruning strategy to reduce ineffective attention interactions, facilitating more accurate selection of relevant nodes. Additionally, we propose a Heterophilous Curriculum Augmentation (HCA) module, which gradually increases the training difficulty, forming a weak-to-strong regularization schema that enhances the model's generalization ability step by step. Extensive experiments show that our method outperforms many state-of-the-art methods on eight public graph benchmarks, proving its effectiveness.
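The attention-based fusion module can be sketched compactly: given per-scale node representations (e.g. outputs of 1-hop, 2-hop, and global attention), a learned query scores each scale and the softmax-weighted mixture yields the final embedding. The PyTorch module below is our minimal reading, not the released MSA-GT code.

```python
import torch
import torch.nn as nn

class ScaleFusion(nn.Module):
    """Hypothetical fusion over per-scale representations."""
    def __init__(self, dim):
        super().__init__()
        self.query = nn.Parameter(torch.randn(dim))   # learned fusion query

    def forward(self, scale_reps):                    # (N, S, d), S scales
        scores = scale_reps @ self.query              # (N, S) relevance per scale
        alpha = torch.softmax(scores, dim=1)          # per-node scale weights
        return torch.einsum('ns,nsd->nd', alpha, scale_reps)  # (N, d)

out = ScaleFusion(32)(torch.randn(100, 3, 32))        # fuse three scales
```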

Keyword:

Curriculum learning; Graph Transformer; Multi-scale attention; Node classification; Representation learning

Cite:

GB/T 7714: Zhuang, Jianzhi, Li, Jin, Shi, Chenjunhao, et al. Enhanced Graph Transformer: Multi-scale attention with Heterophilous Curriculum Augmentation [J]. KNOWLEDGE-BASED SYSTEMS, 2025, 309.
MLA: Zhuang, Jianzhi, et al. "Enhanced Graph Transformer: Multi-scale attention with Heterophilous Curriculum Augmentation." KNOWLEDGE-BASED SYSTEMS 309 (2025).
APA: Zhuang, Jianzhi, Li, Jin, Shi, Chenjunhao, Lin, Xinyi, Fu, Yang-Geng. Enhanced Graph Transformer: Multi-scale attention with Heterophilous Curriculum Augmentation. KNOWLEDGE-BASED SYSTEMS, 2025, 309.

Exploiting negative correlation for unsupervised anomaly detection in contaminated time series SCIE
Journal Article | 2024, 249 | EXPERT SYSTEMS WITH APPLICATIONS

Abstract:

Anomaly detection in time series data is crucial for many fields such as healthcare, meteorology, and industrial fault detection. However, traditional unsupervised time series anomaly detection methods suffer from biased anomaly measurement under contaminated training data. Most existing methods employ hard strategies for contamination calibration, assigning pseudo-labels to training data. These hard strategies rely on threshold selection and result in suboptimal performance. To address this problem, we propose a novel unsupervised anomaly detection framework for contaminated time series (NegCo), which builds an effective soft contamination calibration strategy by exploiting the negative correlation between semantic representation and anomaly detection inherent in the autoencoder framework. We innovatively redefine anomaly detection under data contamination as an optimization problem rooted in this negative correlation. To model it, we introduce a dual construct: morphological similarity captures semantic distinctions relevant to normality, while reconstruction consistency quantifies deviations indicative of anomalies. First, morphological similarity is measured against representative normal samples generated from the center of the learned Gaussian distribution. Then, an anomaly measurement calibration loss is designed around the negative correlation between morphological similarity and reconstruction consistency, calibrating the biased anomaly measurement caused by contaminated samples. Extensive experiments on various time series datasets show that NegCo outperforms state-of-the-art baselines, achieving improvements of 6.2% to 26.8% in Area Under the Receiver Operating Characteristic (AUROC) scores, particularly in scenarios with heavily contaminated training data.
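A loose sketch of the soft calibration idea, under our own simplifications (cosine similarity standing in for morphological similarity, mean squared error for reconstruction consistency): samples that neither resemble the generated normal references nor reconstruct well receive small training weights instead of hard pseudo-labels.

```python
import torch
import torch.nn.functional as F

def calibration_weights(x, x_hat, normal_refs):
    # x, x_hat:    (N, T) raw series and autoencoder reconstructions
    # normal_refs: (M, T) representative normal samples drawn from the center
    #              of the learned Gaussian distribution
    recon_err = ((x - x_hat) ** 2).mean(dim=1)                      # (N,)
    morph_sim = F.cosine_similarity(x.unsqueeze(1),                 # best match
                                    normal_refs.unsqueeze(0),
                                    dim=-1).max(dim=1).values       # (N,)
    # A sample that looks unlike normal data AND reconstructs poorly is the
    # most suspect; softly down-weight it rather than hard pseudo-labeling.
    suspicion = (1 - morph_sim) * recon_err
    return torch.exp(-suspicion / suspicion.mean().clamp_min(1e-8))  # (N,) in (0, 1]

w = calibration_weights(torch.randn(32, 100), torch.randn(32, 100),
                        torch.randn(8, 100))
```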

Keyword:

Anomaly detection; Data contamination; Negative correlation; Time series

Cite:

GB/T 7714: Lin, Xiaohui, Li, Zuoyong, Fan, Haoyi, et al. Exploiting negative correlation for unsupervised anomaly detection in contaminated time series [J]. EXPERT SYSTEMS WITH APPLICATIONS, 2024, 249.
MLA: Lin, Xiaohui, et al. "Exploiting negative correlation for unsupervised anomaly detection in contaminated time series." EXPERT SYSTEMS WITH APPLICATIONS 249 (2024).
APA: Lin, Xiaohui, Li, Zuoyong, Fan, Haoyi, Fu, Yanggeng, Chen, Xinwei. Exploiting negative correlation for unsupervised anomaly detection in contaminated time series. EXPERT SYSTEMS WITH APPLICATIONS, 2024, 249.

Curriculum-guided dynamic division strategy for graph contrastive learning SCIE
Journal Article | 2024, 300 | KNOWLEDGE-BASED SYSTEMS

Abstract:

Contrastive learning is a commonly used framework in graph self-supervised learning, where models are trained by pulling positive samples closer together and pushing negative samples apart. Most existing graph contrastive learning models divide all nodes into positive and negative samples, which selects some meaningless samples and reduces model performance. Additionally, there is a significant disparity in the ratio of positive to negative samples, and an excessive number of negative samples introduces noise. We therefore propose a novel dynamic sampling strategy that selects more meaningful samples from the perspectives of structure and features, and we incorporate an iteration-based sample selection process into model training to enhance performance. Furthermore, we introduce a curriculum learning training method based on the easy-to-difficult principle: sample training for each iteration is treated as a task, enabling the rapid capture of relevant and meaningful sample information. Extensive experiments validate the superior performance of our model across nine real-world datasets.
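The dynamic division can be sketched as an intersection of a feature-space k-NN view with the structural view, keeping only pairs that agree in both and discarding ambiguous ones; the code below is a minimal PyTorch rendering with assumed dense inputs, and the curriculum aspect is reduced to a comment on scheduling k.

```python
import torch
import torch.nn.functional as F

def divide_samples(z, adj, k=8):
    # z: (N, d) node embeddings; adj: (N, N) dense 0/1 adjacency (float)
    zn = F.normalize(z, dim=-1)
    sim = zn @ zn.T                                               # feature view
    knn = torch.zeros_like(sim).scatter_(1, sim.topk(k, dim=1).indices, 1.0)
    pos = (knn * adj).bool()              # near in BOTH features and structure
    neg = ((1 - knn) * (1 - adj)).bool()  # far in both views
    return pos, neg                       # ambiguous pairs are simply dropped

# Curriculum flavour: shrink k (or raise a similarity threshold) across
# iterations so later stages keep only harder, more informative pairs.
pos, neg = divide_samples(torch.randn(20, 8),
                          (torch.rand(20, 20) < 0.2).float(), k=5)
```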

Keyword:

Curriculum learning; Graph contrastive learning; Graph neural networks; K-nearest neighbors; Self-supervised

Cite:

GB/T 7714: Lin, Yu-Xi, Zhang, Qi-Rong, Li, Jin, et al. Curriculum-guided dynamic division strategy for graph contrastive learning [J]. KNOWLEDGE-BASED SYSTEMS, 2024, 300.
MLA: Lin, Yu-Xi, et al. "Curriculum-guided dynamic division strategy for graph contrastive learning." KNOWLEDGE-BASED SYSTEMS 300 (2024).
APA: Lin, Yu-Xi, Zhang, Qi-Rong, Li, Jin, Gong, Xiao-Ting, Fu, Yang-Geng. Curriculum-guided dynamic division strategy for graph contrastive learning. KNOWLEDGE-BASED SYSTEMS, 2024, 300.

Boosting Accuracy of Differentially Private Continuous Data Release for Federated Learning SCIE
Journal Article | 2024, 19, 10287-10301 | IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY

Abstract:

Incorporating differentially private continuous data release (DPCR) into private federated learning (FL) has recently emerged as a powerful technique for enhancing accuracy, and designing an effective DPCR model is the key to improving it. However, state-of-the-art DPCR models limit the potential for accuracy gains due to insufficient privacy budget allocation and designs tied to specific iteration numbers. To boost accuracy further, we develop an augmented BIT-based continuous data release (AuBCR) model, leading to demonstrable accuracy enhancements. By employing a dual-release strategy, AuBCR gains the potential to further improve accuracy, while confronting the challenges of consistent release and a doubly nested, complex privacy budget allocation problem. Against this, we design an efficient optimal consistent estimation algorithm with only O(1) complexity per release. Subsequently, we introduce the (k, N)-AuBCR model concept and design a meta-factor method. This innovation significantly reduces the number of optimization variables from O(T) to O(lg²T), greatly enhancing the solvability of optimal privacy budget allocation while supporting an arbitrary iteration number T. Our experiments on classical datasets show that AuBCR boosts accuracy by 4.9% to 18.1% compared to traditional private FL and by 0.4% to 1.2% compared to the state-of-the-art ABCRG model.
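AuBCR builds on BIT-style (binary-tree) continuous release, whose core is easy to sketch: every dyadic block of the stream gets its own noise, so any prefix sum combines at most log2(T) noisy nodes. The NumPy sketch below shows that classic mechanism with a naive uniform budget split; AuBCR's dual release, consistency estimation, and optimized budget allocation are not reproduced here.

```python
import numpy as np

def tree_release(stream, eps):
    # Release all prefix sums of `stream` under eps-DP using the classic
    # binary-tree mechanism: noise each dyadic block once, reuse it everywhere.
    T = len(stream)
    levels = int(np.ceil(np.log2(T))) + 1
    eps_node = eps / levels                  # naive uniform budget split
    noisy = {}                               # (block_len, end) -> noisy block sum
    out = []
    for t in range(1, T + 1):
        i, prefix = t, 0.0
        while i > 0:                         # BIT-style dyadic decomposition
            low = i & (-i)                   # block length (lowbit trick)
            key = (low, i)
            if key not in noisy:
                block = sum(stream[i - low:i])
                noisy[key] = block + np.random.laplace(0, 1.0 / eps_node)
            prefix += noisy[key]
            i -= low
        out.append(prefix)                   # noisy prefix sum over [1, t]
    return out

release = tree_release(np.random.randint(0, 2, 64).tolist(), eps=1.0)
```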

Keyword:

binary indexed tree; continuous data release; differential privacy; Federated learning; matrix mechanism

Cite:

GB/T 7714: Cai, Jianping, Ye, Qingqing, Hu, Haibo, et al. Boosting Accuracy of Differentially Private Continuous Data Release for Federated Learning [J]. IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2024, 19: 10287-10301.
MLA: Cai, Jianping, et al. "Boosting Accuracy of Differentially Private Continuous Data Release for Federated Learning." IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY 19 (2024): 10287-10301.
APA: Cai, Jianping, Ye, Qingqing, Hu, Haibo, Liu, Ximeng, Fu, Yanggeng. Boosting Accuracy of Differentially Private Continuous Data Release for Federated Learning. IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2024, 19, 10287-10301.

DWSSA: Alleviating over-smoothness for deep Graph Neural Networks SCIE
Journal Article | 2024, 174 | NEURAL NETWORKS

Abstract:

Graph Neural Networks (GNNs) have demonstrated great potential for outstanding performance in various graph-related tasks, e.g., graph classification and link prediction. However, most of them suffer from a common issue: shallow networks capture very limited knowledge. Prior works design deep GNNs with more layers to address this, which in turn introduces a new challenge, the infamous over-smoothness. Over-emphasizing node features while considering only a static graph structure with uniform weights is a key cause of the over-smoothness issue. To alleviate it, this paper proposes a Dynamic Weighting Strategy (DWS). We first employ Fuzzy C-Means (FCM) to cluster all nodes into several groups and obtain each node's fuzzy assignment, based on which a novel metric function is devised to dynamically adjust the aggregation weights. This dynamic weighting strategy enables not only intra-cluster interactions but also inter-cluster aggregations, which addresses the undifferentiated aggregation caused by uniform weights. Based on DWS, we further design a Structure Augmentation (SA) step to address the underutilization of graph structure, where some potentially meaningful connections (i.e., edges) are added to the original graph via a parallelizable KNN algorithm. Overall, the optimized Dynamic Weighting Strategy with Structure Augmentation (DWSSA) alleviates over-smoothness by reducing noisy aggregations and utilizing topological knowledge. Extensive experiments on eleven homophilous or heterophilous graph benchmarks demonstrate the effectiveness of DWSSA in alleviating over-smoothness and enhancing the performance of deep GNNs.
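A loose sketch of the DWS idea: after fuzzy C-means assigns each node a membership vector, an edge weight can be derived from the overlap of the endpoints' memberships, so intra-cluster neighbors are emphasized without cutting off inter-cluster aggregation. The metric function below (fuzzy overlap) is our stand-in for the paper's devised metric.

```python
import numpy as np

def fcm_edge_weights(memberships, edges):
    # memberships: (N, K) fuzzy c-means assignment of each node to K clusters
    # edges:       list of (u, v) pairs
    w = {}
    for u, v in edges:
        # Overlap of fuzzy assignments: 1.0 for identical membership vectors,
        # still > 0 for nodes that straddle different clusters.
        w[(u, v)] = float(np.minimum(memberships[u], memberships[v]).sum())
    return w

m = np.random.dirichlet(np.ones(3), size=5)   # 5 nodes, 3 fuzzy clusters
print(fcm_edge_weights(m, [(0, 1), (1, 2)]))
```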

Keyword:

Clustering; Deep graph neural networks; Node classification; Over-smoothness; Structure augmentation

Cite:

GB/T 7714: Zhang, Qirong, Li, Jin, Ye, Qingqing, et al. DWSSA: Alleviating over-smoothness for deep Graph Neural Networks [J]. NEURAL NETWORKS, 2024, 174.
MLA: Zhang, Qirong, et al. "DWSSA: Alleviating over-smoothness for deep Graph Neural Networks." NEURAL NETWORKS 174 (2024).
APA: Zhang, Qirong, Li, Jin, Ye, Qingqing, Lin, Yuxi, Chen, Xinlong, Fu, Yang-Geng. DWSSA: Alleviating over-smoothness for deep Graph Neural Networks. NEURAL NETWORKS, 2024, 174.

Curriculum-Enhanced Residual Soft An-Isotropic Normalization for Over-Smoothness in Deep GNNs EI
Conference Paper | 2024, 38 (12), 13528-13536 | 38th AAAI Conference on Artificial Intelligence, AAAI 2024

Abstract:

Despite graph neural networks' significant performance gains over many classic techniques in various graph-related downstream tasks, their success is restricted to shallow models due to over-smoothness and optimization difficulties, among other issues. In this paper, to alleviate the over-smoothing issue, we propose a soft graph normalization method that preserves the diversity of node embeddings and prevents indiscrimination due to possible over-closeness. Combined with residual connections, we analyze why the method can effectively capture the knowledge in both input graph structures and node features even with deep networks. Additionally, inspired by curriculum learning, which learns easy examples before hard ones, we propose a novel label-smoothing-based learning framework to enhance the optimization of deep GNNs: it iteratively smooths labels in an auxiliary graph and constructs many gradually less-smooth tasks for extracting increasingly complex knowledge and discriminating nodes from coarse to fine. The method arguably reduces the risk of overfitting and generalizes better. Finally, extensive experiments demonstrate the effectiveness and potential of the proposed model and learning framework through comparison with twelve existing baselines, including state-of-the-art methods, on twelve real-world node classification benchmarks.
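Our loose reading of the label-smoothing curriculum, as a sketch: propagate the gold labels over an auxiliary (normalized) graph to obtain a ladder of targets from heavily smoothed to raw, then train against them easy-to-hard. The function name and schedule below are assumptions, not the paper's exact construction.

```python
import torch
import torch.nn.functional as F

def smoothed_label_curriculum(y_onehot, adj_norm, steps):
    # y_onehot: (N, C) gold labels; adj_norm: (N, N) row-normalized adjacency
    targets, y = [], y_onehot.clone()
    for _ in range(steps):
        y = adj_norm @ y                         # one smoothing hop
        targets.append(y / y.sum(dim=1, keepdim=True).clamp_min(1e-8))
    return targets[::-1] + [y_onehot]            # easy (smoothest) -> hard (raw)

A = torch.rand(5, 5)
A = A / A.sum(dim=1, keepdim=True)               # toy row-normalized graph
stages = smoothed_label_curriculum(
    F.one_hot(torch.tensor([0, 1, 2, 0, 1])).float(), A, steps=3)
```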

Keyword:

Curricula; Graphic methods; Graph neural networks; Graph structures; Graph theory; Iterative methods; Learning systems

Cite:

GB/T 7714: Li, Jin, Zhang, Qirong, Xu, Shuling, et al. Curriculum-Enhanced Residual Soft An-Isotropic Normalization for Over-Smoothness in Deep GNNs [C]. 2024: 13528-13536.
MLA: Li, Jin, et al. "Curriculum-Enhanced Residual Soft An-Isotropic Normalization for Over-Smoothness in Deep GNNs." (2024): 13528-13536.
APA: Li, Jin, Zhang, Qirong, Xu, Shuling, Chen, Xinlong, Guo, Longkun, Fu, Yang-Geng. Curriculum-Enhanced Residual Soft An-Isotropic Normalization for Over-Smoothness in Deep GNNs. (2024): 13528-13536.

TRAINING GRAPH TRANSFORMERS VIA CURRICULUM-ENHANCED ATTENTION DISTILLATION Scopus
Other | 2024 | ICLR 2024 (12th International Conference on Learning Representations)

Abstract:

Recent studies have shown that Graph Transformers (GTs) can be effective for specific graph-level tasks. However, when it comes to node classification, training GTs remains challenging, especially in semi-supervised settings with a severe scarcity of labeled data. Our paper aims to address this research gap by focusing on semi-supervised node classification. To accomplish this, we develop a curriculum-enhanced attention distillation method that uses a local GT teacher and a global GT student. Additionally, we introduce the concepts of in-class and out-of-class and propose two improvements, out-of-class entropy and top-k pruning, to facilitate the student's out-of-class exploration under the teacher's in-class guidance. Taking inspiration from human learning, our method uses a curriculum mechanism for distillation that initially provides strict guidance to the student and gradually allows more out-of-class exploration via a dynamic balance. Extensive experiments show that our method outperforms many state-of-the-art methods on seven public graph benchmarks, proving its effectiveness.
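The distillation objective can be sketched as a masked KL term: the teacher's in-class entries always guide the student, the student's strongest out-of-class entries survive top-k pruning, and a curriculum coefficient relaxes the guidance over training. The PyTorch fragment below is our approximation; the paper's exact masks, out-of-class entropy term, and schedule differ.

```python
import torch
import torch.nn.functional as F

def distill_loss(student_attn, teacher_attn, in_class_mask,
                 epoch, max_epoch, k=16):
    # student_attn: (N, N) raw student attention scores
    # teacher_attn: (N, N) teacher attention, assumed row-stochastic
    # in_class_mask: (N, N) float 0/1 marking the teacher's in-class entries
    topk = torch.zeros_like(student_attn).scatter_(
        1, student_attn.topk(k, dim=1).indices, 1.0)   # top-k pruning
    guided = in_class_mask + (1 - in_class_mask) * topk  # allowed entries
    s = F.log_softmax(student_attn.masked_fill(guided == 0, -1e9), dim=1)
    kl = F.kl_div(s, teacher_attn.clamp_min(1e-8), reduction='batchmean')
    lam = 1.0 - epoch / max_epoch   # strict guidance early, exploration late
    return lam * kl
```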

Cite:

GB/T 7714: Huang, Y., Li, J., Chen, X., et al. TRAINING GRAPH TRANSFORMERS VIA CURRICULUM-ENHANCED ATTENTION DISTILLATION [venue unknown].
MLA: Huang, Y., et al. "TRAINING GRAPH TRANSFORMERS VIA CURRICULUM-ENHANCED ATTENTION DISTILLATION" [venue unknown].
APA: Huang, Y., Li, J., Chen, X., Fu, Y.-G. TRAINING GRAPH TRANSFORMERS VIA CURRICULUM-ENHANCED ATTENTION DISTILLATION [venue unknown].
