Publication Search

Query:

Scholar name (学者姓名): 余春艳

Vision Transformer with Progressive Tokenization for CT Metal Artifact Reduction EI
Conference paper | 2023, 2023-June | 48th IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2023)

Abstract :

High-quality Computed Tomography (CT) plays a vital role in clinical diagnosis, but metallic implants introduce severe metal artifacts on CT images and obstruct doctors' decision-making. Much prior research on Metal Artifact Reduction (MAR) is based on Convolutional Neural Networks (CNNs). Recently, the Transformer has demonstrated phenomenal potential in computer vision, and transformer-based methods have been harnessed for CT image denoising; nevertheless, they remain little explored in MAR. To fill this gap, we put forth, to the best of our knowledge, the first transformer-based architecture for MAR. Our method relies on a standard Vision Transformer (ViT). Furthermore, we adopt progressive tokenization to avoid the simple tokenization of ViT, which fails to model local anatomical information. Additionally, to facilitate interaction among tokens, we take advantage of the cyclic shift from the Swin Transformer. Finally, extensive experimental results reveal that the transformer-based technique is superior, to some degree, to those based on CNNs. © 2023 IEEE.
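The cyclic-shift step borrowed from Swin Transformer can be sketched in a few lines. This is a hypothetical illustration, not the paper's code: the function name and the toy 4×4 token grid are my own. Tokens laid out on a 2-D grid are rolled so that subsequent window-based attention mixes information across window boundaries.

```python
import numpy as np

def cyclic_shift(tokens, shift):
    """Roll an (H, W, C) grid of patch tokens up and to the left by `shift`."""
    return np.roll(tokens, shift=(-shift, -shift), axis=(0, 1))

tokens = np.arange(16, dtype=float).reshape(4, 4, 1)  # toy 4x4 grid, C=1
shifted = cyclic_shift(tokens, shift=1)
# the top-left position now holds the token that was one row/column down-right
assert shifted[0, 0, 0] == tokens[1, 1, 0]
```

Reversing the shift after attention (roll by `+shift`) restores the original layout, which is how Swin-style models alternate shifted and unshifted windows.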

Citation:

Zheng, Songwei; Zhang, Dong; Yu, Chunyan; Zhu, Danhong; Zhu, Longlong; Liu, Hao et al. Vision Transformer with Progressive Tokenization for CT Metal Artifact Reduction [C]. 2023.

SRFS-NET: Few Shot Learning Combined with the Salient Region EI
Conference paper | 2021, 309-315 | 4th International Conference on Artificial Intelligence and Pattern Recognition (AIPR 2021)

Abstract :

Few-shot learning aims to recognize novel categories from only a few labeled samples per class, and can thus mitigate the problem of insufficient training samples. Recently, many meta-learning-based methods have been proposed for few-shot learning and have achieved excellent results. However, unlike the human visual attention mechanism, these methods are weak at filtering critical regions automatically, mainly because meta-learning usually treats images as black boxes. Therefore, inspired by the human visual attention mechanism, we introduce the salient region into few-shot learning and propose SRFS-Net. In addition, given the introduction of the salient region, we also modify the embedding function to improve the feature extraction capability of the network. Finally, experimental results on the miniImagenet dataset show that our model outperforms recent few-shot learning models in the 5-way 1-shot setting. © 2021 ACM.
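One plausible way to fold a salient region into an embedding is to re-weight feature maps by a saliency mask before pooling, so salient pixels dominate the descriptor used for few-shot comparison. The sketch below is an assumption for illustration only; `salient_pool` and its weighting scheme are not the paper's actual embedding function.

```python
import numpy as np

def salient_pool(feat, mask):
    """feat: (C, H, W) feature map; mask: (H, W) non-negative saliency weights.
    Returns a (C,) descriptor where salient locations dominate."""
    w = mask / (mask.sum() + 1e-8)      # normalize mask to a weighting over pixels
    return (feat * w).sum(axis=(1, 2))  # saliency-weighted spatial pooling

feat = np.random.rand(3, 4, 4)
desc = salient_pool(feat, np.ones((4, 4)))  # a uniform mask reduces to mean pooling
```

With a uniform mask the result equals plain global average pooling, so the weighting only changes behavior where saliency actually varies.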

Keyword :

Behavioral research; Embeddings

Citation:

Li, Ying; Huang, RenJie; Chen, YuJie; Kang, Da; Yu, ChunYan; Wang, Xiu. SRFS-NET: Few Shot Learning Combined with the Salient Region [C]. 2021: 309-315.

Melody Generation with Emotion Constraint EI
Conference paper | 2021, 1598-1603 | 5th International Conference on Electronic Information Technology and Computer Engineering (EITCE 2021)

Abstract :

At present, most melody generation models introduce chord, rhythm, and other constraints into the generation process to ensure the quality of the generated melody, yet they ignore the importance of emotion. Music is an emotional art, and as the primary part of a piece of music, a melody usually carries a clear emotional expression. It is therefore necessary to introduce emotion information and constraints to generate melodies with clear emotional expression, which means the model should be able to learn the relevant characteristics of emotions from the given information and constraints. To this end, we propose ECMG, a melody generation model with emotion constraints. The model takes a Generative Adversarial Network (GAN) as its main body and adds an emotion encoder and an emotion classifier to introduce emotion information and emotional constraints. We evaluated both the quality and the emotion of melodies generated by ECMG. In the quality evaluation, the quality score of ECMG's melodies is within 0.2 of that of real melodies in the training set, and is also close to that of melodies generated by PopMNet. In the emotion evaluation, the classification accuracy in both the four-class and two-class settings is much higher than random chance. These results show that ECMG can generate melodies with specific emotions while maintaining high generation quality. © 2021 ACM.

Keyword :

Classification (of information); Generative adversarial networks; Music; Quality control

Citation:

Huang, Renjie; Li, Yin; Kang, Da; Chen, Yujie; Yu, Chunyan; Wang, Xiu. Melody Generation with Emotion Constraint [C]. 2021: 1598-1603.

元结构下的文献网络关系预测 (Relationship Prediction for Literature Network under Meta-Structure) CSCD PKU
Journal article | 2020, 33 (3), 277-286 | 模式识别与人工智能 (Pattern Recognition and Artificial Intelligence)

Abstract :

To address relationship prediction between nodes in a literature network, node similarity is taken as the probability of a relationship between nodes: network representation learning is used to embed the network's nodes into a low-dimensional space, where node similarity is then computed, and a meta-structure-based network representation learning model is proposed. According to the correlations between nodes under different meta-structures, the corresponding feature representations are fused and the network is mapped into a low-dimensional feature space. Distance measurement in this space then yields relationship predictions for the literature network. Experiments show that the proposed model obtains good relationship prediction results on literature networks.

Keyword :

元结构 (meta-structure); 关系预测 (relationship prediction); 文献网络 (literature network); 网络表示学习 (network representation learning)

Citation:

王秀, 陈璐, 余春艳. 元结构下的文献网络关系预测 [J]. 模式识别与人工智能, 2020, 33 (3): 277-286.

Other versions: CQVIP | 2020, 33 (3), 277-286 | 模式识别与人工智能; Scopus | Relationship Prediction for Literature Network under Meta-Structure | 2020, 33 (3), 277-286 | Pattern Recognition and Artificial Intelligence
鉴别性特征学习模型实现跨摄像头下行人即时对齐 (Instant Cross-Camera Pedestrian Alignment via a Discriminative Feature Learning Model) CSCD PKU
Journal article | 2019, 31 (4), 602-611 | 计算机辅助设计与图形学学报 (Journal of Computer-Aided Design & Computer Graphics)

Abstract :

To solve incorrect target matching and missed sub-sequence matching caused by deferred association algorithms, a method for instant cross-camera pedestrian alignment using a discriminative feature learning model is proposed. First, a Siamese-network-based model integrates pedestrian classification and pedestrian identity discrimination, so that highly discriminative pedestrian appearance features can be learned from a single frame of the target pedestrian and pedestrian similarity values computed. Second, an instant cross-camera pedestrian alignment model is proposed, which builds and solves a minimum-cost flow graph in real time according to the association fitness of pedestrian appearance, timing, and space. Experimental results on the person re-identification datasets Market-1501 and CUHK03 show that fusing the classification and identity discrimination models significantly improves the effectiveness of feature extraction with good generalization, outperforming Gate-SCNN and S-LSTM across the board; further, in cross-camera scenarios over non-overlapping regions... [abstract truncated in source]

Keyword :

卷积孪生网络 (convolutional Siamese network); 行人即时对齐 (instant pedestrian alignment); 鉴别性特征学习模型 (discriminative feature learning model)

Citation:

余春艳, 钟诗俊. 鉴别性特征学习模型实现跨摄像头下行人即时对齐 [J]. 计算机辅助设计与图形学学报, 2019, 31 (4): 602-611.

Other version: CQVIP | 2019, 31 (4), 602-611 | 计算机辅助设计与图形学学报
New Knowledge Distillation for Incremental Object Detection CPCI-S
Conference paper | 2019 | International Joint Conference on Neural Networks (IJCNN)

Abstract :

Nowadays, Convolutional Neural Networks are successfully applied to object detection in images. When new object classes emerge, it is common to adapt a CNN-based detection model by retraining it on samples of the new classes. Unfortunately, the adapted model can then only detect the new classes and no longer identifies the old ones, a phenomenon called catastrophic forgetting that also occurs in incremental classification tasks. Knowledge distillation has achieved good results in incremental learning for classification tasks. However, because object detection comprises two tasks at once, classification and localization, a straightforward migration of the knowledge distillation method cannot provide satisfactory results in incremental learning for object detection. Hence, this paper proposes a new knowledge distillation for incremental object detection, introducing a detection distillation loss that covers not only classification results but also the locations of the predicted bounding boxes, and that applies not only to the final detected regions of interest but also to all intermediate region proposals. Furthermore, to avoid forgetting knowledge learned from old datasets, this paper not only employs hint learning to retain the characteristic information of the initial model, but also innovatively uses a confidence loss to extract the initial model's confidence information. A series of experiments on the PASCAL VOC 2007 dataset verifies the effectiveness of the proposed method.
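A two-part detection distillation loss of the kind described above can be sketched as follows. This is a hedged illustration, not the paper's exact formulation: the function name, the equal weighting, and the choice of soft cross-entropy plus L2 are my assumptions. The old (teacher) model's class logits and box offsets on each region proposal serve as soft targets for the adapted (student) model.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def detection_distill_loss(s_logits, s_boxes, t_logits, t_boxes, w=1.0):
    """Distill both heads: soft cross-entropy between class distributions
    plus an L2 penalty on box offsets, averaged over region proposals."""
    p_t, p_s = softmax(t_logits), softmax(s_logits)
    cls_term = -np.mean(np.sum(p_t * np.log(p_s + 1e-12), axis=1))  # classification head
    box_term = np.mean((s_boxes - t_boxes) ** 2)                    # localization head
    return cls_term + w * box_term

t_logits = np.array([[2.0, 0.5, -1.0]])          # teacher logits, one proposal
t_boxes = np.array([[0.1, 0.2, 0.3, 0.4]])       # teacher box offsets
base = detection_distill_loss(t_logits, t_boxes, t_logits, t_boxes)
```

When the student matches the teacher exactly, the box term vanishes and only the teacher-distribution entropy remains, so any drift in either head increases the loss.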

Keyword :

catastrophic forgetting; incremental learning; knowledge distillation; object detection

Citation:

Chen, Li; Yu, Chunyan; Chen, Lvcai. New Knowledge Distillation for Incremental Object Detection [C]. 2019.

A GAN Model With Self-attention Mechanism To Generate Multi-instruments Symbolic Music CPCI-S
Conference paper | 2019 | International Joint Conference on Neural Networks (IJCNN)
WoS CC Cited Count: 45

Abstract :

GANs have recently been shown to generate symbolic music in the form of piano-rolls. However, existing GAN-based multi-track music generation methods are often unstable, and, due to defects in temporal feature extraction, the generated multi-track music does not sound natural enough. Therefore, we propose DMB-GAN, a new GAN model with a self-attention mechanism that can extract more temporal features of music to generate multi-instrument music stably. First, to generate more consistent and natural single-track music, we introduce a self-attention mechanism that enables the GAN-based music generation model to extract not only spatial features but also temporal features. Second, to generate multi-instrument music with a harmonic structure across all tracks, we construct a dual generative adversarial architecture with multiple branches, one branch per track. Finally, to improve the quality of the generated multi-instrument symbolic music, we introduce switchable normalization to stabilize network training. Experimental results show that DMB-GAN can stably generate coherent, natural multi-instrument music of good quality.
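The generic mechanism the model builds on, scaled dot-product self-attention over a sequence of time-step features, can be sketched as below. Shapes and the NumPy formulation are illustrative assumptions; the paper's actual layer configuration is not specified here.

```python
import numpy as np

rng = np.random.default_rng(0)
T, d = 8, 16                        # 8 time steps, 16-dim feature per step
x = rng.normal(size=(T, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))

def self_attention(x, Wq, Wk, Wv):
    """Every time step attends over all steps, capturing long-range
    temporal dependencies that plain convolutions miss."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(x.shape[1])            # scaled dot-product
    a = np.exp(scores - scores.max(axis=1, keepdims=True))
    a /= a.sum(axis=1, keepdims=True)                 # softmax over key positions
    return a @ v                                      # (T, d) attended features

out = self_attention(x, Wq, Wk, Wv)
```

Because each output row is a convex combination of all value rows, a note at step 0 can directly influence the representation at step 7, which is the stated motivation for adding self-attention to the generator.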

Keyword :

Generative Adversarial Networks; multi-instruments; self-attention mechanism; switchable normalization; symbolic music generation

Citation:

Guan, Faqian; Yu, Chunyan; Yang, Suqiong. A GAN Model With Self-attention Mechanism To Generate Multi-instruments Symbolic Music [C]. 2019.

Non-parallel Many-to-many Singing Voice Conversion by Adversarial Learning CPCI-S
Conference paper | 2019, 125-132 | Annual Summit and Conference of the Asia-Pacific Signal and Information Processing Association (APSIPA ASC)

Abstract :

With the rapid development of deep learning, speech conversion has made great progress, but deep-learning research on singing voice conversion remains rare; current approaches are mainly statistical and can only achieve one-to-one conversion with parallel training datasets, so their application is limited. This paper proposes a generative adversarial learning model, MSVC-GAN, for many-to-many singing voice conversion using non-parallel datasets. First, the generator of our model is conditioned on the singer label, concatenated to its input as a domain constraint. Furthermore, the model integrates a self-attention mechanism to capture long-term dependencies in the spectral features. Finally, switchable normalization is employed to stabilize network training. Both objective and subjective evaluation results show that our model achieves the highest similarity and naturalness not only on the parallel speech dataset but also on the non-parallel singing dataset.

Citation:

Hu, Jinsen; Yu, Chunyan; Guan, Faqian. Non-parallel Many-to-many Singing Voice Conversion by Adversarial Learning [C]. 2019: 125-132.

Clustering stability-based Evolutionary K-Means SCIE
Journal article | 2019, 23 (1), 305-321 | SOFT COMPUTING
WoS CC Cited Count: 25

Abstract :

Evolutionary K-Means (EKM), which combines K-Means with a genetic algorithm, solves K-Means' initialization problem by selecting parameters automatically through the evolution of partitions. Current EKM algorithms usually choose the silhouette index as the cluster validity index, and they are effective at clustering well-separated clusters; however, their performance on noisy data is often disappointing. Clustering stability-based approaches, on the other hand, are more robust to noise, yet they must be started intelligently to find some challenging clusters. It is therefore natural to join EKM with clustering stability-based analysis. In this paper, we present a novel EKM algorithm that uses clustering stability to evaluate partitions. We first introduce two weighted aggregated consensus matrices, the positive aggregated consensus matrix (PA) and the negative aggregated consensus matrix (NA), to store the clustering tendency for each pair of instances: PA stores the tendency of sharing the same label and NA that of having different labels. Based on these matrices, clusters and partitions can be evaluated from the view of clustering stability. We then propose CSEKM, a clustering stability-based EKM algorithm that evolves partitions and the aggregated matrices simultaneously. To evaluate the algorithm's performance, we compare it with an EKM algorithm, two consensus clustering algorithms, a clustering stability-based algorithm, and a multi-index-based clustering approach. Experimental results on a series of artificial datasets, two simulated datasets, and eight UCI datasets suggest CSEKM is more robust to noise.
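The PA/NA bookkeeping can be illustrated with a minimal, unweighted sketch. Note the paper uses *weighted* aggregated matrices; the function below is an assumption that simply counts co-clustering frequencies across a set of partitions.

```python
import numpy as np

def consensus_matrices(partitions):
    """partitions: list of label vectors (runs x n) over the same n instances.
    Returns (pa, na): pa[i, j] is the fraction of runs where i and j share a
    label; na[i, j] the fraction where they differ."""
    p = np.asarray(partitions)                     # shape (runs, n)
    same = p[:, :, None] == p[:, None, :]          # (runs, n, n) co-membership
    pa = same.mean(axis=0)                         # positive aggregated consensus
    na = 1.0 - pa                                  # negative aggregated consensus
    return pa, na

labels = [[0, 0, 1, 1], [0, 0, 0, 1], [1, 1, 0, 0]]  # three toy partitions
pa, na = consensus_matrices(labels)
# instances 0 and 1 are always co-clustered; 0 and 3 never are
assert pa[0, 1] == 1.0 and na[0, 3] == 1.0
```

Pairs with PA near 1 or NA near 1 are stable under resampling, so a partition whose clusters mostly contain high-PA pairs scores well under a stability-based validity criterion.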

Keyword :

Clustering; Clustering stability; Consensus clustering; Genetic algorithm; K-Means algorithm

Citation:

He, Zhenfeng; Yu, Chunyan. Clustering stability-based Evolutionary K-Means [J]. SOFT COMPUTING, 2019, 23 (1): 305-321.

Other versions: EI, Scopus | 2019, 23 (1), 305-321 | Soft Computing
Address: FZU Library (No. 2 Xuyuan Road, Fuzhou, Fujian, PRC; Post Code: 350116) | Contact: 0591-22865326
Copyright: FZU Library | Technical Support: Beijing Aegean Software Co., Ltd. | 闽ICP备05005463号-1