Query:
Scholar name: 叶少珍
Abstract :
Global pandemics such as COVID-19 have resulted in significant global social and economic disruption. Although polymerase chain reaction (PCR) is recommended as the standard test for identifying SARS-CoV-2, conventional assays are time-consuming. In parallel, although artificial intelligence (AI) has been employed to contain the disease, the implementation of AI in PCR analytics, which may enhance the cognition of diagnostics, is quite rare. The information that the amplification curve reveals can reflect the dynamics of reactions. Here, we present a novel AI-aided on-chip approach by integrating deep learning with microfluidic paper-based analytical devices (μPADs) to detect synthetic RNA templates of the SARS-CoV-2 ORF1ab gene. The μPADs feature a multilayer structure by which the devices are compatible with conventional PCR instruments. During analysis, real-time PCR data were synchronously fed to three unsupervised learning models with deep neural networks, including RNN, LSTM, and GRU. Of these, the GRU is found to be most effective and accurate. Based on the experimentally obtained datasets, qualitative forecasting can be made as early as 13 cycles, which significantly enhances the efficiency of the PCR tests by 67.5% (~40 min). Also, an accurate prediction of the end-point value of PCR curves can be obtained by GRU around 20 cycles. To further improve PCR testing efficiency, we also propose AI-aided dynamic evaluation criteria for determining critical cycle numbers, which enables real-time quantitative analysis of PCR tests. The presented approach is the first to integrate AI for on-chip PCR data analysis. It is capable of forecasting the final output and the trend of qPCR in addition to the conventional end-point Cq calculation. It is also capable of fully exploring the dynamics and intrinsic features of each reaction. This work leverages methodologies from diverse disciplines to provide perspectives and insights beyond the scope of a single scientific field. It is universally applicable and can be extended to multiple areas of fundamental research.
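As an illustration of the kind of sequence model the abstract describes, the sketch below shows a small GRU regressor (PyTorch) that reads the first cycles of an amplification curve and predicts its end-point value. The architecture, layer sizes, and the synthetic sigmoid-shaped training curves are assumptions for illustration, not the authors' implementation.

```python
# Hedged sketch: a GRU that forecasts the end-point value of a qPCR
# amplification curve from its first `n_early` cycles. Synthetic
# sigmoid-shaped curves stand in for real fluorescence data.
import torch
import torch.nn as nn

class CurveForecaster(nn.Module):
    def __init__(self, hidden_size=32):
        super().__init__()
        self.gru = nn.GRU(input_size=1, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)   # end-point fluorescence

    def forward(self, x):                        # x: (batch, cycles, 1)
        _, h = self.gru(x)                       # h: (1, batch, hidden)
        return self.head(h[-1]).squeeze(-1)      # (batch,)

def synthetic_curves(n=256, cycles=40):
    """Sigmoid-shaped amplification curves with random Cq and plateau."""
    t = torch.arange(cycles).float()
    cq = torch.rand(n, 1) * 15 + 15              # inflection cycle in 15..30
    plateau = torch.rand(n, 1) * 0.5 + 0.5       # final fluorescence in 0.5..1.0
    curves = plateau / (1 + torch.exp(-(t - cq) / 2))
    return curves.unsqueeze(-1)                  # (n, cycles, 1)

n_early = 13                                     # cycles observed before forecasting
curves = synthetic_curves()
x, y = curves[:, :n_early, :], curves[:, -1, 0]  # early window -> end-point target

model = CurveForecaster()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    opt.step()
```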
Cite:
GB/T 7714 | Hao Sun , Linghu Xiong , Yi Huang et al. AI-aided on-chip nucleic acid assay for smart diagnosis of infectious disease [J]. | 自然科学基础研究(英文) , 2022 , 2 (3) : 476-486 . |
MLA | Hao Sun et al. "AI-aided on-chip nucleic acid assay for smart diagnosis of infectious disease" . | 自然科学基础研究(英文) 2 . 3 (2022) : 476-486 . |
APA | Hao Sun , Linghu Xiong , Yi Huang , Xinkai Chen , Yongjian Yu , Shaozhen Ye et al. AI-aided on-chip nucleic acid assay for smart diagnosis of infectious disease . | 自然科学基础研究(英文) , 2022 , 2 (3) , 476-486 . |
Abstract :
The classic UPSNet achieves good panoptic segmentation results, but it uses a feature pyramid network with one-way information flow, so the instance branch does not localize target instances accurately enough, and the semantic segmentation ability of the semantic branch also needs further improvement. To address this, the feature pyramid network is redesigned by considering both the differences and the commonalities of the two tasks, so that feature maps better suited to panoptic segmentation are extracted and the AP metric of the instance branch is improved. In the semantic branch, Kronecker convolution is introduced and fused with deformable convolution, giving the feature maps a larger receptive field while still capturing local information and raising the mIoU metric of the semantic branch. Experiments on the Cityscapes dataset verify the effectiveness of each designed module and of the model as a whole.
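To make the semantic-branch idea more concrete, the sketch below fuses a deformable convolution (from torchvision) with a dilated convolution branch. The dilated branch is only a rough stand-in for the Kronecker convolution named in the abstract, and the additive fusion, channel sizes, and offset predictor are illustrative assumptions, not the paper's design.

```python
# Hedged sketch: a semantic-branch block that fuses a deformable convolution
# with a dilated convolution branch (stand-in for Kronecker convolution).
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class FusedContextBlock(nn.Module):
    def __init__(self, channels=256, dilation=4):
        super().__init__()
        # Offsets for a 3x3 deformable kernel: 2 coordinates per kernel tap.
        self.offset = nn.Conv2d(channels, 2 * 3 * 3, kernel_size=3, padding=1)
        self.deform = DeformConv2d(channels, channels, kernel_size=3, padding=1)
        # Dilated branch enlarges the receptive field.
        self.dilated = nn.Conv2d(channels, channels, kernel_size=3,
                                 padding=dilation, dilation=dilation)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        deformed = self.deform(x, self.offset(x))
        return self.relu(deformed + self.dilated(x))  # simple additive fusion

feat = torch.randn(1, 256, 64, 128)                   # e.g. an FPN semantic feature map
print(FusedContextBlock()(feat).shape)                # torch.Size([1, 256, 64, 128])
```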
Cite:
GB/T 7714 | 薛程 , 叶少珍 . 多尺度定位信息增强的图像全景分割方法 [J]. | 福州大学学报(自然科学版) , 2021 , 49 (3) : 302-308 . |
MLA | 薛程 et al. "多尺度定位信息增强的图像全景分割方法" . | 福州大学学报(自然科学版) 49 . 3 (2021) : 302-308 . |
APA | 薛程 , 叶少珍 . 多尺度定位信息增强的图像全景分割方法 . | 福州大学学报(自然科学版) , 2021 , 49 (3) , 302-308 . |
Abstract :
The high-resolution images produced by the super-resolution generative adversarial network (SRGAN) are markedly better than those of traditional methods, but SRGAN suffers from unstable training and under-use of shallow image features, which largely limits the quality of the generated images. To address this, a feature-enhanced SRGAN model using information distillation blocks is proposed. Concatenating long- and short-path features along the image channels enhances texture information, and a compression unit removes redundant information from the image features. In addition, a relativistic average discriminator replaces the binary-classification discriminator of the original SRGAN to keep adversarial training stable. Super-resolution reconstruction is performed at a 4x upscaling factor, and qualitative and quantitative evaluations are carried out on the BSD100 and SET14 datasets. Experiments show that, compared with SRGAN, the method trains more stably, generates images with clearer texture details, and achieves better super-resolution reconstruction results.
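The relativistic average discriminator mentioned in the abstract has a compact standard formulation; the sketch below shows one common way to write the RaGAN discriminator and generator losses, assuming a critic that outputs raw logits. The function names and the BCE-with-logits formulation are illustrative, not taken from the paper.

```python
# Hedged sketch: relativistic average GAN (RaGAN) losses.
# c_real / c_fake are raw critic logits for batches of real / generated images.
import torch
import torch.nn.functional as F

def ragan_d_loss(c_real, c_fake):
    """Discriminator: real should score above the average fake, and vice versa."""
    real_rel = c_real - c_fake.mean()
    fake_rel = c_fake - c_real.mean()
    return (F.binary_cross_entropy_with_logits(real_rel, torch.ones_like(real_rel)) +
            F.binary_cross_entropy_with_logits(fake_rel, torch.zeros_like(fake_rel)))

def ragan_g_loss(c_real, c_fake):
    """Generator: the symmetric objective with the labels swapped."""
    real_rel = c_real - c_fake.mean()
    fake_rel = c_fake - c_real.mean()
    return (F.binary_cross_entropy_with_logits(fake_rel, torch.ones_like(fake_rel)) +
            F.binary_cross_entropy_with_logits(real_rel, torch.zeros_like(real_rel)))

# Example with dummy critic outputs:
c_real, c_fake = torch.randn(8, 1), torch.randn(8, 1)
print(ragan_d_loss(c_real, c_fake).item(), ragan_g_loss(c_real, c_fake).item())
```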
Cite:
GB/T 7714 | 陈波 , 翁谦 , 叶少珍 . 改进生成对抗网络的图像超分辨率重建算法 [J]. | 福州大学学报(自然科学版) , 2021 , 49 (3) : 295-301 . |
MLA | 陈波 et al. "改进生成对抗网络的图像超分辨率重建算法" . | 福州大学学报(自然科学版) 49 . 3 (2021) : 295-301 . |
APA | 陈波 , 翁谦 , 叶少珍 . 改进生成对抗网络的图像超分辨率重建算法 . | 福州大学学报(自然科学版) , 2021 , 49 (3) , 295-301 . |
Abstract :
Building on a comparison of existing image retargeting techniques, and addressing the problem that the edge-importance map of the traditional Seam Carving method captures content but ignores the relationship between the image's main subject and its edges, an improved image retargeting method based on bidirectional seam carving is proposed. By combining a gradient energy map for edge detection with a saliency map for content detection, the method highlights important subjects and protects important edge information, while reducing the likelihood that carved seams pass through important content, so that a reasonable layout relationship between the important parts of the retargeted image and the image as a whole is preserved. Using the SIFT Flow algorithm as a quality-assessment reference, experiments on a test dataset and several comparison groups verify that the improved retargeting method achieves good visual results.
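A minimal sketch of the energy combination described above: a gradient-magnitude map and a saliency map are blended into one energy map, and a single vertical seam is then found by dynamic programming. The blending weight and the placeholder saliency map are assumptions for illustration only.

```python
# Hedged sketch: blend gradient energy with a saliency map, then find one
# vertical seam by dynamic programming (the standard seam carving step).
import numpy as np

def combined_energy(gray, saliency, alpha=0.5):
    gy, gx = np.gradient(gray.astype(float))
    gradient = np.abs(gx) + np.abs(gy)               # edge importance
    norm = lambda m: (m - m.min()) / (np.ptp(m) + 1e-8)
    return alpha * norm(gradient) + (1 - alpha) * norm(saliency)

def vertical_seam(energy):
    h, w = energy.shape
    cost = energy.copy()
    for i in range(1, h):                            # accumulate minimal path cost
        left = np.roll(cost[i - 1], 1);  left[0] = np.inf
        right = np.roll(cost[i - 1], -1); right[-1] = np.inf
        cost[i] += np.minimum(np.minimum(left, cost[i - 1]), right)
    seam = np.zeros(h, dtype=int)
    seam[-1] = int(np.argmin(cost[-1]))
    for i in range(h - 2, -1, -1):                   # backtrack within +/-1 column
        j = seam[i + 1]
        lo, hi = max(j - 1, 0), min(j + 2, w)
        seam[i] = lo + int(np.argmin(cost[i, lo:hi]))
    return seam

gray = np.random.rand(120, 160)
saliency = gray                                      # placeholder saliency map
print(vertical_seam(combined_energy(gray, saliency))[:5])
```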
Keyword :
bidirectional seam carving; image retargeting; saliency; gradient energy
Cite:
GB/T 7714 | 陈加玲 , 叶少珍 . 基于Seam Carving的双向接缝裁剪图像重定向 [J]. | 福州大学学报(自然科学版) , 2021 , 49 (2) : 163-169 . |
MLA | 陈加玲 et al. "基于Seam Carving的双向接缝裁剪图像重定向" . | 福州大学学报(自然科学版) 49 . 2 (2021) : 163-169 . |
APA | 陈加玲 , 叶少珍 . 基于Seam Carving的双向接缝裁剪图像重定向 . | 福州大学学报(自然科学版) , 2021 , 49 (2) , 163-169 . |
Abstract :
To address low-light image enhancement, a cyclic image-enhancement network based on generative adversarial networks (GANs) is proposed. An unsupervised learning scheme is introduced: by minimizing a cycle-consistency loss and an adversarial loss, the original illumination map of a low-light image is estimated, and the resulting image-enhancement model is used to boost the brightness and related properties of images captured under insufficient lighting. Qualitative and quantitative evaluations are conducted on both a synthetic low-light image dataset and a real natural low-light image dataset. Experiments show that, compared with several existing image-enhancement methods, the proposed method achieves better enhancement and can restore vivid, clear, intuitive, and natural high-quality images from low-light inputs.
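To illustrate the losses named in the abstract, the sketch below computes a cycle-consistency loss and an adversarial loss for an unpaired low-light/normal-light setting. The tiny convolutional generators and discriminator and the loss weighting are placeholders, not the paper's networks.

```python
# Hedged sketch: cycle-consistency + adversarial losses for unpaired low-light
# enhancement. G_low2norm: low -> normal, G_norm2low: normal -> low,
# D_norm: discriminator on normal-light images. Networks are tiny placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as nnf

def tiny_net():
    return nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(16, 3, 3, padding=1), nn.Sigmoid())

G_low2norm, G_norm2low = tiny_net(), tiny_net()
D_norm = nn.Sequential(nn.Conv2d(3, 16, 4, stride=2, padding=1), nn.ReLU(),
                       nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 1))

low = torch.rand(4, 3, 64, 64) * 0.3                 # unpaired low-light batch
normal = torch.rand(4, 3, 64, 64)                    # unpaired normal-light batch

fake_normal = G_low2norm(low)
fake_low = G_norm2low(normal)
cycle_loss = (nnf.l1_loss(G_norm2low(fake_normal), low) +   # low -> normal -> low
              nnf.l1_loss(G_low2norm(fake_low), normal))    # normal -> low -> normal
adv_loss = nnf.binary_cross_entropy_with_logits(            # generator fools D
    D_norm(fake_normal), torch.ones(4, 1))
gen_loss = adv_loss + 10.0 * cycle_loss                      # lambda_cyc = 10 (assumed)
print(float(gen_loss))
```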
Keyword :
convolutional neural network; image enhancement; image restoration; generative adversarial network
Cite:
GB/T 7714 | 黄路遥 , 叶少珍 . 基于GAN的低照度图像增强算法研究 [J]. | 福州大学学报(自然科学版) , 2020 , 48 (05) : 551-557 . |
MLA | 黄路遥 et al. "基于GAN的低照度图像增强算法研究" . | 福州大学学报(自然科学版) 48 . 05 (2020) : 551-557 . |
APA | 黄路遥 , 叶少珍 . 基于GAN的低照度图像增强算法研究 . | 福州大学学报(自然科学版) , 2020 , 48 (05) , 551-557 . |
Abstract :
Traditional classification methods for high-spatial-resolution remote sensing images ("high-resolution remote sensing images" for short) suffer severely from the phenomena of "same object, different spectra" and "different objects, same spectrum"; deep learning offers a new solution for classifying such images. However, the small number of remote sensing training samples easily leads to network overfitting. Using deep learning together with a transfer learning strategy, an improved Inception-V3 remote sensing scene classification model is proposed. A Dropout layer is first added before the fully connected layer of the original Inception-V3 model to further reduce overfitting; during training, a transfer learning strategy is adopted to make full use of existing models and knowledge and to improve training efficiency. Experimental results on two large high-resolution remote sensing scene datasets, AID and NWPU-RESISC45, show that the improved Inception-V3 converges faster and trains more stably than the original Inception-V3; compared with other traditional methods and deep learning networks, the proposed model also achieves a considerable improvement in classification accuracy, verifying its effectiveness.
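A minimal sketch of the transfer-learning setup described above, using torchvision's pretrained Inception-V3 (recent torchvision assumed) and inserting a Dropout layer before a new fully connected classifier. The number of classes, dropout rate, and frozen-backbone choice are assumptions, not the paper's exact configuration.

```python
# Hedged sketch: Inception-V3 transfer learning with a Dropout layer inserted
# before a new fully connected classifier head.
import torch.nn as nn
from torchvision import models

num_classes = 45                                     # e.g. NWPU-RESISC45 (assumed)

model = models.inception_v3(weights=models.Inception_V3_Weights.DEFAULT)
model.aux_logits = False                             # drop the auxiliary head for fine-tuning
model.AuxLogits = None

for p in model.parameters():                         # freeze the pretrained backbone
    p.requires_grad = False

# Replace the classifier: Dropout first, then a new fully connected layer.
model.fc = nn.Sequential(nn.Dropout(p=0.5),
                         nn.Linear(model.fc.in_features, num_classes))
# Note: Inception-V3 expects 299x299 inputs during training and evaluation.
```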
Keyword :
Inception-V3; convolutional neural network; scene classification; deep learning; transfer learning; remote sensing image classification
Cite:
GB/T 7714 | 蔡之灵 , 翁谦 , 叶少珍 et al. 基于Inception-V3模型的高分遥感影像场景分类 [J]. | 国土资源遥感 , 2020 , 32 (3) : 80-89 . |
MLA | 蔡之灵 et al. "基于Inception-V3模型的高分遥感影像场景分类" . | 国土资源遥感 32 . 3 (2020) : 80-89 . |
APA | 蔡之灵 , 翁谦 , 叶少珍 , 简彩仁 . 基于Inception-V3模型的高分遥感影像场景分类 . | 国土资源遥感 , 2020 , 32 (3) , 80-89 . |
Abstract :
With deepening research and cross-disciplinary fusion in modern remote sensing imagery, the classification of high-spatial-resolution remote sensing images has captured the attention of researchers in the remote sensing field. However, because the phenomena of "same object, different spectra" and "same spectrum, different objects" are severe in high-resolution remote sensing images, traditional classification strategies struggle with this challenge. In this paper, a remote sensing image scene classification model based on SENet and Inception-V3 is proposed, using deep learning together with a transfer learning strategy. The model first adds a dropout layer before the fully connected layer of the original Inception-V3 model to avoid over-fitting, and then embeds the SENet module into the Inception-V3 model to optimize network performance. Global average pooling is used as the squeeze operation, and two fully connected layers then form a bottleneck structure. The proposed model has stronger non-linearity, fits the complex correlations between channels better, and greatly reduces the number of parameters and the amount of computation. During training, a transfer learning strategy is adopted to make full use of existing models and knowledge and to improve training efficiency, finally producing the scene classification results. Experimental results on the AID high-resolution remote sensing scene images show that SE-Inception converges faster and trains more stably than the original Inception-V3. Compared with other traditional methods and deep learning networks, the improved model achieves a larger accuracy gain. © 2020 Z. L. Cai et al.
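The squeeze-and-excitation operation described in the abstract (global average pooling followed by a two-FC bottleneck whose output rescales the channels) is compact enough to show directly. The sketch below is a generic SE block; the reduction ratio is chosen arbitrarily rather than taken from the paper.

```python
# Hedged sketch: a squeeze-and-excitation (SE) block: global average pooling
# as the squeeze, then a two-FC bottleneck whose sigmoid output rescales
# each channel of the input feature map.
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)          # squeeze: global average pool
        self.excite = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):                            # x: (batch, C, H, W)
        b, c, _, _ = x.shape
        w = self.excite(self.pool(x).view(b, c))     # per-channel weights in (0, 1)
        return x * w.view(b, c, 1, 1)                # excitation: rescale channels

feat = torch.randn(2, 288, 35, 35)                   # e.g. an Inception mixed-block output
print(SEBlock(288)(feat).shape)                      # torch.Size([2, 288, 35, 35])
```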
Keyword :
Deep learning; Image classification; Learning systems; Remote sensing; Transfer learning
Cite:
GB/T 7714 | Cai, Z.L. , Weng, Q. , Ye, S.Z. . RESEARCH on SE-INCEPTION in HIGH-RESOLUTION REMOTE SENSING IMAGE CLASSIFICATION [C] . 2020 : 539-545 . |
MLA | Cai, Z.L. et al. "RESEARCH on SE-INCEPTION in HIGH-RESOLUTION REMOTE SENSING IMAGE CLASSIFICATION" . (2020) : 539-545 . |
APA | Cai, Z.L. , Weng, Q. , Ye, S.Z. . RESEARCH on SE-INCEPTION in HIGH-RESOLUTION REMOTE SENSING IMAGE CLASSIFICATION . (2020) : 539-545 . |
Abstract :
Image dehazing is a crucial image processing step for outdoor vision systems. However, images recovered through conventional image dehazing methods that use either haze-relevant priors or heuristic cues to estimate transmission maps may not lead to sufficiently accurate haze removal from single images. The most commonly observed effects are darkened and brightened artifacts in some areas of the recovered images, which cause considerable loss of fidelity, brightness, and sharpness. This paper develops a variational image dehazing method based on a color-transfer image dehazing model that is superior to conventional image dehazing methods. By creating a color-transfer image dehazing model to remove haze obscuration, and by estimating the model's coefficients with the devised convolutional neural network-based deep framework trained as a supervised learning strategy, image fidelity, brightness, and sharpness can be effectively restored. Quantitative and qualitative evaluations on both synthesized and real haze images verify that the proposed method outperforms existing single-image dehazing methods.
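One way to read the "coefficients of the model predicted by a CNN" idea is a per-channel affine color transfer applied to the hazy input; the sketch below illustrates that reading only. The restoration formula, the tiny coefficient network, and all names are assumptions, not the paper's actual model.

```python
# Hedged sketch: a per-channel affine "color transfer" restoration
# J_c = a_c * I_c + b_c, with the coefficients (a, b) predicted by a small CNN.
# This is only an illustrative reading of the abstract, not the paper's method.
import torch
import torch.nn as nn

class CoefficientNet(nn.Module):
    """Predicts per-channel gain a and offset b from the hazy image."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, 6))                        # 3 gains + 3 offsets

    def forward(self, hazy):                         # hazy: (batch, 3, H, W)
        coef = self.features(hazy)
        a, b = coef[:, :3], coef[:, 3:]
        return a.view(-1, 3, 1, 1), b.view(-1, 3, 1, 1)

def restore(hazy, net):
    a, b = net(hazy)
    return (a * hazy + b).clamp(0, 1)                # per-channel affine transfer

hazy = torch.rand(1, 3, 128, 128)
print(restore(hazy, CoefficientNet()).shape)         # torch.Size([1, 3, 128, 128])
```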
Keyword :
color transfer; deep learning; image dehazing
Cite:
GB/T 7714 | Yin, Jia-Li , Huang, Yi-Chi , Chen, Bo-Hao et al. Color Transferred Convolutional Neural Networks for Image Dehazing [J]. | IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY , 2020 , 30 (11) : 3957-3967 . |
MLA | Yin, Jia-Li et al. "Color Transferred Convolutional Neural Networks for Image Dehazing" . | IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY 30 . 11 (2020) : 3957-3967 . |
APA | Yin, Jia-Li , Huang, Yi-Chi , Chen, Bo-Hao , Ye, Shao-Zhen . Color Transferred Convolutional Neural Networks for Image Dehazing . | IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY , 2020 , 30 (11) , 3957-3967 . |
Abstract :
Deep learning computation is often used in single-image dehazing techniques for outdoor vision systems. Its development is restricted by the difficulty of providing a training set of degraded and ground-truth image pairs. In this paper, we develop a novel model that utilizes a cycle generative adversarial network through unsupervised learning to effectively remove the requirement of a haze/depth data set. Qualitative and quantitative experiments demonstrate that the proposed model outperforms existing state-of-the-art dehazing models when tested on both synthetic and real haze images. © 2019 IEEE.
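Since the key point of the abstract is that no paired haze/clear images are needed, the sketch below shows one alternating training step drawing from two independent image sets; only the haze-to-clear direction is shown, and the networks, optimizer settings, and loss weights are placeholders rather than the paper's.

```python
# Hedged sketch: one alternating training step for unpaired dehazing with a
# cycle GAN (haze -> clear direction only). All components are placeholders.
import itertools
import torch
import torch.nn as nn
import torch.nn.functional as nnf

def conv_net(out_ch):
    return nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(16, out_ch, 3, padding=1))

G_dehaze, G_rehaze = conv_net(3), conv_net(3)        # haze -> clear and clear -> haze
D_clear = nn.Sequential(conv_net(1), nn.AdaptiveAvgPool2d(1), nn.Flatten())

opt_g = torch.optim.Adam(itertools.chain(G_dehaze.parameters(),
                                         G_rehaze.parameters()), lr=2e-4)
opt_d = torch.optim.Adam(D_clear.parameters(), lr=2e-4)

hazy = torch.rand(4, 3, 64, 64)                      # drawn from a hazy-image set
clear = torch.rand(4, 3, 64, 64)                     # drawn from an unrelated clear set

# Discriminator step: real clear images vs. generated dehazed images.
fake_clear = G_dehaze(hazy).detach()
d_loss = (nnf.binary_cross_entropy_with_logits(D_clear(clear), torch.ones(4, 1)) +
          nnf.binary_cross_entropy_with_logits(D_clear(fake_clear), torch.zeros(4, 1)))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: fool the discriminator and stay cycle-consistent.
fake_clear = G_dehaze(hazy)
g_loss = (nnf.binary_cross_entropy_with_logits(D_clear(fake_clear), torch.ones(4, 1)) +
          10.0 * nnf.l1_loss(G_rehaze(fake_clear), hazy))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```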
Cite:
GB/T 7714 | Huang, Lu-Yao , Yin, Jia-Li , Chen, Bo-Hao et al. Towards Unsupervised Single Image Dehazing with Deep Learning [C] . 2019 : 2741-2745 . |
MLA | Huang, Lu-Yao et al. "Towards Unsupervised Single Image Dehazing with Deep Learning" . (2019) : 2741-2745 . |
APA | Huang, Lu-Yao , Yin, Jia-Li , Chen, Bo-Hao , Ye, Shao-Zhen . Towards Unsupervised Single Image Dehazing with Deep Learning . (2019) : 2741-2745 . |
Abstract :
Collaborative filtering algorithms have clear advantages in recommendation accuracy, while bandit algorithms are a strategy for addressing diversity needs. The COFIBA algorithm combines collaborative filtering with a bandit algorithm to balance diversity and accuracy in recommendation. However, COFIBA does not consider the influence of temporal characteristics, and as a cumulative-regret method it is relatively slow at improving diversity. This paper therefore proposes a learning-based model. On the one hand, it introduces a user-openness feature to achieve diverse recommendations and relies on an "exploration-feedback-update" strategy to adjust each user's openness. At the same time, a time factor is incorporated into the COFIBA algorithm as a feature, and the change of user interest over time is analyzed to maintain recommendation accuracy. Experimental results show that the combined algorithm with time and openness features significantly improves both the diversity and the accuracy of the results compared with COFIBA. © 2019 IEEE.
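The exploration-feedback-update idea can be illustrated with a very small bandit: the sketch below scales a UCB-style exploration bonus by a per-user "openness" value and applies exponential time decay to past feedback. The update rules and constants are illustrative assumptions, not the paper's algorithm.

```python
# Hedged sketch: a UCB-style bandit whose exploration bonus is scaled by a
# per-user "openness" value, with exponential time decay on past rewards.
import math
import random

class OpennessBandit:
    def __init__(self, n_items, openness=0.5, decay=0.99):
        self.n = n_items
        self.openness = openness                     # higher -> more exploration
        self.decay = decay                           # down-weights old feedback
        self.counts = [0.0] * n_items
        self.rewards = [0.0] * n_items
        self.t = 0

    def select(self):
        self.t += 1
        for i in range(self.n):                      # play each arm once first
            if self.counts[i] == 0:
                return i
        def ucb(i):
            mean = self.rewards[i] / self.counts[i]
            bonus = self.openness * math.sqrt(2 * math.log(self.t) / self.counts[i])
            return mean + bonus
        return max(range(self.n), key=ucb)

    def update(self, item, reward, liked_novel_item=False):
        # Time decay: older feedback counts for less.
        self.counts = [c * self.decay for c in self.counts]
        self.rewards = [r * self.decay for r in self.rewards]
        self.counts[item] += 1
        self.rewards[item] += reward
        # Feedback loop: positive reactions to novel items raise openness.
        self.openness = min(1.0, self.openness + 0.05) if liked_novel_item \
            else max(0.1, self.openness - 0.01)

bandit = OpennessBandit(n_items=5)
for _ in range(100):
    item = bandit.select()
    bandit.update(item, reward=random.random(), liked_novel_item=random.random() < 0.2)
```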
Keyword :
Big data; Cloud computing; Collaborative filtering; Signal filtering and prediction; Social networking (online)
Cite:
GB/T 7714 | Lin, Yuxiang , Ye, Shaozhen . The influence of bandit-based user openness feature on recommendation diversity and accuracy [C] . 2019 : 1624-1628 . |
MLA | Lin, Yuxiang et al. "The influence of bandit-based user openness feature on recommendation diversity and accuracy" . (2019) : 1624-1628 . |
APA | Lin, Yuxiang , Ye, Shaozhen . The influence of bandit-based user openness feature on recommendation diversity and accuracy . (2019) : 1624-1628 . |