Query:
Scholar name: Shen Ying (沈英)
Abstract :
Attention mechanisms have been introduced to exploit deep-level information for image restoration by capturing feature dependencies. However, existing attention mechanisms often have limited perceptual capability and are ill-suited to low-power devices because of computational resource constraints. We therefore propose a feature enhanced cascading attention network (FECAN) built on a novel feature enhanced cascading attention (FECA) mechanism, which consists of enhanced shuffle attention (ESA) and multi-scale large separable kernel attention (MLSKA). Specifically, ESA enhances high-frequency texture features in the feature maps, and MLSKA extracts them further. Rich, fine-grained high-frequency information is extracted and fused from multiple perceptual layers, improving super-resolution (SR) performance. To validate FECAN's effectiveness, we evaluate it at different complexities by stacking different numbers of high-frequency enhancement modules (HFEM) that contain FECA. Extensive experiments on benchmark datasets demonstrate that FECAN outperforms state-of-the-art lightweight SR networks in both objective evaluation metrics and subjective visual quality. Specifically, at the ×4 scale with a 121K model size, FECAN achieves a 0.07 dB improvement in average peak signal-to-noise ratio (PSNR) over the second-ranked MAN-tiny while reducing network parameters by approximately 19% and FLOPs by 20%, demonstrating a better trade-off between SR performance and model complexity.
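The abstract quantifies the gain in average PSNR; for reference, a minimal numpy implementation of the standard PSNR definition (a textbook formula, not code from the paper):

```python
import numpy as np

def psnr(reference, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images."""
    mse = np.mean((reference.astype(np.float64) - reconstructed.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

# A uniform error of one grey level gives MSE = 1, i.e. 20*log10(255) ≈ 48.13 dB.
a = np.zeros((8, 8), dtype=np.uint8)
b = np.ones((8, 8), dtype=np.uint8)
print(round(psnr(a, b), 2))  # → 48.13
```

A 0.07 dB gain at equal output quality scales is small in absolute terms but is a typical margin between competing lightweight SR models.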
Keyword :
Convolution neural network; Enhanced shuffle attention; Lightweight image super-resolution; Multi-scale large separable kernel attention
Cite:
Copy from the list or Export to your reference management.
GB/T 7714 | Huang, Feng , Liu, Hongwei , Chen, Liqiong et al. Feature enhanced cascading attention network for lightweight image super-resolution [J]. | SCIENTIFIC REPORTS , 2025 , 15 (1) . |
MLA | Huang, Feng et al. "Feature enhanced cascading attention network for lightweight image super-resolution" . | SCIENTIFIC REPORTS 15 . 1 (2025) . |
APA | Huang, Feng , Liu, Hongwei , Chen, Liqiong , Shen, Ying , Yu, Min . Feature enhanced cascading attention network for lightweight image super-resolution . | SCIENTIFIC REPORTS , 2025 , 15 (1) . |
Abstract :
Image super-resolution (SR) has recently gained traction in various fields, including remote sensing, biomedicine, and video surveillance. Nonetheless, most advancements in SR have been achieved by scaling up convolutional neural network architectures, which inevitably increases computational complexity. In addition, most existing SR models struggle to capture high-frequency information effectively, resulting in overly smooth reconstructed images. To address this issue, we propose a lightweight Progressive Feature Aggregation Network (PFAN), which leverages a Progressive Feature Aggregation Block to enhance different features through a progressive strategy. Specifically, we propose a Key Information Perception Module that captures high-frequency details across the spatial and channel dimensions to recover edge features. We also design a Local Feature Enhancement Module, which combines multi-scale convolutions for local feature extraction with a Transformer for long-range dependency modeling. Through the progressive fusion of rich edge details and texture features, PFAN achieves better reconstruction performance. Extensive experiments on five benchmark datasets demonstrate that PFAN outperforms state-of-the-art methods and strikes a better balance among SR performance, parameters, and computational complexity. Code is available at https://github.com/handsomeyxk/PFAN.
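Lightweight SR networks of this kind conventionally finish with a sub-pixel (pixel-shuffle) upsampling step; a minimal numpy sketch of that standard rearrangement (a common-component assumption, not code from the paper):

```python
import numpy as np

def pixel_shuffle(x, r):
    """Rearrange (C*r^2, H, W) -> (C, H*r, W*r), the sub-pixel upsampling
    used at the tail of most SR networks (PyTorch PixelShuffle semantics)."""
    c2, h, w = x.shape
    c = c2 // (r * r)
    x = x.reshape(c, r, r, h, w)
    x = x.transpose(0, 3, 1, 4, 2)   # (C, H, r, W, r)
    return x.reshape(c, h * r, w * r)

x = np.arange(16.0).reshape(4, 2, 2)  # C*r^2 = 4, r = 2 -> one output channel
print(pixel_shuffle(x, 2).shape)  # → (1, 4, 4)
```

Each group of r² input channels is interleaved into an r×r block of output pixels, so the network can learn upsampling as an ordinary convolution.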
Keyword :
CNN; Key information perception; Local feature enhancement; Progressive feature aggregation network; Super-resolution; Transformer
Cite:
GB/T 7714 | Chen, Liqiong , Yang, Xiangkun , Wang, Shu et al. PFAN: progressive feature aggregation network for lightweight image super-resolution [J]. | VISUAL COMPUTER , 2025 . |
MLA | Chen, Liqiong et al. "PFAN: progressive feature aggregation network for lightweight image super-resolution" . | VISUAL COMPUTER (2025) . |
APA | Chen, Liqiong , Yang, Xiangkun , Wang, Shu , Shen, Ying , Wu, Jing , Huang, Feng et al. PFAN: progressive feature aggregation network for lightweight image super-resolution . | VISUAL COMPUTER , 2025 . |
Abstract :
To address the high model complexity and large parameter counts of existing image super-resolution methods, this paper proposes a lightweight image super-resolution method based on a Multi-scale Spatial Adaptive Attention Network (MSAAN). First, a Global Feature Modulation Module (GFM) is designed to learn global texture features. Meanwhile, a lightweight Multi-scale Feature Aggregation Module (MFA) is designed to adaptively aggregate local-to-global high-frequency spatial features. Then, GFM and MFA are fused into a Multi-scale Spatial Adaptive Attention Module (MSAA). Finally, a Feature Interactive Gated Feed-Forward Module (FIGFF) enhances local information extraction while reducing channel redundancy. Extensive experiments show that MSAAN captures more comprehensive and finer features, significantly improving reconstruction quality while remaining lightweight.
Keyword :
Transformer; Convolutional neural network; Multi-scale spatial adaptive attention; Lightweight image super-resolution
Cite:
GB/T 7714 | 黄峰 , 刘鸿伟 , 沈英 et al. 基于多尺度空间自适应注意力网络的轻量级图像超分辨率方法 [J]. | 模式识别与人工智能 , 2025 , 38 (1) : 36-50 . |
MLA | 黄峰 et al. "基于多尺度空间自适应注意力网络的轻量级图像超分辨率方法" . | 模式识别与人工智能 38 . 1 (2025) : 36-50 . |
APA | 黄峰 , 刘鸿伟 , 沈英 , 裘兆炳 , 陈丽琼 . 基于多尺度空间自适应注意力网络的轻量级图像超分辨率方法 . | 模式识别与人工智能 , 2025 , 38 (1) , 36-50 . |
Abstract :
Pedestrian detection networks that combine infrared and visible image pairs can improve detection accuracy by fusing their complementary information, especially under challenging illumination conditions. However, most existing dual-modality methods focus only on the effectiveness of feature maps across modalities while neglecting redundant information within them, which often degrades detection performance in low illumination. This paper proposes an efficient attention feature fusion network (EAFF-Net), which suppresses redundant information and enhances the fusion of features from dual-modality images. First, we design a dual-backbone network based on CSPDarknet53, combined with an efficient partial spatial pyramid pooling module (EPSPPM), to improve the efficiency of feature extraction in each modality. Second, a feature attention fusion module (FAFM) adaptively weakens redundant modal information to improve feature fusion. Finally, a deep attention pyramid module (DAPM) cascades multi-scale feature information to obtain more detailed features of small targets. The effectiveness of EAFF-Net for pedestrian detection is demonstrated through experiments on two public datasets.
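As a rough illustration of attention-weighted dual-modality fusion (a toy sketch of the general idea only; the paper's FAFM is not specified in the abstract):

```python
import numpy as np

def attention_fuse(feat_ir, feat_vis):
    """Toy channel-attention fusion of two modality feature maps (C, H, W).

    Global-average-pool each modality to a per-channel score, softmax the
    two scores channel-wise, and take the weighted sum."""
    s_ir = feat_ir.mean(axis=(1, 2))          # (C,) per-channel score, IR
    s_vis = feat_vis.mean(axis=(1, 2))        # (C,) per-channel score, visible
    e = np.exp(np.stack([s_ir, s_vis]))       # (2, C)
    w = e / e.sum(axis=0)                     # softmax over the two modalities
    return w[0, :, None, None] * feat_ir + w[1, :, None, None] * feat_vis

fused = attention_fuse(np.ones((4, 8, 8)), np.zeros((4, 8, 8)))
print(fused.shape)  # → (4, 8, 8)
```

Channels where one modality carries a stronger response dominate the fused map, which is the basic mechanism that lets a network down-weight redundant or noisy modal features.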
Keyword :
Deep learning; Feature attention; Multiscale features; Pedestrian detection; Visible and infrared images
Cite:
GB/T 7714 | Shen, Ying , Xie, Xiaoyang , Wu, Jing et al. EAFF-Net: Efficient attention feature fusion network for dual-modality pedestrian detection [J]. | INFRARED PHYSICS & TECHNOLOGY , 2025 , 145 . |
MLA | Shen, Ying et al. "EAFF-Net: Efficient attention feature fusion network for dual-modality pedestrian detection" . | INFRARED PHYSICS & TECHNOLOGY 145 (2025) . |
APA | Shen, Ying , Xie, Xiaoyang , Wu, Jing , Chen, Liqiong , Huang, Feng . EAFF-Net: Efficient attention feature fusion network for dual-modality pedestrian detection . | INFRARED PHYSICS & TECHNOLOGY , 2025 , 145 . |
Abstract :
Spectral polarization detection technology uses multidimensional target information to improve the accuracy and reliability of camouflaged target detection. The high-dimensional data produced by existing spectral polarization imaging systems are difficult to process in real time, and detection performance in complex scenes is poor. To address this, a camouflaged target detection algorithm based on a feature-band polarization imaging system is proposed. Feature bands are selected for the target scene, and a customized 751 nm narrowband filter is combined with a snapshot polarization array camera to build a feature-band polarization image acquisition system that captures target images in real time. A contrast enhancement and interleaved sequence fusion detection algorithm is proposed: polarization parameter images are designed to enhance target contrast in the feature-band polarization images; the contrast enhancement and interleaved sequence mapping results are fused to suppress background noise in the target images and further highlight target features; the camouflaged target is then extracted by threshold segmentation. Experimental results show that the proposed algorithm achieves a comprehensive evaluation metric F above 0.90 in different scenes with a detection speed of 20 frames/s, enabling fast and accurate detection of camouflaged targets in complex scenes.
Keyword :
Camouflaged target detection; Spectral polarization image; Contrast enhancement; Imaging system; Feature band
Cite:
GB/T 7714 | 沈英 , 黄伟达 , 周则兵 et al. 基于特征波段偏振成像的差异增强伪装目标检测 [J]. | 兵工学报 , 2024 , 45 (10) : 3488-3498 . |
MLA | 沈英 et al. "基于特征波段偏振成像的差异增强伪装目标检测" . | 兵工学报 45 . 10 (2024) : 3488-3498 . |
APA | 沈英 , 黄伟达 , 周则兵 , 黄峰 , 王舒 . 基于特征波段偏振成像的差异增强伪装目标检测 . | 兵工学报 , 2024 , 45 (10) , 3488-3498 . |
Abstract :
The spectral polarization detection technology utilizes multidimensional information to improve the accuracy and reliability of camouflage target detection. The high-dimensional data generated by existing spectral polarization imaging systems are difficult to process in real time, and the performance of spectral polarization detection in complex scenes is unsatisfactory. To address this concern, a contrast enhancement camouflage target detection algorithm based on feature band polarization imaging is proposed. Specific feature bands are selected for the target scenes, and a feature band polarization image acquisition system is constructed by combining a 751 nm narrowband filter with a snapshot polarized array camera to capture the feature band polarization images in real time. Additionally, a contrast enhancement and interleaved sequence fusion detection (CEISFD) algorithm is proposed. It enhances the target contrast in the feature band polarization images through the designed polarization parameter image. The CEISFD algorithm fuses the results of contrast enhancement and interleaved sequence mapping, thus suppressing the background noise in the target images and further highlighting the target features. The camouflage target is then extracted by threshold segmentation. Experimental results demonstrate that the proposed algorithm achieves a comprehensive evaluation metric F above 0.90 in various scenarios, and its detection rate reaches 20 FPS, enabling fast and accurate detection of camouflage targets in complex environments. © 2024 China Ordnance Industry Corporation. All rights reserved.
Keyword :
Camouflage; Image acquisition; Image enhancement; Image segmentation; Light polarization
Cite:
GB/T 7714 | Shen, Ying , Huang, Weida , Zhou, Zebing et al. Contrast Enhancement for Camouflage Target Detection Based on Feature Band Polarization Imaging [J]. | Acta Armamentarii , 2024 , 45 (10) : 3488-3498 . |
MLA | Shen, Ying et al. "Contrast Enhancement for Camouflage Target Detection Based on Feature Band Polarization Imaging" . | Acta Armamentarii 45 . 10 (2024) : 3488-3498 . |
APA | Shen, Ying , Huang, Weida , Zhou, Zebing , Huang, Feng , Wang, Shu . Contrast Enhancement for Camouflage Target Detection Based on Feature Band Polarization Imaging . | Acta Armamentarii , 2024 , 45 (10) , 3488-3498 . |
Keyword :
biomolecular interaction; biosensing; differential measurement; label-free detection; phase-sensitive interferometry
Cite:
GB/T 7714 | Shen, Ying , Huang, Zeyu , Huang, Feng et al. A self-reference interference sensor based on coherence multiplexing (vol 10, 880081, 2022) [J]. | FRONTIERS IN CHEMISTRY , 2024 , 12 . |
MLA | Shen, Ying et al. "A self-reference interference sensor based on coherence multiplexing (vol 10, 880081, 2022)" . | FRONTIERS IN CHEMISTRY 12 (2024) . |
APA | Shen, Ying , Huang, Zeyu , Huang, Feng , He, Yonghong , Ye, Ziling , Zhang, Hongjian et al. A self-reference interference sensor based on coherence multiplexing (vol 10, 880081, 2022) . | FRONTIERS IN CHEMISTRY , 2024 , 12 . |
Abstract :
Objective Infrared and visible light images exhibit significant differences in spectral properties due to their distinct imaging mechanisms. These differences often result in a high mismatch rate of feature points between the two types of images. Currently, widely used mismatch rejection algorithms, such as random sample consensus (RANSAC) and its variants, typically employ a strategy of random sampling combined with iterative optimization modeling for consistency fitting. However, when aligning heterogeneous images with high outlier rates, these methods often struggle to balance alignment accuracy and speed, leading to a high number of iterations or weak robustness. To address the relatively fixed positions of infrared and visible detectors in dual-modal imaging systems, we propose a spatial constraints priority sampling consensus (SC-PRISAC) algorithm. This algorithm leverages image space constraints to provide a robust inlier screening mechanism and an efficient sampling strategy, thus offering stable and reliable support for the fusion of infrared and visible image information. Methods In this study, a bispectral calibration target with both infrared and visible features is designed based on differences in material radiance. We achieve high-precision binocular camera calibration by accurately determining the internal and external parameters of the camera using a bilateral filtering pyramid. Based on this calibration, the spatial relationship between heterogeneous images is constructed using the epipolar constraint theorem and the principle of depth consistency. By implementing a priority sampling strategy based on the matching quality ranking of feature points, the number of iterations required by the algorithm is significantly reduced, allowing for precise and efficient elimination of mismatched feature points. 
Results and Discussions Our method’s calibration accuracy is assessed through the mean reprojection error (MRE), with comparative results presented in Table 1 and Fig. 7. The findings demonstrate a 58.2% improvement in calibration precision over the spot detection calibration technique provided by OpenCV, reducing the calibration error to 0.430 pixels. In the outlier rejection experiment, the progression of feature point matching across stages is detailed in Table 2. Following the introduction of spatial constraints, all valid matches are retained, and 27 outlier pairs are discarded. An additional 10 outlier pairs are further eliminated through preferential sampling strategies. To comprehensively evaluate the algorithm’s performance, several comparative methods, including RANSAC, degenerate sample consensus (DEGENSAC), MAGSAC++, graph-cut RANSAC (GC-RANSAC), Bayesian network for adaptive sample consensus (BANSAC), and a neural network-based ∇-RANSAC, are employed, with evaluations based on inlier counts, homography estimation errors, accuracy, and computational runtime as shown in Table 3 and Fig. 12. The proposed algorithm achieves a notably low homography estimation error of 7.857 with a runtime of just 1.919 ms, outperforming all comparative methods. This superior performance is primarily due to the SC-PRISAC algorithm’s robust spatial constraint mechanism, which effectively filters out outliers that contradict imaging principles, enabling more accurate sampling and fitting. In addition, the robustness of the proposed method and competing algorithms under complex scenarios is investigated by varying the proportion of outliers in initial datasets, as illustrated in Fig. 13. All algorithms perform satisfactorily when outlier ratios are below 45%. However, as the outlier ratio escalates, the precision of traditional methods like RANSAC deteriorates significantly. 
Remarkably, even at an extreme outlier ratio of 95%, SC-PRISAC maintains an accuracy rate of 70.2%, whereas the other algorithms' accuracies drop to between 12% and 49%. These results highlight the significant advantage of the proposed method in scenarios with high mismatch rates, demonstrating its superior applicability and effectiveness in aligning infrared and visible light images under challenging conditions. Conclusions To address the challenge of high mismatch rates in infrared and visible image alignment, we propose an algorithm for rejecting mismatched feature points based on optimizing camera spatial relations. By designing a bispectral calibration target and improving the circular centroid positioning algorithm, sub-pixel-level infrared and visible binocular camera calibration is achieved, with the calibration error kept within 0.430 pixels, significantly enhancing camera calibration accuracy. The algorithm integrates spatial constraints based on epipolar geometry and depth consistency to accurately exclude mismatched features that violate physical imaging laws, and reduces computational complexity through an intelligent sampling strategy that prioritizes high-quality feature points. Experimental results show that the proposed method achieves a homography estimation error of 7.857 and a runtime of 1.919 ms, and maintains excellent performance even under high outlier ratios, outperforming other mismatched-feature rejection algorithms and proving its superior generalization and reliability for infrared and visible image alignment. © 2024 Chinese Optical Society. All rights reserved.
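The epipolar constraint that SC-PRISAC's inlier screening relies on can be illustrated with a minimal numpy residual check (a textbook x₂ᵀFx₁ test, not the paper's implementation):

```python
import numpy as np

def epipolar_residuals(F, pts1, pts2):
    """|x2^T F x1| for each correspondence; pts are homogeneous, shape (N, 3).
    An exact match on calibrated cameras gives a residual of zero."""
    return np.abs(np.einsum("ni,ij,nj->n", pts2, F, pts1))

# For a pure horizontal-translation stereo rig, F = [e]_x with e = (1, 0, 0):
# correct matches lie on the same image row, so their residual is exactly 0.
F = np.array([[0., 0., 0.],
              [0., 0., -1.],
              [0., 1., 0.]])
x1 = np.array([[3., 5., 1.]])
x2_inlier = np.array([[7., 5., 1.]])   # same row -> residual 0
x2_outlier = np.array([[7., 9., 1.]])  # different row -> nonzero residual
print(epipolar_residuals(F, x1, x2_inlier)[0],
      epipolar_residuals(F, x1, x2_outlier)[0])
```

Thresholding this residual is what lets a spatially constrained sampler discard matches that contradict the calibrated camera geometry before any model fitting.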
Keyword :
Binoculars; Cameras; Image enhancement; Image matching; Image registration; Image sampling; Infrared detectors; Stereo image processing
Cite:
GB/T 7714 | Shen, Ying , Lin, Ye , Chen, Haitao et al. Algorithm for Eliminating Mismatched Feature Points in Heterogeneous Images Pairs Under Spatial Constraints [J]. | Acta Optica Sinica , 2024 , 44 (20) . |
MLA | Shen, Ying et al. "Algorithm for Eliminating Mismatched Feature Points in Heterogeneous Images Pairs Under Spatial Constraints" . | Acta Optica Sinica 44 . 20 (2024) . |
APA | Shen, Ying , Lin, Ye , Chen, Haitao , Wu, Jing , Huang, Feng . Algorithm for Eliminating Mismatched Feature Points in Heterogeneous Images Pairs Under Spatial Constraints . | Acta Optica Sinica , 2024 , 44 (20) . |
Abstract :
Lutein is a natural antioxidant with multiple benefits for human health, and heterotrophically cultured Chlorella offers both high lutein purity and high yield; lutein production in Chlorella depends mainly on two factors, biomass yield and lutein content. The traditional methods, optical density for biomass and high-performance liquid chromatography for lutein content, are complex to operate and poorly suited to timely measurement. For rapid, non-destructive measurement of lutein during Chlorella growth, a visible/near-infrared dual-mode snapshot multispectral imaging system was built: according to the spectral response regions, a visible-light camera acquires lutein spectral information and a near-infrared camera acquires biomass spectral information, yielding a dual-mode multispectral dataset containing both biomass and lutein-content information. To address characteristic-wavelength selection for the snapshot multispectral cameras used in the system, which have a wide spectral range but few wavelengths, an improved successive projections algorithm combined with sequential floating forward selection (mSPA) is proposed. After comparing mSPA with three conventional wavelength selection algorithms, the successive projections algorithm, a genetic algorithm, and random frog, multiple linear regression and extreme learning machine models were built on the selected characteristic wavelengths; finally, the best prediction models for biomass yield and lutein content were used to generate visual distribution maps of Chlorella lutein production. The results show that when the near-infrared and visible cameras are used to measure Chlorella biomass and lutein respectively, mSPA selects the fewest characteristic wavelengths and achieves the highest prediction accuracy. The best models for both biomass and lutein content were extreme learning machine models built on mSPA-selected wavelengths, with prediction-set coefficients of determination of 0.947 and 0.907, prediction-set root-mean-square errors of 0.698 g·L-1 and 0.077 mg·g-1, and residual predictive deviations of 3.535 and 3.338, respectively, indicating good predictive ability. The visual distribution maps enable intuitive monitoring of changes in Chlorella lutein production and support subsequent online detection of lutein production in practice. By evaluating ranked wavelengths one by one to select the best combination, mSPA avoids wrongly selected or missed characteristic wavelengths, improves model prediction accuracy, and offers a new wavelength-selection approach for snapshot multispectral imaging.
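The conventional successive projections algorithm (SPA) that mSPA improves upon can be sketched in a few lines of numpy (plain greedy SPA under standard assumptions; the paper's mSPA adds sequential floating forward selection, which is not reproduced here):

```python
import numpy as np

def spa_select(X, k, start=0):
    """Greedy successive-projections wavelength selection.

    X: (samples x wavelengths) spectra matrix. Starting from column `start`,
    repeatedly project all columns onto the orthogonal complement of the last
    chosen column and pick the column with the largest remaining norm, so the
    k selected wavelengths carry minimally redundant information."""
    P = X.astype(float).copy()
    selected = [start]
    for _ in range(k - 1):
        v = P[:, selected[-1]]
        P = P - np.outer(v, v @ P) / (v @ v)   # Gram-Schmidt deflation
        norms = np.linalg.norm(P, axis=0)
        norms[selected] = -1.0                 # never re-pick a chosen band
        selected.append(int(np.argmax(norms)))
    return selected

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 12))  # 30 samples, 12 candidate wavelengths
print(spa_select(X, 3))
```

The selected indices would then feed a downstream regression model (multiple linear regression or an extreme learning machine in the paper's pipeline).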
Keyword :
Lutein production; Chlorella; Snapshot multispectral; Characteristic wavelength
Cite:
GB/T 7714 | 沈英 , 占秀兴 , 黄春红 et al. 基于快照式多光谱特征波长的小球藻叶黄素产量快速测定 [J]. | 光谱学与光谱分析 , 2024 , 44 (8) : 2216-2223 . |
MLA | 沈英 et al. "基于快照式多光谱特征波长的小球藻叶黄素产量快速测定" . | 光谱学与光谱分析 44 . 8 (2024) : 2216-2223 . |
APA | 沈英 , 占秀兴 , 黄春红 , 谢友坪 , 郭翠霞 , 黄峰 . 基于快照式多光谱特征波长的小球藻叶黄素产量快速测定 . | 光谱学与光谱分析 , 2024 , 44 (8) , 2216-2223 . |
Abstract :
Polarization can improve the autonomous reconnaissance capability of unmanned aerial vehicles, but it is easily disturbed by variations in detection angle and target materials, which affects the robustness of polarization detection. This paper proposes YOLO-Polarization, a real-time low-altitude camouflaged target detection algorithm based on polarized images. A coded image fusing information from multiple polarization directions is used as input; a 3D convolution module extracts the relationships between images of different polarization directions, and a feature enhancement module (FEM) is introduced to further enhance the multi-level features. In addition, a cross-level feature aggregation network makes full use of feature information at different scales to aggregate features effectively, and detection results are output by combining multi-channel feature information. A dataset of polarized images of low-altitude camouflaged targets (PICO), covering 10 target types, is constructed. Experimental results on the PICO dataset show that the proposed method effectively detects camouflaged targets, with mAP0.5:0.95 reaching 52.0% and mAP0.5 reaching 91.5%. The detection rate reaches 55.0 frames/s, meeting the requirement of real-time detection. © 2024 China Ordnance Industry Corporation. All rights reserved.
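Polarization-parameter images in this setting are typically derived from Stokes parameters measured at four polarizer orientations; a minimal numpy sketch of the standard degree-of-linear-polarization (DoLP) formula (a textbook formula, not the paper's coded-image construction):

```python
import numpy as np

def dolp(i0, i45, i90, i135):
    """Degree of linear polarization from intensities at four polarizer
    orientations: S0 = (I0+I45+I90+I135)/2, S1 = I0-I90, S2 = I45-I135,
    DoLP = sqrt(S1^2 + S2^2) / S0."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)
    s1 = i0 - i90
    s2 = i45 - i135
    return np.sqrt(s1 ** 2 + s2 ** 2) / np.maximum(s0, 1e-8)  # guard div by 0

# Fully linearly polarized light at 0 degrees: I0=1, I90=0, I45=I135=0.5
print(dolp(np.array([1.0]), np.array([0.5]), np.array([0.0]), np.array([0.5]))[0])  # → 1.0
```

Man-made camouflaged surfaces often show higher DoLP than natural backgrounds, which is the physical cue such detection networks exploit.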
Keyword :
Aircraft detection; Antennas; Deep learning; Feature extraction; Image enhancement; Polarization; Signal detection; Unmanned aerial vehicles (UAV)
Cite:
GB/T 7714 | Shen, Ying , Liu, Xiancai , Wang, Shu et al. Real-time Detection of Low-altitude Camouflaged Targets Based on Polarization Encoded Images [J]. | Acta Armamentarii , 2024 , 45 (5) : 1374-1383 . |
MLA | Shen, Ying et al. "Real-time Detection of Low-altitude Camouflaged Targets Based on Polarization Encoded Images" . | Acta Armamentarii 45 . 5 (2024) : 1374-1383 . |
APA | Shen, Ying , Liu, Xiancai , Wang, Shu , Huang, Feng . Real-time Detection of Low-altitude Camouflaged Targets Based on Polarization Encoded Images . | Acta Armamentarii , 2024 , 45 (5) , 1374-1383 . |