Publication Search
A Lightweight Image Super-Resolution Method Based on a Multi-scale Spatial Adaptive Attention Network (基于多尺度空间自适应注意力网络的轻量级图像超分辨率方法)
Journal Article | 2025 , 38 (1) , 36-50 | 模式识别与人工智能 (Pattern Recognition and Artificial Intelligence)

Abstract :

To address the high model complexity and large parameter counts of existing image super-resolution methods, this paper proposes a lightweight image super-resolution method based on a Multi-scale Spatial Adaptive Attention Network (MSAAN). First, a Global Feature Modulation Module (GFM) is designed to learn global texture features, and a lightweight Multi-scale Feature Aggregation Module (MFA) is designed to adaptively aggregate local-to-global high-frequency spatial features. GFM and MFA are then fused into a Multi-scale Spatial Adaptive Attention Module (MSAA). Finally, a Feature Interactive Gated Feed-Forward Module (FIGFF) strengthens local information extraction while reducing channel redundancy. Extensive experiments show that MSAAN captures more comprehensive and finer-grained features, markedly improving reconstruction quality while remaining lightweight.

Keyword :

Transformer; convolutional neural network; multi-scale spatial adaptive attention; lightweight image super-resolution reconstruction

Cite:


GB/T 7714 黄峰 , 刘鸿伟 , 沈英 et al. 基于多尺度空间自适应注意力网络的轻量级图像超分辨率方法 [J]. | 模式识别与人工智能 , 2025 , 38 (1) : 36-50 .
MLA 黄峰 et al. "基于多尺度空间自适应注意力网络的轻量级图像超分辨率方法" . | 模式识别与人工智能 38 . 1 (2025) : 36-50 .
APA 黄峰 , 刘鸿伟 , 沈英 , 裘兆炳 , 陈丽琼 . 基于多尺度空间自适应注意力网络的轻量级图像超分辨率方法 . | 模式识别与人工智能 , 2025 , 38 (1) , 36-50 .


PFAN: progressive feature aggregation network for lightweight image super-resolution SCIE
Journal Article | 2025 | VISUAL COMPUTER
WoS CC Cited Count: 1

Abstract :

Image super-resolution (SR) has recently gained traction in various fields, including remote sensing, biomedicine, and video surveillance. Nonetheless, the majority of advancements in SR have been achieved by scaling the architecture of convolutional neural networks, which inevitably increases computational complexity. In addition, most existing SR models struggle to effectively capture high-frequency information, resulting in overly smooth reconstructed images. To address these issues, we propose a lightweight Progressive Feature Aggregation Network (PFAN), which leverages a Progressive Feature Aggregation Block to enhance different features through a progressive strategy. Specifically, we propose a Key Information Perception Module for capturing high-frequency details from the cross-spatial-channel dimension to recover edge features. Besides, we design a Local Feature Enhancement Module, which effectively combines multi-scale convolutions for local feature extraction and a Transformer for long-range dependency modeling. Through the progressive fusion of rich edge details and texture features, our PFAN achieves better reconstruction performance. Extensive experiments on five benchmark datasets demonstrate that PFAN outperforms state-of-the-art methods and strikes a better balance across SR performance, parameters, and computational complexity. Code is available at https://github.com/handsomeyxk/PFAN.

Keyword :

CNN; Key information perception; Local feature enhancement; Progressive feature aggregation network; Super-resolution; Transformer

Cite:


GB/T 7714 Chen, Liqiong , Yang, Xiangkun , Wang, Shu et al. PFAN: progressive feature aggregation network for lightweight image super-resolution [J]. | VISUAL COMPUTER , 2025 .
MLA Chen, Liqiong et al. "PFAN: progressive feature aggregation network for lightweight image super-resolution" . | VISUAL COMPUTER (2025) .
APA Chen, Liqiong , Yang, Xiangkun , Wang, Shu , Shen, Ying , Wu, Jing , Huang, Feng et al. PFAN: progressive feature aggregation network for lightweight image super-resolution . | VISUAL COMPUTER , 2025 .

Version :

PFAN: progressive feature aggregation network for lightweight image super-resolution Scopus
Journal Article | 2025 | Visual Computer
Feature enhanced cascading attention network for lightweight image super-resolution SCIE
Journal Article | 2025 , 15 (1) | SCIENTIFIC REPORTS
WoS CC Cited Count: 1

Abstract :

Attention mechanisms have been introduced to exploit deep-level information for image restoration by capturing feature dependencies. However, existing attention mechanisms often have limited perceptual capabilities and are incompatible with low-power devices due to computational resource constraints. Therefore, we propose a feature enhanced cascading attention network (FECAN) that introduces a novel feature enhanced cascading attention (FECA) mechanism, consisting of enhanced shuffle attention (ESA) and multi-scale large separable kernel attention (MLSKA). Specifically, ESA enhances high-frequency texture features in the feature maps, and MLSKA performs further extraction. The rich and fine-grained high-frequency information is extracted and fused from multiple perceptual layers, thus improving super-resolution (SR) performance. To validate FECAN's effectiveness, we evaluate it at different complexities by stacking different numbers of high-frequency enhancement modules (HFEM) that contain FECA. Extensive experiments on benchmark datasets demonstrate that FECAN outperforms state-of-the-art lightweight SR networks in terms of objective evaluation metrics and subjective visual quality. Specifically, at a ×4 scale with a 121 K model size, compared to the second-ranked MAN-tiny, FECAN achieves a 0.07 dB improvement in average peak signal-to-noise ratio (PSNR), while reducing network parameters by approximately 19% and FLOPs by 20%. This demonstrates a better trade-off between SR performance and model complexity.
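The PSNR figures quoted above (e.g., the 0.07 dB gain over MAN-tiny) follow the standard peak signal-to-noise ratio definition; a minimal NumPy sketch, illustrative only and not the authors' evaluation code:

```python
import numpy as np

def psnr(ref: np.ndarray, img: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio (dB) between a reference and a test image."""
    mse = np.mean((ref.astype(np.float64) - img.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

# Toy check: a uniform offset of 16 gray levels gives MSE = 256,
# so PSNR = 10 * log10(255**2 / 256) ≈ 24.05 dB.
reference = np.full((8, 8), 128.0)
degraded = reference + 16.0
print(round(psnr(reference, degraded), 2))  # 24.05
```

Sub-0.1 dB margins like the one reported are routine in lightweight SR comparisons, which is why papers pair PSNR with parameter and FLOP counts.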

Keyword :

Convolution neural network; Enhanced shuffle attention; Lightweight image super-resolution; Multi-scale large separable kernel attention

Cite:


GB/T 7714 Huang, Feng , Liu, Hongwei , Chen, Liqiong et al. Feature enhanced cascading attention network for lightweight image super-resolution [J]. | SCIENTIFIC REPORTS , 2025 , 15 (1) .
MLA Huang, Feng et al. "Feature enhanced cascading attention network for lightweight image super-resolution" . | SCIENTIFIC REPORTS 15 . 1 (2025) .
APA Huang, Feng , Liu, Hongwei , Chen, Liqiong , Shen, Ying , Yu, Min . Feature enhanced cascading attention network for lightweight image super-resolution . | SCIENTIFIC REPORTS , 2025 , 15 (1) .

Version :

Feature enhanced cascading attention network for lightweight image super-resolution Scopus
Journal Article | 2025 , 15 (1) | Scientific Reports
EAFF-Net: Efficient attention feature fusion network for dual-modality pedestrian detection SCIE
Journal Article | 2025 , 145 | INFRARED PHYSICS & TECHNOLOGY

Abstract :

A pedestrian detection network that combines infrared and visible image pairs can improve detection accuracy by fusing their complementary information, especially in challenging illumination conditions. However, most existing dual-modality methods focus only on the effectiveness of feature maps between different modalities while neglecting redundant information within the modalities. This oversight often degrades detection performance in low illumination conditions. This paper proposes an efficient attention feature fusion network (EAFF-Net), which suppresses redundant information and enhances the fusion of features from dual-modality images. Firstly, we design a dual-backbone network based on CSPDarknet53, combined with an efficient partial spatial pyramid pooling module (EPSPPM) to improve the efficiency of feature extraction in different modalities. Secondly, a feature attention fusion module (FAFM) is built to adaptively weaken modal redundant information and improve the fusion of features. Finally, a deep attention pyramid module (DAPM) is proposed to cascade multi-scale feature information and obtain more detailed features of small targets. The effectiveness of EAFF-Net in pedestrian detection has been demonstrated through experiments on two public datasets.

Keyword :

Deep learning; Feature attention; Multiscale features; Pedestrian detection; Visible and infrared images

Cite:


GB/T 7714 Shen, Ying , Xie, Xiaoyang , Wu, Jing et al. EAFF-Net: Efficient attention feature fusion network for dual-modality pedestrian detection [J]. | INFRARED PHYSICS & TECHNOLOGY , 2025 , 145 .
MLA Shen, Ying et al. "EAFF-Net: Efficient attention feature fusion network for dual-modality pedestrian detection" . | INFRARED PHYSICS & TECHNOLOGY 145 (2025) .
APA Shen, Ying , Xie, Xiaoyang , Wu, Jing , Chen, Liqiong , Huang, Feng . EAFF-Net: Efficient attention feature fusion network for dual-modality pedestrian detection . | INFRARED PHYSICS & TECHNOLOGY , 2025 , 145 .

Version :

EAFF-Net: Efficient attention feature fusion network for dual-modality pedestrian detection EI
Journal Article | 2025 , 145 | Infrared Physics and Technology
EAFF-Net: Efficient attention feature fusion network for dual-modality pedestrian detection Scopus
Journal Article | 2025 , 145 | Infrared Physics and Technology
Towards complex scenes: A deep learning-based camouflaged people detection method for snapshot multispectral images SCIE CSCD
Journal Article | 2024 , 34 , 269-281 | DEFENCE TECHNOLOGY
WoS CC Cited Count: 2

Abstract :

Camouflaged people are extremely expert in actively concealing themselves by effectively utilizing cover and the surrounding environment. Despite advancements in optical detection capabilities through imaging systems, including spectral, polarization, and infrared technologies, there is still a lack of effective real-time methods for accurately detecting small-size, highly effective camouflaged people in complex real-world scenes. Here, this study proposes a snapshot multispectral image-based camouflaged detection model, multispectral YOLO (MS-YOLO), which utilizes the SPD-Conv and SimAM modules to effectively represent targets and suppress background interference by exploiting the spatial-spectral target information. Besides, the study constructs the first real-shot multispectral camouflaged people dataset (MSCPD), which encompasses diverse scenes, target scales, and attitudes. To minimize information redundancy, MS-YOLO selects an optimal subset of 12 bands with strong feature representation and minimal inter-band correlation as input. Through experiments on the MSCPD, MS-YOLO achieves a mean Average Precision of 94.31% and real-time detection at 65 frames per second, which confirms the effectiveness and efficiency of our method in detecting camouflaged people in various typical desert and forest scenes. Our approach offers valuable support for improving the perception capabilities of unmanned aerial vehicles in detecting enemy forces and rescuing personnel on the battlefield.
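The band-selection step described above (12 bands with strong feature representation and low inter-band correlation) can be illustrated with a simple greedy heuristic. The paper's exact criterion is not given here, so the `select_bands` routine below is a hypothetical sketch under those two stated goals:

```python
import numpy as np

def select_bands(cube: np.ndarray, k: int) -> list[int]:
    """Greedy band selection: seed with the highest-variance band, then
    repeatedly add the band least correlated (on average) with those chosen.
    cube: (H, W, B) multispectral image; returns k band indices.
    Illustrative only -- MS-YOLO's actual selection rule may differ."""
    H, W, B = cube.shape
    flat = cube.reshape(-1, B).astype(np.float64)
    corr = np.abs(np.corrcoef(flat, rowvar=False))  # (B, B) inter-band correlations
    chosen = [int(np.argmax(flat.var(axis=0)))]     # strongest band first
    while len(chosen) < k:
        rest = [b for b in range(B) if b not in chosen]
        # next band = lowest mean correlation to the already-chosen set
        nxt = min(rest, key=lambda b: corr[b, chosen].mean())
        chosen.append(nxt)
    return chosen

rng = np.random.default_rng(0)
cube = rng.normal(size=(32, 32, 25))
# Make band 1 a near-duplicate of band 0: such a pair should rarely both survive.
cube[..., 1] = cube[..., 0] * 0.99 + 0.01 * rng.normal(size=(32, 32))
bands = select_bands(cube, 12)
print(len(bands), len(set(bands)))  # 12 12
```

Real pipelines typically weigh band informativeness (variance, entropy, or detection gain) against redundancy; the greedy trade-off above is just the simplest form of that idea.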

Keyword :

Camouflaged people detection; Complex remote sensing scenes; MS-YOLO; Optimal band selection; Snapshot multispectral imaging

Cite:


GB/T 7714 Wang, Shu , Zeng, Dawei , Xu, Yixuan et al. Towards complex scenes: A deep learning-based camouflaged people detection method for snapshot multispectral images [J]. | DEFENCE TECHNOLOGY , 2024 , 34 : 269-281 .
MLA Wang, Shu et al. "Towards complex scenes: A deep learning-based camouflaged people detection method for snapshot multispectral images" . | DEFENCE TECHNOLOGY 34 (2024) : 269-281 .
APA Wang, Shu , Zeng, Dawei , Xu, Yixuan , Yang, Gonghan , Huang, Feng , Chen, Liqiong . Towards complex scenes: A deep learning-based camouflaged people detection method for snapshot multispectral images . | DEFENCE TECHNOLOGY , 2024 , 34 , 269-281 .

Version :

Towards complex scenes: A deep learning-based camouflaged people detection method for snapshot multispectral images Scopus CSCD
Journal Article | 2024 , 34 , 269-281 | Defence Technology
Towards complex scenes: A deep learning-based camouflaged people detection method for snapshot multispectral images EI CSCD
Journal Article | 2024 , 34 , 269-281 | Defence Technology
Robust Unsupervised Multifeature Representation for Infrared Small Target Detection SCIE
Journal Article | 2024 , 17 , 10306-10323 | IEEE JOURNAL OF SELECTED TOPICS IN APPLIED EARTH OBSERVATIONS AND REMOTE SENSING

Abstract :

Infrared small target detection is critical to infrared search and tracking systems. However, accurate and robust detection remains challenging due to the scarcity of target information and the complexity of clutter interference. Existing methods have some limitations in feature representation, leading to poor detection performance in complex scenes. Especially when there are sharp edges near the target or in cluster multitarget detection, the "target suppression" phenomenon tends to occur. To address this issue, we propose a robust unsupervised multifeature representation (RUMFR) method for infrared small target detection. On the one hand, robust unsupervised spatial clustering (RUSC) is designed to improve the accuracy of feature extraction; on the other hand, pixel-level multiple feature representation is proposed to fully utilize the target detail information. Specifically, we first propose the center-weighted interclass difference measure (CWIDM) with a trilayer design for fast candidate target extraction. Note that CWIDM also guides the parameter settings of RUSC. Then, the RUSC-based model is constructed to accurately extract target features in complex scenes. By designing the parameter adaptive strategy and iterative clustering strategy, RUSC can robustly segment cluster multitargets from complex backgrounds. Finally, RUMFR that fuses pixel-level contrast, distribution, and directional gradient features is proposed for better target representation and clutter suppression. Extensive experimental results show that our method has stronger feature representation capability and achieves better detection performance than several state-of-the-art methods.

Keyword :

Clutter; Feature extraction; Fuses; Image edge detection; Infrared small target detection; Noise; Object detection; pixel-level multifeature representation; robust unsupervised spatial clustering (RUSC); Sparse matrices; "target suppression" phenomenon

Cite:


GB/T 7714 Chen, Liqiong , Wu, Tong , Zheng, Shuyuan et al. Robust Unsupervised Multifeature Representation for Infrared Small Target Detection [J]. | IEEE JOURNAL OF SELECTED TOPICS IN APPLIED EARTH OBSERVATIONS AND REMOTE SENSING , 2024 , 17 : 10306-10323 .
MLA Chen, Liqiong et al. "Robust Unsupervised Multifeature Representation for Infrared Small Target Detection" . | IEEE JOURNAL OF SELECTED TOPICS IN APPLIED EARTH OBSERVATIONS AND REMOTE SENSING 17 (2024) : 10306-10323 .
APA Chen, Liqiong , Wu, Tong , Zheng, Shuyuan , Qiu, Zhaobing , Huang, Feng . Robust Unsupervised Multifeature Representation for Infrared Small Target Detection . | IEEE JOURNAL OF SELECTED TOPICS IN APPLIED EARTH OBSERVATIONS AND REMOTE SENSING , 2024 , 17 , 10306-10323 .

Version :

Robust Unsupervised Multifeature Representation for Infrared Small Target Detection EI
Journal Article | 2024 , 17 , 10306-10323 | IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing
Robust Unsupervised Multi-Feature Representation for Infrared Small Target Detection Scopus
Journal Article | 2024 , 17 , 1-18 | IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing
Towards complex scenes: A deep learning-based camouflaged people detection method for snapshot multispectral images
Journal Article | 2024 , 34 (4) , 269-281 | 防务技术 (Defence Technology)


Cite:


GB/T 7714 Shu Wang , Dawei Zeng , Yixuan Xu et al. Towards complex scenes: A deep learning-based camouflaged people detection method for snapshot multispectral images [J]. | 防务技术 , 2024 , 34 (4) : 269-281 .
MLA Shu Wang et al. "Towards complex scenes: A deep learning-based camouflaged people detection method for snapshot multispectral images" . | 防务技术 34 . 4 (2024) : 269-281 .
APA Shu Wang , Dawei Zeng , Yixuan Xu , Gonghan Yang , Feng Huang , Liqiong Chen . Towards complex scenes: A deep learning-based camouflaged people detection method for snapshot multispectral images . | 防务技术 , 2024 , 34 (4) , 269-281 .


FEFN: Feature Enhancement Feedforward Network for Lightweight Object Detection in Remote Sensing Images SCIE
Journal Article | 2024 , 16 (13) | REMOTE SENSING
WoS CC Cited Count: 1

Abstract :

Object detection in remote sensing images has become a crucial component of computer vision. It has been employed in multiple domains, including military surveillance, maritime rescue, and military operations. However, the high density of small objects in remote sensing images makes it challenging for existing networks to accurately distinguish objects from shallow image features. These factors contribute to many object detection networks that produce missed detections and false alarms, particularly for densely arranged objects and small objects. To address the above problems, this paper proposes a feature enhancement feedforward network (FEFN), based on a lightweight channel feedforward module (LCFM) and a feature enhancement module (FEM). First, the FEFN captures shallow spatial information in images through a lightweight channel feedforward module that can extract the edge information of small objects such as ships. Next, it enhances the feature interaction and representation by utilizing a feature enhancement module that can achieve more accurate detection results for densely arranged objects and small objects. Finally, comparative experiments on two publicly challenging remote sensing datasets demonstrate the effectiveness of the proposed method.

Keyword :

channel feedforward; feature enhancement; object detection; remote sensing

Cite:


GB/T 7714 Wu, Jing , Ni, Rixiang , Chen, Zhenhua et al. FEFN: Feature Enhancement Feedforward Network for Lightweight Object Detection in Remote Sensing Images [J]. | REMOTE SENSING , 2024 , 16 (13) .
MLA Wu, Jing et al. "FEFN: Feature Enhancement Feedforward Network for Lightweight Object Detection in Remote Sensing Images" . | REMOTE SENSING 16 . 13 (2024) .
APA Wu, Jing , Ni, Rixiang , Chen, Zhenhua , Huang, Feng , Chen, Liqiong . FEFN: Feature Enhancement Feedforward Network for Lightweight Object Detection in Remote Sensing Images . | REMOTE SENSING , 2024 , 16 (13) .

Version :

FEFN: Feature Enhancement Feedforward Network for Lightweight Object Detection in Remote Sensing Images EI
Journal Article | 2024 , 16 (13) | Remote Sensing
FEFN: Feature Enhancement Feedforward Network for Lightweight Object Detection in Remote Sensing Images Scopus
Journal Article | 2024 , 16 (13) | Remote Sensing
Reparameterizable Multibranch Bottleneck Network for Lightweight Image Super-Resolution SCIE
Journal Article | 2023 , 23 (8) | SENSORS
WoS CC Cited Count: 3

Abstract :

Deployment of deep convolutional neural networks (CNNs) in single image super-resolution (SISR) for edge computing devices is mainly hampered by the huge computational cost. In this work, we propose a lightweight image super-resolution (SR) network based on a reparameterizable multibranch bottleneck module (RMBM). In the training phase, RMBM efficiently extracts high-frequency information by utilizing multibranch structures, including the bottleneck residual block (BRB), inverted bottleneck residual block (IBRB), and expand-squeeze convolution block (ESB). In the inference phase, the multibranch structures can be combined into a single 3×3 convolution to reduce the number of parameters without incurring any additional computational cost. Furthermore, a novel peak-structure-edge (PSE) loss is proposed to resolve the problem of oversmoothed reconstructed images while significantly improving image structure similarity. Finally, we optimize and deploy the algorithm on edge devices equipped with the rockchip neural processor unit (RKNPU) to achieve real-time SR reconstruction. Extensive experiments on natural image datasets and remote sensing image datasets show that our network outperforms advanced lightweight SR networks regarding objective evaluation metrics and subjective visual quality. The reconstruction results demonstrate that the proposed network can achieve higher SR performance with a 98.1 K model size, which can be effectively deployed to edge computing devices.
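The train-time-multibranch, inference-time-single-conv idea described above rests on the linearity of convolution. The sketch below merges a generic 3×3 branch and a 1×1 branch into one equivalent 3×3 kernel; it is a toy single-channel NumPy illustration of structural reparameterization, not the actual RMBM branches (BRB/IBRB/ESB):

```python
import numpy as np

def conv2d(x, k):
    """'Valid' 2D cross-correlation of a single-channel image with kernel k."""
    kh, kw = k.shape
    H, W = x.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

rng = np.random.default_rng(1)
x = rng.normal(size=(10, 10))
k3 = rng.normal(size=(3, 3))  # 3x3 branch kernel
k1 = rng.normal(size=(1, 1))  # 1x1 branch kernel

# Multibranch (training-time) output: 3x3 conv on the padded input
# plus a 1x1 conv on the same spatial grid.
pad = np.pad(x, 1)
y_branches = conv2d(pad, k3) + conv2d(x, k1)

# Reparameterized (inference-time) form: fold the 1x1 weight into the
# center tap of the 3x3 kernel, leaving a single equivalent convolution.
k_merged = k3.copy()
k_merged[1, 1] += k1[0, 0]
y_merged = conv2d(pad, k_merged)

assert np.allclose(y_branches, y_merged)  # identical outputs, one conv at inference
```

Because cross-correlation is linear in the kernel, folding the 1×1 weight into the center tap reproduces the two-branch output exactly, which is why the inference-time merge costs no accuracy.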

Keyword :

edge computing device; lightweight image super-resolution; PSE loss; reparameterizable multibranch bottleneck module

Cite:


GB/T 7714 Shen, Ying , Zheng, Weihuang , Huang, Feng et al. Reparameterizable Multibranch Bottleneck Network for Lightweight Image Super-Resolution [J]. | SENSORS , 2023 , 23 (8) .
MLA Shen, Ying et al. "Reparameterizable Multibranch Bottleneck Network for Lightweight Image Super-Resolution" . | SENSORS 23 . 8 (2023) .
APA Shen, Ying , Zheng, Weihuang , Huang, Feng , Wu, Jing , Chen, Liqiong . Reparameterizable Multibranch Bottleneck Network for Lightweight Image Super-Resolution . | SENSORS , 2023 , 23 (8) .

Version :

Reparameterizable Multibranch Bottleneck Network for Lightweight Image Super-Resolution EI
Journal Article | 2023 , 23 (8) | Sensors
Reparameterizable Multibranch Bottleneck Network for Lightweight Image Super-Resolution Scopus
Journal Article | 2023 , 23 (8) | Sensors
RSHAN: Image super-resolution network based on residual separation hybrid attention module SCIE
Journal Article | 2023 , 122 | ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE
WoS CC Cited Count: 11

Abstract :

Transformer has become one of the main architectures in deep learning, showing impressive performance in various vision tasks, especially image super-resolution (SR). However, due to the usage of high-resolution input images, most current Transformer-based image super-resolution models have a large number of parameters and high computational complexity. Moreover, some components employed in the Transformer may be redundant for SR tasks, which may limit SR performance. In this work, we propose an efficient and concise model for image super-resolution tasks termed Residual Separation Hybrid Attention Network (RSHAN), which aims to solve the problems of redundant components and insufficient ability to extract high-frequency information of the Transformer. Specifically, we present the Residual Separation Hybrid Attention Module (RSHAM), which fuses the local features extracted by the convolutional neural network (CNN) branch and the long-range dependencies extracted by Transformers to improve the performance of RSHAN. Extensive experiments on numerous benchmark datasets show that the proposed method outperforms state-of-the-art SR methods by up to 0.11 dB in the peak signal-to-noise ratio (PSNR) metric, while the computational complexity and inference time are reduced by 5% and 10%, respectively.

Keyword :

Convolutional neural network; Image super-resolution; Residual separation hybrid attention module; Transformer

Cite:


GB/T 7714 Shen, Ying , Zheng, Weihuang , Chen, Liqiong et al. RSHAN: Image super-resolution network based on residual separation hybrid attention module [J]. | ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE , 2023 , 122 .
MLA Shen, Ying et al. "RSHAN: Image super-resolution network based on residual separation hybrid attention module" . | ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE 122 (2023) .
APA Shen, Ying , Zheng, Weihuang , Chen, Liqiong , Huang, Feng . RSHAN: Image super-resolution network based on residual separation hybrid attention module . | ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE , 2023 , 122 .

Version :

RSHAN: Image super-resolution network based on residual separation hybrid attention module EI
Journal Article | 2023 , 122 | Engineering Applications of Artificial Intelligence
RSHAN: Image super-resolution network based on residual separation hybrid attention module Scopus
Journal Article | 2023 , 122 | Engineering Applications of Artificial Intelligence
Address: FZU Library (No.2 Xuyuan Road, Fuzhou, Fujian, PRC; Post Code: 350116) Contact: 0591-22865326
Copyright: FZU Library. Technical support: Beijing Aegean Software Co., Ltd. 闽ICP备05005463号-1