Publication Search

Query:

Scholar name: Wu Jing

PFAN: progressive feature aggregation network for lightweight image super-resolution SCIE
Journal Article | 2025 | VISUAL COMPUTER
WoS CC Cited Count: 1

Abstract :

Image super-resolution (SR) has recently gained traction in various fields, including remote sensing, biomedicine, and video surveillance. Nonetheless, the majority of advancements in SR have been achieved by scaling the architecture of convolutional neural networks, which inevitably increases computational complexity. In addition, most existing SR models struggle to effectively capture high-frequency information, resulting in overly smooth reconstructed images. To address this issue, we propose a lightweight Progressive Feature Aggregation Network (PFAN), which leverages a Progressive Feature Aggregation Block to enhance different features through a progressive strategy. Specifically, we propose a Key Information Perception Module for capturing high-frequency details across the spatial-channel dimension to recover edge features. Besides, we design a Local Feature Enhancement Module, which effectively combines multi-scale convolutions for local feature extraction with a Transformer for long-range dependency modeling. Through the progressive fusion of rich edge details and texture features, our PFAN achieves better reconstruction performance. Extensive experiments on five benchmark datasets demonstrate that PFAN outperforms state-of-the-art methods and strikes a better balance across SR performance, parameters, and computational complexity. Code is available at https://github.com/handsomeyxk/PFAN.
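To make the progressive strategy concrete, here is a minimal PyTorch sketch of a block that refines features in successive stages and then aggregates all stage outputs; the class name, stage count, and layer choices are illustrative assumptions, not the released PFAN code (see the GitHub link above for the actual implementation).

```python
import torch
import torch.nn as nn

class ProgressiveFeatureAggregationBlock(nn.Module):
    """Refine features in successive stages, then fuse all stage outputs."""

    def __init__(self, channels: int, stages: int = 3):
        super().__init__()
        self.stages = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=1),
                nn.LeakyReLU(0.1, inplace=True),
            )
            for _ in range(stages)
        ])
        # 1x1 conv fuses the concatenated stage outputs back to `channels`
        self.fuse = nn.Conv2d(channels * stages, channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats, cur = [], x
        for stage in self.stages:
            cur = stage(cur)          # each stage refines the previous output
            feats.append(cur)
        # progressive aggregation: concatenate every stage, fuse, add residual
        return x + self.fuse(torch.cat(feats, dim=1))

x = torch.randn(1, 64, 48, 48)
print(ProgressiveFeatureAggregationBlock(64)(x).shape)  # torch.Size([1, 64, 48, 48])
```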

Keyword :

CNN; Key information perception; Local feature enhancement; Progressive feature aggregation network; Super-resolution; Transformer

Cite:


GB/T 7714 Chen, Liqiong, Yang, Xiangkun, Wang, Shu, et al. PFAN: progressive feature aggregation network for lightweight image super-resolution [J]. | VISUAL COMPUTER, 2025.
MLA Chen, Liqiong, et al. "PFAN: progressive feature aggregation network for lightweight image super-resolution". | VISUAL COMPUTER (2025).
APA Chen, Liqiong, Yang, Xiangkun, Wang, Shu, Shen, Ying, Wu, Jing, Huang, Feng, et al. PFAN: progressive feature aggregation network for lightweight image super-resolution. | VISUAL COMPUTER, 2025.

Version :

PFAN: progressive feature aggregation network for lightweight image super-resolution Scopus
Journal Article | 2025 | Visual Computer
EAFF-Net: Efficient attention feature fusion network for dual-modality pedestrian detection SCIE
Journal Article | 2025, 145 | INFRARED PHYSICS & TECHNOLOGY

Abstract :

Pedestrian detection networks that combine infrared and visible image pairs can improve detection accuracy by fusing complementary information, especially under challenging illumination conditions. However, most existing dual-modality methods focus only on the effectiveness of feature maps between different modalities while neglecting redundant information within the modalities. This oversight often degrades detection performance in low illumination. This paper proposes an efficient attention feature fusion network (EAFF-Net), which suppresses redundant information and enhances the fusion of features from dual-modality images. Firstly, we design a dual-backbone network based on CSPDarknet53 and combine it with an efficient partial spatial pyramid pooling module (EPSPPM) to improve the efficiency of feature extraction in different modalities. Secondly, a feature attention fusion module (FAFM) is built to adaptively weaken modal redundant information and improve the fusion of features. Finally, a deep attention pyramid module (DAPM) is proposed to cascade multi-scale feature information and obtain more detailed features of small targets. The effectiveness of EAFF-Net in pedestrian detection has been demonstrated through experiments on two public datasets.
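As a rough illustration of the adaptive-weighting idea behind the FAFM, the sketch below gates concatenated visible/infrared feature maps with learned channel attention before fusing them. All names and layer choices here are assumptions; the paper's exact module design is not reproduced.

```python
import torch
import torch.nn as nn

class AttentionFeatureFusion(nn.Module):
    """Gate concatenated visible/IR features with channel attention, then fuse."""

    def __init__(self, channels: int):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),              # global channel statistics
            nn.Conv2d(2 * channels, channels, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, 2 * channels, 1),
            nn.Sigmoid(),                         # per-channel weights in (0, 1)
        )
        self.fuse = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, vis: torch.Tensor, ir: torch.Tensor) -> torch.Tensor:
        cat = torch.cat([vis, ir], dim=1)
        # low weights suppress redundant channels before the fusion conv
        return self.fuse(cat * self.gate(cat))

vis, ir = torch.randn(1, 64, 80, 80), torch.randn(1, 64, 80, 80)
print(AttentionFeatureFusion(64)(vis, ir).shape)  # torch.Size([1, 64, 80, 80])
```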

Keyword :

Deep learning; Feature attention; Multiscale features; Pedestrian detection; Visible and infrared images

Cite:


GB/T 7714 Shen, Ying, Xie, Xiaoyang, Wu, Jing, et al. EAFF-Net: Efficient attention feature fusion network for dual-modality pedestrian detection [J]. | INFRARED PHYSICS & TECHNOLOGY, 2025, 145.
MLA Shen, Ying, et al. "EAFF-Net: Efficient attention feature fusion network for dual-modality pedestrian detection". | INFRARED PHYSICS & TECHNOLOGY 145 (2025).
APA Shen, Ying, Xie, Xiaoyang, Wu, Jing, Chen, Liqiong, Huang, Feng. EAFF-Net: Efficient attention feature fusion network for dual-modality pedestrian detection. | INFRARED PHYSICS & TECHNOLOGY, 2025, 145.

Version :

EAFF-Net: Efficient attention feature fusion network for dual-modality pedestrian detection EI
Journal Article | 2025, 145 | Infrared Physics and Technology
EAFF-Net: Efficient attention feature fusion network for dual-modality pedestrian detection Scopus
Journal Article | 2025, 145 | Infrared Physics and Technology
Infrared Image Super-Resolution Network Utilizing the Enhanced Transformer and U-Net Scopus
Journal Article | 2024, 24 (14) | Sensors
SCOPUS Cited Count: 1

Abstract :

Infrared images hold significant value in applications such as remote sensing and fire safety. However, infrared detectors often face the problem of high hardware costs, which limits their widespread use. Advancements in deep learning have spurred innovative approaches to image super-resolution (SR), but comparatively few efforts have been dedicated to the exploration of infrared images. To address this, we design the Residual Swin Transformer and Average Pooling Block (RSTAB) and propose the SwinAIR, which can effectively extract and fuse the diverse frequency features in infrared images and achieve superior SR reconstruction performance. By further integrating SwinAIR with U-Net, we propose the SwinAIR-GAN for real infrared image SR reconstruction. SwinAIR-GAN extends the degradation space to better simulate the degradation process of real infrared images. Additionally, it incorporates spectral normalization, dropout, and artifact discrimination loss to reduce the potential image artifacts. Qualitative and quantitative evaluations on various datasets confirm the effectiveness of our proposed method in reconstructing realistic textures and details of infrared images. © 2024 by the authors.
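A loose sketch of the RSTAB idea described above: pair a self-attention branch (frequency-diverse detail) with an average-pooling branch (low-frequency structure) inside one residual block. This is an assumption-laden illustration, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class ResidualAttnAvgPoolBlock(nn.Module):
    """Residual block combining self-attention with an average-pooling branch."""

    def __init__(self, channels: int, heads: int = 4):
        super().__init__()
        self.norm = nn.LayerNorm(channels)
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        # average pooling keeps low-frequency structure; 3x3 conv restores detail
        self.pool_branch = nn.Sequential(
            nn.AvgPool2d(2),
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        tokens = self.norm(x.flatten(2).transpose(1, 2))   # (B, H*W, C)
        attn_out, _ = self.attn(tokens, tokens, tokens)
        attn_map = attn_out.transpose(1, 2).reshape(b, c, h, w)
        return x + attn_map + self.pool_branch(x)          # residual fusion

x = torch.randn(1, 32, 24, 24)
print(ResidualAttnAvgPoolBlock(32)(x).shape)  # torch.Size([1, 32, 24, 24])
```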

Keyword :

generative adversarial network; image super-resolution; infrared image; transformer

Cite:


GB/T 7714 Huang, F., Li, Y., Ye, X., et al. Infrared Image Super-Resolution Network Utilizing the Enhanced Transformer and U-Net [J]. | Sensors, 2024, 24 (14).
MLA Huang, F., et al. "Infrared Image Super-Resolution Network Utilizing the Enhanced Transformer and U-Net". | Sensors 24.14 (2024).
APA Huang, F., Li, Y., Ye, X., Wu, J. Infrared Image Super-Resolution Network Utilizing the Enhanced Transformer and U-Net. | Sensors, 2024, 24 (14).

Version :

FEFN: Feature Enhancement Feedforward Network for Lightweight Object Detection in Remote Sensing Images EI
Journal Article | 2024, 16 (13) | Remote Sensing

Abstract :

Object detection in remote sensing images has become a crucial component of computer vision. It has been employed in multiple domains, including military surveillance, maritime rescue, and military operations. However, the high density of small objects in remote sensing images makes it challenging for existing networks to accurately distinguish objects from shallow image features. As a result, many object detection networks produce missed detections and false alarms, particularly for densely arranged and small objects. To address these problems, this paper proposes a feature enhancement feedforward network (FEFN), based on a lightweight channel feedforward module (LCFM) and a feature enhancement module (FEM). First, the FEFN captures shallow spatial information in images through a lightweight channel feedforward module that can extract the edge information of small objects such as ships. Next, it enhances feature interaction and representation through a feature enhancement module that achieves more accurate detection results for densely arranged and small objects. Finally, comparative experiments on two challenging public remote sensing datasets demonstrate the effectiveness of the proposed method. © 2024 by the authors.
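One plausible reading of a "lightweight channel feedforward module" is a pointwise channel MLP wrapped around a depthwise 3x3 convolution that cheaply captures shallow edge cues. The sketch below is exactly such a guess, not the paper's actual LCFM.

```python
import torch
import torch.nn as nn

class LightweightChannelFeedforward(nn.Module):
    """Pointwise channel MLP with a cheap depthwise spatial step in between."""

    def __init__(self, channels: int, expansion: int = 2):
        super().__init__()
        hidden = channels * expansion
        self.body = nn.Sequential(
            nn.Conv2d(channels, hidden, 1),                          # expand channels
            nn.Conv2d(hidden, hidden, 3, padding=1, groups=hidden),  # depthwise: edge cues
            nn.GELU(),
            nn.Conv2d(hidden, channels, 1),                          # project back
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.body(x)  # residual keeps shallow information flowing

print(LightweightChannelFeedforward(64)(torch.randn(1, 64, 40, 40)).shape)
```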

Keyword :

Feature extraction; Image enhancement; Military photography; Object detection; Object recognition; Remote sensing

Cite:


GB/T 7714 Wu, Jing, Ni, Rixiang, Chen, Zhenhua, et al. FEFN: Feature Enhancement Feedforward Network for Lightweight Object Detection in Remote Sensing Images [J]. | Remote Sensing, 2024, 16 (13).
MLA Wu, Jing, et al. "FEFN: Feature Enhancement Feedforward Network for Lightweight Object Detection in Remote Sensing Images". | Remote Sensing 16.13 (2024).
APA Wu, Jing, Ni, Rixiang, Chen, Zhenhua, Huang, Feng, Chen, Liqiong. FEFN: Feature Enhancement Feedforward Network for Lightweight Object Detection in Remote Sensing Images. | Remote Sensing, 2024, 16 (13).

Version :

Multi-Level Feature Fusion Network for Lightweight Stereo Image Super-Resolution CPCI-S
Journal Article | 2024, 6489-6498 | CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS, CVPRW

Abstract :

Stereo image super-resolution utilizes the cross-view complementary information brought by the disparity between left and right perspective images to reconstruct higher-quality images. Many methods focus on cascading feature extraction modules and cross-view feature interaction modules to exploit the information in stereo images, but this adds a great number of network parameters and much structural redundancy. To facilitate the application of stereo image super-resolution in downstream tasks, we propose an efficient Multi-Level Feature Fusion Network for Lightweight Stereo Image Super-Resolution (MFFSSR). Specifically, MFFSSR utilizes the Hybrid Attention Feature Extraction Block (HAFEB) to extract multi-level intra-view features. Using a channel separation strategy, HAFEB can efficiently interact with the embedded cross-view interaction module. This structural configuration efficiently mines features inside each view while improving the efficiency of cross-view information sharing; hence, it reconstructs image details and textures more accurately. Extensive experiments demonstrate the effectiveness of MFFSSR: we achieve superior performance with fewer parameters. The source code is available at https://github.com/KarosLYX/MFFSSR.
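The channel separation strategy can be pictured as follows: half of each view's channels stay intra-view while the other half is mixed with the opposite view for cross-view interaction. The sketch below is illustrative only (layer choices are assumptions; the real HAFEB is in the linked repository).

```python
import torch
import torch.nn as nn

class ChannelSeparatedCrossView(nn.Module):
    """Half of each view's channels stay intra-view; the other half mixes views."""

    def __init__(self, channels: int):
        super().__init__()
        assert channels % 2 == 0
        half = channels // 2
        self.intra = nn.Conv2d(half, half, 3, padding=1)   # intra-view path
        self.cross = nn.Conv2d(channels, half, 1)          # mixes both views
        self.merge = nn.Conv2d(channels, channels, 1)

    def forward(self, left: torch.Tensor, right: torch.Tensor):
        l_a, l_b = left.chunk(2, dim=1)    # channel separation per view
        r_a, r_b = right.chunk(2, dim=1)
        l_mix = self.cross(torch.cat([l_b, r_b], dim=1))
        r_mix = self.cross(torch.cat([r_b, l_b], dim=1))
        l_out = left + self.merge(torch.cat([self.intra(l_a), l_mix], dim=1))
        r_out = right + self.merge(torch.cat([self.intra(r_a), r_mix], dim=1))
        return l_out, r_out

left, right = torch.randn(1, 64, 30, 90), torch.randn(1, 64, 30, 90)
l_sr, r_sr = ChannelSeparatedCrossView(64)(left, right)
print(l_sr.shape, r_sr.shape)  # torch.Size([1, 64, 30, 90]) twice
```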

Cite:


GB/T 7714 Li, Yunxiang, Zou, Wenbin, Wei, Qiaomu, et al. Multi-Level Feature Fusion Network for Lightweight Stereo Image Super-Resolution [J]. | CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS, CVPRW, 2024: 6489-6498.
MLA Li, Yunxiang, et al. "Multi-Level Feature Fusion Network for Lightweight Stereo Image Super-Resolution". | CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS, CVPRW (2024): 6489-6498.
APA Li, Yunxiang, Zou, Wenbin, Wei, Qiaomu, Huang, Feng, Wu, Jing. Multi-Level Feature Fusion Network for Lightweight Stereo Image Super-Resolution. | CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS, CVPRW, 2024, 6489-6498.

Version :

Multi-Level Feature Fusion Network for Lightweight Stereo Image Super-Resolution EI
Conference Paper | 2024, 6489-6498
Multi-Level Feature Fusion Network for Lightweight Stereo Image Super-Resolution Scopus
Other | 2024, 6489-6498 | IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops
Algorithm for Eliminating Mismatched Feature Points in Heterogeneous Images Pairs Under Spatial Constraints EI
Journal Article | 2024, 44 (20) | Acta Optica Sinica

Abstract :

Objective: Infrared and visible light images exhibit significant differences in spectral properties due to their distinct imaging mechanisms. These differences often result in a high mismatch rate of feature points between the two types of images. Currently, widely used mismatch rejection algorithms, such as random sample consensus (RANSAC) and its variants, typically employ a strategy of random sampling combined with iterative optimization modeling for consistency fitting. However, when aligning heterogeneous images with high outlier rates, these methods often struggle to balance alignment accuracy and speed, leading to a high number of iterations or weak robustness. Leveraging the relatively fixed positions of infrared and visible detectors in dual-modal imaging systems, we propose a spatial constraints priority sampling consensus (SC-PRISAC) algorithm. This algorithm uses image space constraints to provide a robust inlier screening mechanism and an efficient sampling strategy, offering stable and reliable support for the fusion of infrared and visible image information.

Methods: In this study, a bispectral calibration target with both infrared and visible features is designed based on differences in material radiance. We achieve high-precision binocular camera calibration by accurately determining the internal and external parameters of the cameras using a bilateral filtering pyramid. Based on this calibration, the spatial relationship between heterogeneous images is constructed using the epipolar constraint theorem and the principle of depth consistency. A priority sampling strategy based on the matching-quality ranking of feature points significantly reduces the number of iterations the algorithm requires, allowing precise and efficient elimination of mismatched feature points.

Results and Discussions: The method's calibration accuracy is assessed through the mean reprojection error (MRE), with comparative results presented in Table 1 and Fig. 7. The findings demonstrate a 58.2% improvement in calibration precision over the spot-detection calibration technique provided by OpenCV, reducing the calibration error to 0.430 pixels. In the outlier rejection experiment, the progression of feature point matching across stages is detailed in Table 2. After introducing spatial constraints, all valid matches are retained and 27 outlier pairs are discarded; an additional 10 outlier pairs are eliminated through the preferential sampling strategy. To comprehensively evaluate the algorithm's performance, several comparative methods, including RANSAC, degenerate sample consensus (DEGENSAC), MAGSAC++, graph-cut RANSAC (GC-RANSAC), Bayesian network for adaptive sample consensus (BANSAC), and the neural network-based ∇-RANSAC, are employed, with evaluations based on inlier counts, homography estimation errors, accuracy, and runtime, as shown in Table 3 and Fig. 12. The proposed algorithm achieves a notably low homography estimation error of 7.857 with a runtime of just 1.919 ms, outperforming all comparative methods. This superior performance is primarily due to the SC-PRISAC algorithm's robust spatial constraint mechanism, which effectively filters out outliers that contradict imaging principles, enabling more accurate sampling and fitting. In addition, the robustness of the proposed method and competing algorithms under complex scenarios is investigated by varying the proportion of outliers in the initial datasets, as illustrated in Fig. 13. All algorithms perform satisfactorily when outlier ratios are below 45%. However, as the outlier ratio escalates, the precision of traditional methods like RANSAC deteriorates significantly. Remarkably, even at an extreme outlier ratio of 95%, SC-PRISAC maintains an accuracy rate of 70.2%, whereas other algorithms' accuracies drop to between 12% and 49%. These results highlight the significant advantage of the proposed method in scenarios with high mismatch rates, demonstrating its applicability and effectiveness in aligning infrared and visible light images under challenging conditions.

Conclusions: To address the challenge of high mismatch rates in infrared and visible image alignment, we propose an algorithm for rejecting mismatched feature points based on optimizing camera spatial relations. By designing a bispectral calibration target and improving the circular-centroid positioning algorithm, sub-pixel infrared and visible binocular camera calibration is achieved, with the calibration error kept within 0.430 pixels, significantly enhancing camera calibration accuracy. The algorithm integrates spatial constraints based on epipolar geometry and depth consistency to exclude mismatched features that violate physical imaging laws, and reduces computational complexity through a sampling strategy that prioritizes high-quality feature points. Experimental results show that the proposed method achieves a homography estimation error of 7.857 and a processing time of 1.919 ms, and maintains excellent performance even under high outlier ratios, outperforming other mismatch rejection algorithms and proving its generalization and reliability for infrared and visible image alignment. © 2024 Chinese Optical Society. All rights reserved.
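The priority-sampling idea (quality-ranked correspondences are tried before low-quality ones, so a good model is found in fewer iterations) can be demonstrated on a toy line-fitting problem. The NumPy sketch below omits SC-PRISAC's spatial (epipolar/depth) constraints, and the quality scores are assumed inputs rather than anything from the paper.

```python
import numpy as np

def priority_ransac_line(points, quality, iters=200, tol=0.05, seed=None):
    """Fit y = a*x + b, drawing minimal samples biased toward high-quality points."""
    rng = np.random.default_rng(seed)
    order = np.argsort(-quality)                     # best-ranked candidates first
    weights = 1.0 / (np.arange(len(points)) + 1.0)   # rank-based sampling weights
    probs = weights / weights.sum()
    best_model, best_inliers = None, -1
    for _ in range(iters):
        i, j = rng.choice(order, size=2, replace=False, p=probs)
        (x1, y1), (x2, y2) = points[i], points[j]
        if np.isclose(x1, x2):
            continue                                 # degenerate minimal sample
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        inliers = np.sum(np.abs(points[:, 1] - (a * points[:, 0] + b)) < tol)
        if inliers > best_inliers:
            best_model, best_inliers = (a, b), inliers
    return best_model, best_inliers

rng = np.random.default_rng(0)
xs = rng.uniform(0, 1, 100)
pts = np.stack([xs, 2 * xs + 0.5], axis=1)           # ground truth: y = 2x + 0.5
pts[60:] += rng.normal(0, 1, (40, 2))                # 40% outliers
q = np.concatenate([np.ones(60), np.zeros(40)])      # assumed match-quality scores
print(priority_ransac_line(pts, q, seed=0))
```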

Keyword :

Binoculars; Cameras; Image enhancement; Image matching; Image registration; Image sampling; Infrared detectors; Stereo image processing

Cite:


GB/T 7714 Shen, Ying, Lin, Ye, Chen, Haitao, et al. Algorithm for Eliminating Mismatched Feature Points in Heterogeneous Images Pairs Under Spatial Constraints [J]. | Acta Optica Sinica, 2024, 44 (20).
MLA Shen, Ying, et al. "Algorithm for Eliminating Mismatched Feature Points in Heterogeneous Images Pairs Under Spatial Constraints". | Acta Optica Sinica 44.20 (2024).
APA Shen, Ying, Lin, Ye, Chen, Haitao, Wu, Jing, Huang, Feng. Algorithm for Eliminating Mismatched Feature Points in Heterogeneous Images Pairs Under Spatial Constraints. | Acta Optica Sinica, 2024, 44 (20).

Version :

Algorithm for Eliminating Mismatched Feature Points in Heterogeneous Images Pairs Under Spatial Constraints; [空间约束下异源图像误匹配特征点剔除算法] Scopus
Journal Article | 2024, 44 (20) | Acta Optica Sinica
Color-preserving visible and near-infrared image fusion for removing fog SCIE
Journal Article | 2024, 138 | INFRARED PHYSICS & TECHNOLOGY
WoS CC Cited Count: 2

Abstract :

With the unavailability of scene depth information, single-sensor dehazing methods based on deep learning or prior information do not work effectively in dense foggy scenes. An effective approach is to remove the dense fog by fusing visible and near-infrared images. However, current dehazing algorithms based on near-infrared and visible images suffer from color distortion and information loss. To overcome these challenges, we propose a color-preserving dehazing method that fuses near-infrared and visible images, introducing a dataset (VN-Haze) of visible and near-infrared images captured under hazy conditions. A two-stage image enhancement (TSE) method that can effectively rectify the color of visible images affected by fog is proposed to prevent the introduction of distorted color information. Furthermore, we propose an adaptive luminance mapping (ALM) method to prevent color bias in fused images caused by excessive brightness differences between visible and near-infrared images, which occur in vegetation areas. The proposed visible-priority fusion strategy reasonably allocates weights for visible and near-infrared images, minimizing the loss of important features in visible images. Compared with existing dehazing algorithms, the proposed algorithm generates images with natural colors and less distortion and retains important visible information. Moreover, it demonstrates remarkable performance in objective evaluations.
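A crude NumPy sketch of visible-priority luminance fusion: the NIR channel is first mean/std-matched to the visible luminance (a stand-in for the paper's adaptive luminance mapping), then blended with a weight that favors the visible image. The weight value and the mapping rule are assumptions, not the published method.

```python
import numpy as np

def fuse_visible_priority(vis_luma, nir, vis_weight=0.7):
    """vis_luma, nir: float arrays in [0, 1] of equal shape."""
    # Match NIR mean/std to the visible luminance so brightness gaps
    # (e.g. over vegetation) do not bias the fused result.
    scale = vis_luma.std() / (nir.std() + 1e-8)
    nir_mapped = np.clip((nir - nir.mean()) * scale + vis_luma.mean(), 0.0, 1.0)
    # Visible-priority blend: the visible image keeps the larger weight.
    return vis_weight * vis_luma + (1.0 - vis_weight) * nir_mapped

vis = np.random.rand(120, 160)
nir = np.random.rand(120, 160) * 0.5 + 0.4   # brighter NIR, as over vegetation
fused = fuse_visible_priority(vis, nir)
print(fused.shape, round(float(fused.mean()), 3))
```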

Keyword :

Color preserving; Dense fog; Image dehazing; Image fusion; Near-infrared

Cite:


GB/T 7714 Wu, Jing, Wei, Peng, Huang, Feng. Color-preserving visible and near-infrared image fusion for removing fog [J]. | INFRARED PHYSICS & TECHNOLOGY, 2024, 138.
MLA Wu, Jing, et al. "Color-preserving visible and near-infrared image fusion for removing fog". | INFRARED PHYSICS & TECHNOLOGY 138 (2024).
APA Wu, Jing, Wei, Peng, Huang, Feng. Color-preserving visible and near-infrared image fusion for removing fog. | INFRARED PHYSICS & TECHNOLOGY, 2024, 138.

Version :

Color-preserving visible and near-infrared image fusion for removing fog Scopus
Journal Article | 2024, 138 | Infrared Physics and Technology
Color-preserving visible and near-infrared image fusion for removing fog EI
Journal Article | 2024, 138 | Infrared Physics and Technology
Adaptive haze pixel intensity perception transformer structure for image dehazing networks SCIE
Journal Article | 2024, 14 (1) | SCIENTIFIC REPORTS
WoS CC Cited Count: 1

Abstract :

For deep learning-based dehazing networks trained on paired clean-hazy image datasets, handling complex real-world daytime haze scenarios and cross-dataset settings remains a significant concern due to algorithmic inefficiencies and color distortion. To tackle these issues, we propose SwinTieredHazymers (STH), a dehazing network designed to adaptively discern pixel intensities in hazy images and compute the haze residue for clarity restoration. Through a unique three-branch design, we hierarchically modulate haze residuals by leveraging the global features brought by the Transformer and the local features brought by the Convolutional Neural Network (CNN), which gives the algorithm wide applicability. Experimental results demonstrate that our approach surpasses advanced single-image dehazing methods in both quantitative metrics and visual fidelity for real-world hazy image dehazing, while also exhibiting strong performance in cross-dataset dehazing scenarios.
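The "compute haze residue, then subtract" framing can be sketched as a small two-branch network: a CNN branch for local features and an attention branch for global features, whose fused output is a residual map subtracted from the hazy input. The branch design below is assumed for illustration and is not the STH three-branch architecture.

```python
import torch
import torch.nn as nn

class HazeResidualNet(nn.Module):
    """Predict a haze residue map and subtract it from the hazy input."""

    def __init__(self, channels: int = 32):
        super().__init__()
        self.embed = nn.Conv2d(3, channels, 3, padding=1)
        self.cnn_branch = nn.Sequential(               # local features
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        self.attn = nn.MultiheadAttention(channels, 4, batch_first=True)  # global
        self.to_residual = nn.Conv2d(2 * channels, 3, 3, padding=1)

    def forward(self, hazy: torch.Tensor) -> torch.Tensor:
        f = self.embed(hazy)
        local = self.cnn_branch(f)
        b, c, h, w = f.shape
        tokens = f.flatten(2).transpose(1, 2)          # (B, H*W, C)
        glob, _ = self.attn(tokens, tokens, tokens)
        glob = glob.transpose(1, 2).reshape(b, c, h, w)
        residue = self.to_residual(torch.cat([local, glob], dim=1))
        return hazy - residue                          # clear = hazy minus residue

print(HazeResidualNet()(torch.randn(1, 3, 32, 32)).shape)  # torch.Size([1, 3, 32, 32])
```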

Keyword :

Adaptive haze pixel intensity perception; Image dehazing; Multi CNN-Transformer layers

Cite:


GB/T 7714 Wu, Jing, Liu, Zhewei, Huang, Feng, et al. Adaptive haze pixel intensity perception transformer structure for image dehazing networks [J]. | SCIENTIFIC REPORTS, 2024, 14 (1).
MLA Wu, Jing, et al. "Adaptive haze pixel intensity perception transformer structure for image dehazing networks". | SCIENTIFIC REPORTS 14.1 (2024).
APA Wu, Jing, Liu, Zhewei, Huang, Feng, Luo, Rong. Adaptive haze pixel intensity perception transformer structure for image dehazing networks. | SCIENTIFIC REPORTS, 2024, 14 (1).

Version :

Adaptive haze pixel intensity perception transformer structure for image dehazing networks
Journal Article | 2024, 14 (1) | Scientific Reports
Adaptive haze pixel intensity perception transformer structure for image dehazing networks Scopus
Journal Article | 2024, 14 (1) | Scientific Reports
Infrared Image Super-Resolution Network Utilizing the Enhanced Transformer and U-Net SCIE
Journal Article | 2024, 24 (14) | SENSORS
WoS CC Cited Count: 1

Abstract :

Infrared images hold significant value in applications such as remote sensing and fire safety. However, infrared detectors often face the problem of high hardware costs, which limits their widespread use. Advancements in deep learning have spurred innovative approaches to image super-resolution (SR), but comparatively few efforts have been dedicated to the exploration of infrared images. To address this, we design the Residual Swin Transformer and Average Pooling Block (RSTAB) and propose the SwinAIR, which can effectively extract and fuse the diverse frequency features in infrared images and achieve superior SR reconstruction performance. By further integrating SwinAIR with U-Net, we propose the SwinAIR-GAN for real infrared image SR reconstruction. SwinAIR-GAN extends the degradation space to better simulate the degradation process of real infrared images. Additionally, it incorporates spectral normalization, dropout, and artifact discrimination loss to reduce the potential image artifacts. Qualitative and quantitative evaluations on various datasets confirm the effectiveness of our proposed method in reconstructing realistic textures and details of infrared images.

Keyword :

generative adversarial network; image super-resolution; infrared image; transformer

Cite:


GB/T 7714 Huang, Feng, Li, Yunxiang, Ye, Xiaojing, et al. Infrared Image Super-Resolution Network Utilizing the Enhanced Transformer and U-Net [J]. | SENSORS, 2024, 24 (14).
MLA Huang, Feng, et al. "Infrared Image Super-Resolution Network Utilizing the Enhanced Transformer and U-Net". | SENSORS 24.14 (2024).
APA Huang, Feng, Li, Yunxiang, Ye, Xiaojing, Wu, Jing. Infrared Image Super-Resolution Network Utilizing the Enhanced Transformer and U-Net. | SENSORS, 2024, 24 (14).

Version :

Infrared Image Super-Resolution Network Utilizing the Enhanced Transformer and U-Net Scopus
Journal Article | 2024, 24 (14) | Sensors
Infrared Image Super-Resolution Network Utilizing the Enhanced Transformer and U-Net EI
Journal Article | 2024, 24 (14) | Sensors
FEFN: Feature Enhancement Feedforward Network for Lightweight Object Detection in Remote Sensing Images SCIE
Journal Article | 2024, 16 (13) | REMOTE SENSING
WoS CC Cited Count: 1

Abstract :

Object detection in remote sensing images has become a crucial component of computer vision. It has been employed in multiple domains, including military surveillance, maritime rescue, and military operations. However, the high density of small objects in remote sensing images makes it challenging for existing networks to accurately distinguish objects from shallow image features. As a result, many object detection networks produce missed detections and false alarms, particularly for densely arranged and small objects. To address these problems, this paper proposes a feature enhancement feedforward network (FEFN), based on a lightweight channel feedforward module (LCFM) and a feature enhancement module (FEM). First, the FEFN captures shallow spatial information in images through a lightweight channel feedforward module that can extract the edge information of small objects such as ships. Next, it enhances feature interaction and representation through a feature enhancement module that achieves more accurate detection results for densely arranged and small objects. Finally, comparative experiments on two challenging public remote sensing datasets demonstrate the effectiveness of the proposed method.

Keyword :

channel feedforward; feature enhancement; object detection; remote sensing

Cite:


GB/T 7714 Wu, Jing, Ni, Rixiang, Chen, Zhenhua, et al. FEFN: Feature Enhancement Feedforward Network for Lightweight Object Detection in Remote Sensing Images [J]. | REMOTE SENSING, 2024, 16 (13).
MLA Wu, Jing, et al. "FEFN: Feature Enhancement Feedforward Network for Lightweight Object Detection in Remote Sensing Images". | REMOTE SENSING 16.13 (2024).
APA Wu, Jing, Ni, Rixiang, Chen, Zhenhua, Huang, Feng, Chen, Liqiong. FEFN: Feature Enhancement Feedforward Network for Lightweight Object Detection in Remote Sensing Images. | REMOTE SENSING, 2024, 16 (13).

Version :

FEFN: Feature Enhancement Feedforward Network for Lightweight Object Detection in Remote Sensing Images EI
Journal Article | 2024, 16 (13) | Remote Sensing
FEFN: Feature Enhancement Feedforward Network for Lightweight Object Detection in Remote Sensing Images Scopus
Journal Article | 2024, 16 (13) | Remote Sensing