Research Output Search

Query:

Scholar Name: Wu Jing

DFINet: Dynamic feedback iterative network for infrared small target detection SCIE
Journal Article | 2026 , 169 | PATTERN RECOGNITION

Abstract :

Recently, deep learning-based methods have made impressive progress in infrared small target detection (IRSTD). However, the weak and variable nature of small targets constrains the feature extraction and scene adaptation of existing methods, leading to low data utilization and poor robustness. To address this issue, we innovatively introduce a feedback mechanism into IRSTD and propose the dynamic feedback iterative network (DFINet). The main motivation is to guide model training and prediction using the history prediction mask (HPMK) from previous rounds. On the one hand, in the training phase, DFINet can further mine the key features of real targets by training over multiple iterations with limited data; on the other hand, in the prediction phase, DFINet can correct wrong results through feedback iteration to improve model robustness. Specifically, we first propose the dynamic feedback feature fusion module (DFFFM), which dynamically interacts the HPMK with feature maps through a hard attention mechanism to guide feature mining and error correction. Then, for better feature extraction, the cascaded hybrid pyramid pooling module (CHPP) is devised to capture both global and local information. Finally, we propose the dynamic semantic fusion module (DSFM), which innovatively utilizes feedback information to guide the fusion of high-level and low-level features for better feature representation in different scenarios. Extensive experimental results on the publicly available NUDT-SIRST, IRSTD-1k, and SIRST Aug datasets show that DFINet outperforms several state-of-the-art methods and achieves superior detection performance. Our code will be publicly available at https://github.com/uisdu/DFINet.
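To make the feedback idea concrete, the following is a minimal illustrative sketch (not the authors' DFINet code): a toy network receives the image together with the prediction mask from the previous round and refines it over several rounds. The module names, the zero-initialised starting mask, and the layer sizes are assumptions for illustration only.

```python
# Hedged sketch of feedback iteration: feed the previous prediction mask back as input.
import torch
import torch.nn as nn

class TinyFeedbackNet(nn.Module):
    """Toy detector that consumes the image plus the history prediction mask."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, image, prev_mask):
        # Concatenate the previous mask with the input so the network can refine
        # its own earlier output (the feedback signal).
        x = torch.cat([image, prev_mask], dim=1)
        return torch.sigmoid(self.body(x))

def iterative_predict(net, image, num_rounds=3):
    mask = torch.zeros_like(image)      # round 0: no history yet
    for _ in range(num_rounds):         # each round feeds the last mask back in
        mask = net(image, mask)
    return mask

net = TinyFeedbackNet()
out = iterative_predict(net, torch.rand(1, 1, 64, 64))
print(out.shape)  # torch.Size([1, 1, 64, 64])
```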

Keyword :

Error correction; Feature mining; Feedback iteration; Infrared small target detection

Cite:


GB/T 7714 Wu, Jing , Luo, Changhai , Qiu, Zhaobing et al. DFINet: Dynamic feedback iterative network for infrared small target detection [J]. | PATTERN RECOGNITION , 2026 , 169 .
MLA Wu, Jing et al. "DFINet: Dynamic feedback iterative network for infrared small target detection" . | PATTERN RECOGNITION 169 (2026) .
APA Wu, Jing , Luo, Changhai , Qiu, Zhaobing , Chen, Liqiong , Ni, Rixiang , Li, Yunxiang et al. DFINet: Dynamic feedback iterative network for infrared small target detection . | PATTERN RECOGNITION , 2026 , 169 .

EAFF-Net: Efficient attention feature fusion network for dual-modality pedestrian detection SCIE
Journal Article | 2025 , 145 | INFRARED PHYSICS & TECHNOLOGY

Abstract :

A pedestrian detection network that uses paired infrared and visible images can improve detection accuracy by fusing their complementary information, especially in challenging illumination conditions. However, most existing dual-modality methods focus only on the effectiveness of feature maps between different modalities while neglecting redundant information within the modalities. This oversight often degrades detection performance in low-illumination conditions. This paper proposes an efficient attention feature fusion network (EAFF-Net), which suppresses redundant information and enhances the fusion of features from dual-modality images. Firstly, we design a dual-backbone network based on CSPDarknet53 and combine it with an efficient partial spatial pyramid pooling module (EPSPPM) to improve the efficiency of feature extraction in different modalities. Secondly, a feature attention fusion module (FAFM) is built to adaptively weaken modal redundant information and improve the fusion of features. Finally, a deep attention pyramid module (DAPM) is proposed to cascade multi-scale feature information and obtain more detailed features of small targets. The effectiveness of EAFF-Net in pedestrian detection has been demonstrated through experiments on two public datasets.
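The sketch below is a hedged illustration, loosely in the spirit of the attention-based fusion the abstract describes: a squeeze-and-excitation style gate down-weights redundant channels before the two modality streams are merged. The gating scheme, reduction ratio, and layer names are my assumptions, not the FAFM implementation.

```python
# Illustrative attention-weighted fusion of infrared and visible feature maps.
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    def __init__(self, channels):
        super().__init__()
        # SE-style gate predicts per-channel weights over the concatenated modalities.
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(2 * channels, channels // 4, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels // 4, 2 * channels, 1), nn.Sigmoid(),
        )
        self.proj = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, feat_ir, feat_vis):
        x = torch.cat([feat_ir, feat_vis], dim=1)
        x = x * self.gate(x)      # suppress redundant channels from either modality
        return self.proj(x)       # fuse back to a single feature map

fuse = AttentionFusion(64)
y = fuse(torch.rand(1, 64, 32, 32), torch.rand(1, 64, 32, 32))
print(y.shape)  # torch.Size([1, 64, 32, 32])
```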

Keyword :

Deep learning; Feature attention; Multiscale features; Pedestrian detection; Visible and infrared images

Cite:


GB/T 7714 Shen, Ying , Xie, Xiaoyang , Wu, Jing et al. EAFF-Net: Efficient attention feature fusion network for dual-modality pedestrian detection [J]. | INFRARED PHYSICS & TECHNOLOGY , 2025 , 145 .
MLA Shen, Ying et al. "EAFF-Net: Efficient attention feature fusion network for dual-modality pedestrian detection" . | INFRARED PHYSICS & TECHNOLOGY 145 (2025) .
APA Shen, Ying , Xie, Xiaoyang , Wu, Jing , Chen, Liqiong , Huang, Feng . EAFF-Net: Efficient attention feature fusion network for dual-modality pedestrian detection . | INFRARED PHYSICS & TECHNOLOGY , 2025 , 145 .

Point-to-Point Regression: Accurate Infrared Small Target Detection With Single-Point Annotation SCIE
Journal Article | 2025 , 63 | IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING

Abstract :

Infrared small target detection (IRSTD) plays a vital role in various fields, especially in military early warning and maritime rescue. Its main goal is to accurately locate targets at long distances. Current deep learning (DL)-based methods mainly rely on mask-to-mask or box-to-box regression training approaches, making considerable progress in detection accuracy. However, these methods rely on large amounts of training data with expensive manual annotation. Although some researchers attempt to reduce the cost using single-point weak supervision (SPWS), the limited labeling accuracy significantly degrades the detection performance. To address these issues, we propose a novel point-to-point regression high-resolution dynamic network (P2P-HDNet), which can accurately locate the target center using only single-point annotation. Specifically, we first devise the high-resolution cross-feature extraction module (HCEM) to provide richer target detail information for the deep feature maps. Notably, HCEM maintains high resolution throughout the feature extraction process to minimize information loss. Then, the dynamic coordinate fusion module (DCFM) is devised to fully fuse the multidimensional features and enhance the positional sensitivity. Finally, we devise an adaptive target localization detection head (ATLDH) to further suppress clutter and improve the localization accuracy by regressing the Gaussian heatmap and adaptive nonmaximal suppression strategy. Extensive experimental results show that P2P-HDNet can achieve better detection accuracy than the state-of-the-art (SOTA) methods with only single-point annotation. In addition, our code and datasets will be available at: https://github.com/Anton-Nrx/P2P-HDNet.
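As a small worked example of the point-to-heatmap idea in the abstract, the sketch below converts a single-point annotation into a Gaussian heatmap training target and recovers the predicted centre from a heatmap's peak. The sigma value and the argmax decoding are assumptions for illustration; this is not the P2P-HDNet code.

```python
# Single-point annotation -> Gaussian heatmap target, and peak decoding.
import numpy as np

def point_to_heatmap(h, w, cy, cx, sigma=2.0):
    """Turn one (cy, cx) point label into a Gaussian heatmap training target."""
    ys, xs = np.mgrid[0:h, 0:w]
    return np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma ** 2))

def decode_peak(heatmap):
    """Recover the predicted target centre as the heatmap argmax."""
    return np.unravel_index(np.argmax(heatmap), heatmap.shape)

target = point_to_heatmap(64, 64, cy=20, cx=45)
print(decode_peak(target))  # (20, 45)
```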

Keyword :

Dynamic feature attention mechanism; high-resolution feature extraction; infrared small target detection (IRSTD); point-to-point regression (P2PR); single-point supervision

Cite:


GB/T 7714 Ni, Rixiang , Wu, Jing , Qiu, Zhaobing et al. Point-to-Point Regression: Accurate Infrared Small Target Detection With Single-Point Annotation [J]. | IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING , 2025 , 63 .
MLA Ni, Rixiang et al. "Point-to-Point Regression: Accurate Infrared Small Target Detection With Single-Point Annotation" . | IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING 63 (2025) .
APA Ni, Rixiang , Wu, Jing , Qiu, Zhaobing , Chen, Liqiong , Luo, Changhai , Huang, Feng et al. Point-to-Point Regression: Accurate Infrared Small Target Detection With Single-Point Annotation . | IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING , 2025 , 63 .

STAMF: Synergistic transformer and mamba fusion network for RGB-Polarization based underwater salient object detection SCIE
Journal Article | 2025 , 122 | INFORMATION FUSION

Abstract :

The quality of underwater imaging is severely compromised due to the light scattering and absorption caused by suspended particles, limiting the effectiveness of following underwater salient object detection (USOD) tasks. Polarization information offers a unique perspective by interpreting the intrinsic physical properties of objects, potentially enhancing the contrast between objects and background in complex scenes. However, it is rarely applied in the field of USOD. In this paper, we build a dataset named TJUP-USOD, which includes both RGB and polarization (i,e., RGB-P) images; based on this, we design a USOD network, called STAMF, to explore the strengths of both color and polarization information. STAMF synthesizes these complementary information streams to generate high-contrast, vivid scene representations that improve the discernibility of underwater features. Specifically, the Omnidirectional Tokens-to-Token Vision Mamba notably amplifies the capacity to handle both global and local information by employing multidirectional scanning and iterative integration of inputs. Besides, introducing the Mamba Cross-Modal Fusion Module adeptly merges RGB and polarization features, amalgamating global insights to refine local pixel-wise fusion precision and alleviate overall misguidance resulting from the fusion of erroneous modal data in demanding underwater environments. Comparative experiments with 27 methods and extensive ablation study results demonstrate that, the proposed STAMF, with only 25.85 million parameters, effectively leverages RGB-P information, achieving state-of-the-art performance, and opens a new door for the USOD tasks. The proposed STAMF once again demonstrates the importance of increasing the dimensionality of the dataset for USOD; and further exploring the advantages of network structures based on multi-dimensional data will further enhance task performance. The code and dataset are publicly available: https://github.com/Kingwin97/STAMF.
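The following is only a hedged toy of globally gated, pixel-wise cross-modal fusion; it substitutes a simple gating mechanism for the Mamba-based fusion named in the abstract, and every layer name and size here is an assumption rather than the STAMF design.

```python
# Toy cross-modal fusion: a global descriptor gates a local polarization correction of the RGB stream.
import torch
import torch.nn as nn

class GatedCrossModalFusion(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.global_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(2 * channels, channels, 1), nn.Sigmoid(),
        )
        self.local_mix = nn.Conv2d(2 * channels, channels, 3, padding=1)

    def forward(self, rgb_feat, pol_feat):
        pair = torch.cat([rgb_feat, pol_feat], dim=1)
        g = self.global_gate(pair)                  # global view of how much to trust each channel
        return rgb_feat + g * self.local_mix(pair)  # locally refined, globally gated correction

fusion = GatedCrossModalFusion(32)
print(fusion(torch.rand(1, 32, 16, 16), torch.rand(1, 32, 16, 16)).shape)
```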

Keyword :

Dataset; Mamba; Polarimetric imaging; Transformer; Underwater salient object detection

Cite:


GB/T 7714 Ma, Qianwen , Li, Xiaobo , Li, Bincheng et al. STAMF: Synergistic transformer and mamba fusion network for RGB-Polarization based underwater salient object detection [J]. | INFORMATION FUSION , 2025 , 122 .
MLA Ma, Qianwen et al. "STAMF: Synergistic transformer and mamba fusion network for RGB-Polarization based underwater salient object detection" . | INFORMATION FUSION 122 (2025) .
APA Ma, Qianwen , Li, Xiaobo , Li, Bincheng , Zhu, Zhen , Wu, Jing , Huang, Feng et al. STAMF: Synergistic transformer and mamba fusion network for RGB-Polarization based underwater salient object detection . | INFORMATION FUSION , 2025 , 122 .

PFAN: progressive feature aggregation network for lightweight image super-resolution SCIE
Journal Article | 2025 , 41 (11) , 8431-8450 | VISUAL COMPUTER
WoS CC Cited Count: 1

Abstract :

Image super-resolution (SR) has recently gained traction in various fields, including remote sensing, biomedicine, and video surveillance. Nonetheless, the majority of advancements in SR have been achieved by scaling up the architecture of convolutional neural networks, which inevitably increases computational complexity. In addition, most existing SR models struggle to effectively capture high-frequency information, resulting in overly smooth reconstructed images. To address these issues, we propose a lightweight Progressive Feature Aggregation Network (PFAN), which leverages Progressive Feature Aggregation Blocks to enhance different features through a progressive strategy. Specifically, we propose a Key Information Perception Module that captures high-frequency details across the spatial and channel dimensions to recover edge features. Besides, we design a Local Feature Enhancement Module, which effectively combines multi-scale convolutions for local feature extraction with a Transformer for long-range dependency modeling. Through the progressive fusion of rich edge details and texture features, our PFAN achieves better reconstruction performance. Extensive experiments on five benchmark datasets demonstrate that PFAN outperforms state-of-the-art methods and strikes a better balance among SR performance, parameters, and computational complexity. Code is available at https://github.com/handsomeyxk/PFAN.
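As a rough illustration of the multi-scale local extraction the abstract attributes to the Local Feature Enhancement Module, the sketch below runs parallel convolutions at several kernel sizes and aggregates them residually. The kernel sizes and the residual merge are assumptions, not the PFAN implementation.

```python
# Multi-scale local feature extraction with residual aggregation (illustrative only).
import torch
import torch.nn as nn

class MultiScaleLocalBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        # Parallel 3x3 / 5x5 / 7x7 branches capture local details at several scales.
        self.branches = nn.ModuleList(
            [nn.Conv2d(channels, channels, k, padding=k // 2) for k in (3, 5, 7)]
        )
        self.merge = nn.Conv2d(3 * channels, channels, 1)

    def forward(self, x):
        feats = [branch(x) for branch in self.branches]
        return x + self.merge(torch.cat(feats, dim=1))   # residual aggregation

block = MultiScaleLocalBlock(48)
print(block(torch.rand(1, 48, 24, 24)).shape)  # torch.Size([1, 48, 24, 24])
```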

Keyword :

CNN; Key information perception; Local feature enhancement; Progressive feature aggregation network; Super-resolution; Transformer

Cite:


GB/T 7714 Chen, Liqiong , Yang, Xiangkun , Wang, Shu et al. PFAN: progressive feature aggregation network for lightweight image super-resolution [J]. | VISUAL COMPUTER , 2025 , 41 (11) : 8431-8450 .
MLA Chen, Liqiong et al. "PFAN: progressive feature aggregation network for lightweight image super-resolution" . | VISUAL COMPUTER 41 . 11 (2025) : 8431-8450 .
APA Chen, Liqiong , Yang, Xiangkun , Wang, Shu , Shen, Ying , Wu, Jing , Huang, Feng et al. PFAN: progressive feature aggregation network for lightweight image super-resolution . | VISUAL COMPUTER , 2025 , 41 (11) , 8431-8450 .

Color-preserving visible and near-infrared image fusion for removing fog SCIE
Journal Article | 2024 , 138 | INFRARED PHYSICS & TECHNOLOGY
WoS CC Cited Count: 2

Abstract :

Without scene depth information, single-sensor dehazing methods based on deep learning or prior information do not work effectively in dense foggy scenes. An effective approach is to remove the dense fog by fusing visible and near-infrared images. However, current dehazing algorithms based on near-infrared and visible images suffer from color distortion and information loss. To overcome these challenges, we propose a color-preserving dehazing method that fuses near-infrared and visible images, and introduce a dataset (VN-Haze) of visible and near-infrared images captured under hazy conditions. A two-stage image enhancement (TSE) method that can effectively rectify the color of visible images affected by fog is proposed to prevent the introduction of distorted color information. Furthermore, we propose an adaptive luminance mapping (ALM) method to prevent color bias in fusion images caused by excessive brightness differences between visible and near-infrared images in vegetation areas. The proposed visible-priority fusion strategy reasonably allocates weights to the visible and near-infrared images, minimizing the loss of important features in visible images. Compared with existing dehazing algorithms, the proposed algorithm generates images with natural colors and less distortion while retaining important visible information, and it demonstrates remarkable performance in objective evaluations.
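Purely as an illustration of the fusion idea, the sketch below maps a NIR image toward the visible luminance statistics (a crude stand-in for ALM) and then fuses the two with visible-priority weights. The mean/std matching and the 0.7/0.3 weights are assumptions, not the paper's method.

```python
# Crude visible-priority fusion of a visible luminance channel and a NIR image.
import numpy as np

def match_luminance(nir, vis_luma):
    """Map NIR brightness toward the visible luminance statistics (stand-in for ALM)."""
    nir = (nir - nir.mean()) / (nir.std() + 1e-6)
    return np.clip(nir * vis_luma.std() + vis_luma.mean(), 0.0, 1.0)

def visible_priority_fuse(vis_luma, nir, w_vis=0.7):
    """Weighted fusion that keeps visible information dominant."""
    nir_mapped = match_luminance(nir, vis_luma)
    return w_vis * vis_luma + (1.0 - w_vis) * nir_mapped

vis = np.random.rand(128, 128)
nir = np.random.rand(128, 128)
print(visible_priority_fuse(vis, nir).shape)  # (128, 128)
```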

Keyword :

Color preserving; Dense fog; Image dehazing; Image fusion; Near-infrared

Cite:


GB/T 7714 Wu, Jing , Wei, Peng , Huang, Feng . Color-preserving visible and near-infrared image fusion for removing fog [J]. | INFRARED PHYSICS & TECHNOLOGY , 2024 , 138 .
MLA Wu, Jing et al. "Color-preserving visible and near-infrared image fusion for removing fog" . | INFRARED PHYSICS & TECHNOLOGY 138 (2024) .
APA Wu, Jing , Wei, Peng , Huang, Feng . Color-preserving visible and near-infrared image fusion for removing fog . | INFRARED PHYSICS & TECHNOLOGY , 2024 , 138 .

Infrared Image Super-Resolution Network Utilizing the Enhanced Transformer and U-Net SCIE
Journal Article | 2024 , 24 (14) | SENSORS
WoS CC Cited Count: 1

Abstract :

Infrared images hold significant value in applications such as remote sensing and fire safety. However, infrared detectors often face the problem of high hardware costs, which limits their widespread use. Advancements in deep learning have spurred innovative approaches to image super-resolution (SR), but comparatively few efforts have been dedicated to the exploration of infrared images. To address this, we design the Residual Swin Transformer and Average Pooling Block (RSTAB) and propose the SwinAIR, which can effectively extract and fuse the diverse frequency features in infrared images and achieve superior SR reconstruction performance. By further integrating SwinAIR with U-Net, we propose the SwinAIR-GAN for real infrared image SR reconstruction. SwinAIR-GAN extends the degradation space to better simulate the degradation process of real infrared images. Additionally, it incorporates spectral normalization, dropout, and artifact discrimination loss to reduce the potential image artifacts. Qualitative and quantitative evaluations on various datasets confirm the effectiveness of our proposed method in reconstructing realistic textures and details of infrared images.
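To show what "extending the degradation space" for real-image SR can look like in practice, here is a hedged sketch of a synthetic blur-downsample-noise pipeline for generating low-resolution training inputs. The box blur, bicubic downscaling, and noise level are generic assumptions and not the SwinAIR-GAN degradation model.

```python
# Synthetic degradation pipeline (blur -> downsample -> noise) for SR training pairs.
import torch
import torch.nn.functional as F

def degrade(hr, scale=4, noise_sigma=0.05):
    """Turn a clean high-resolution tensor (N,C,H,W) into a synthetic low-resolution input."""
    blur_kernel = torch.full((hr.shape[1], 1, 3, 3), 1.0 / 9.0)           # simple box blur
    blurred = F.conv2d(hr, blur_kernel, padding=1, groups=hr.shape[1])
    lr = F.interpolate(blurred, scale_factor=1.0 / scale, mode="bicubic", align_corners=False)
    return (lr + noise_sigma * torch.randn_like(lr)).clamp(0.0, 1.0)

hr = torch.rand(1, 1, 64, 64)
print(degrade(hr).shape)  # torch.Size([1, 1, 16, 16])
```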

Keyword :

generative adversarial network; image super-resolution; infrared image; transformer

Cite:


GB/T 7714 Huang, Feng , Li, Yunxiang , Ye, Xiaojing et al. Infrared Image Super-Resolution Network Utilizing the Enhanced Transformer and U-Net [J]. | SENSORS , 2024 , 24 (14) .
MLA Huang, Feng et al. "Infrared Image Super-Resolution Network Utilizing the Enhanced Transformer and U-Net" . | SENSORS 24 . 14 (2024) .
APA Huang, Feng , Li, Yunxiang , Ye, Xiaojing , Wu, Jing . Infrared Image Super-Resolution Network Utilizing the Enhanced Transformer and U-Net . | SENSORS , 2024 , 24 (14) .

FEFN: Feature Enhancement Feedforward Network for Lightweight Object Detection in Remote Sensing Images SCIE
Journal Article | 2024 , 16 (13) | REMOTE SENSING
WoS CC Cited Count: 1

Abstract :

Object detection in remote sensing images has become a crucial component of computer vision and has been employed in multiple domains, including military surveillance, maritime rescue, and military operations. However, the high density of small objects in remote sensing images makes it challenging for existing networks to accurately distinguish objects from shallow image features. These factors lead many object detection networks to produce missed detections and false alarms, particularly for densely arranged objects and small objects. To address the above problems, this paper proposes a feature enhancement feedforward network (FEFN), based on a lightweight channel feedforward module (LCFM) and a feature enhancement module (FEM). First, the FEFN captures shallow spatial information in images through the lightweight channel feedforward module, which can extract the edge information of small objects such as ships. Next, it enhances feature interaction and representation through the feature enhancement module, achieving more accurate detection results for densely arranged objects and small objects. Finally, comparative experiments on two publicly available, challenging remote sensing datasets demonstrate the effectiveness of the proposed method.
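The block below is a guess at what a lightweight channel feedforward block could look like: pointwise channel mixing with a cheap depthwise spatial convolution and a residual connection. All layer choices (expansion ratio, GELU, depthwise conv) are assumptions for illustration, not the LCFM as published.

```python
# Illustrative lightweight channel-feedforward block.
import torch
import torch.nn as nn

class LightChannelFeedforward(nn.Module):
    def __init__(self, channels, expansion=2):
        super().__init__()
        hidden = channels * expansion
        self.block = nn.Sequential(
            nn.Conv2d(channels, hidden, 1),                          # channel mixing (expand)
            nn.Conv2d(hidden, hidden, 3, padding=1, groups=hidden),  # cheap depthwise spatial conv
            nn.GELU(),
            nn.Conv2d(hidden, channels, 1),                          # channel mixing (project back)
        )

    def forward(self, x):
        return x + self.block(x)   # residual, so shallow edge cues are preserved

ffn = LightChannelFeedforward(32)
print(ffn(torch.rand(1, 32, 40, 40)).shape)  # torch.Size([1, 32, 40, 40])
```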

Keyword :

channel feedforward; feature enhancement; object detection; remote sensing

Cite:


GB/T 7714 Wu, Jing , Ni, Rixiang , Chen, Zhenhua et al. FEFN: Feature Enhancement Feedforward Network for Lightweight Object Detection in Remote Sensing Images [J]. | REMOTE SENSING , 2024 , 16 (13) .
MLA Wu, Jing et al. "FEFN: Feature Enhancement Feedforward Network for Lightweight Object Detection in Remote Sensing Images" . | REMOTE SENSING 16 . 13 (2024) .
APA Wu, Jing , Ni, Rixiang , Chen, Zhenhua , Huang, Feng , Chen, Liqiong . FEFN: Feature Enhancement Feedforward Network for Lightweight Object Detection in Remote Sensing Images . | REMOTE SENSING , 2024 , 16 (13) .

Adaptive haze pixel intensity perception transformer structure for image dehazing networks SCIE
Journal Article | 2024 , 14 (1) | SCIENTIFIC REPORTS
WoS CC Cited Count: 1

Abstract :

For deep learning-based dehazing networks trained on paired clean-hazy image datasets, handling complex real-world daytime haze scenarios and generalizing across datasets remain significant concerns due to algorithmic inefficiencies and color distortion. To tackle these issues, we propose SwinTieredHazymers (STH), a dehazing network designed to adaptively discern pixel intensities in hazy images and compute the haze residue for clarity restoration. Through a unique three-branch design, we hierarchically modulate haze residuals by leveraging the global features brought by the Transformer and the local features brought by the Convolutional Neural Network (CNN), which gives the algorithm wide applicability. Experimental results demonstrate that our approach surpasses advanced single-image dehazing methods in both quantitative metrics and visual fidelity for real-world hazy image dehazing, while also exhibiting strong performance in cross-dataset dehazing scenarios.
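A minimal sketch of the "predict the haze residue, then subtract it" idea follows; the tiny two-layer predictor and the output clamping are assumptions for illustration and do not reflect the STH three-branch architecture.

```python
# Toy residual dehazing: estimate a per-pixel haze component and subtract it.
import torch
import torch.nn as nn

class HazeResiduePredictor(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 3, 3, padding=1),
        )

    def forward(self, hazy):
        residue = self.net(hazy)                 # estimated haze component per pixel
        return (hazy - residue).clamp(0.0, 1.0)  # restored image = hazy input minus haze residue

dehaze = HazeResiduePredictor()
print(dehaze(torch.rand(1, 3, 64, 64)).shape)  # torch.Size([1, 3, 64, 64])
```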

Keyword :

Adaptive haze pixel intensity perception; Image dehazing; Multi CNN-Transformer layers

Cite:


GB/T 7714 Wu, Jing , Liu, Zhewei , Huang, Feng et al. Adaptive haze pixel intensity perception transformer structure for image dehazing networks [J]. | SCIENTIFIC REPORTS , 2024 , 14 (1) .
MLA Wu, Jing et al. "Adaptive haze pixel intensity perception transformer structure for image dehazing networks" . | SCIENTIFIC REPORTS 14 . 1 (2024) .
APA Wu, Jing , Liu, Zhewei , Huang, Feng , Luo, Rong . Adaptive haze pixel intensity perception transformer structure for image dehazing networks . | SCIENTIFIC REPORTS , 2024 , 14 (1) .

空间约束下异源图像误匹配特征点剔除算法 (Mismatched feature point removal algorithm for heterologous images under spatial constraints)
Journal Article | 2024 , 44 (20) , 208-219 | 光学学报 (Acta Optica Sinica)

Abstract :

Owing to the pronounced differences in their spectral characteristics, infrared and visible images are prone to high feature-point mismatch rates during registration. Widely used mismatch-rejection algorithms typically rely on random sampling combined with model fitting; such methods struggle to balance registration accuracy and speed, manifesting as excessive iteration counts or limited robustness. To address this problem, a spatially constrained priority sampling consensus (SC-PRISAC) mismatch-rejection algorithm is proposed. A dual-spectrum calibration target exhibiting both infrared and visible features is designed by exploiting differences in material emissivity, and the intrinsic and extrinsic camera parameters are obtained through bilateral-filtering pyramid calibration; on this basis, spatial constraints between the heterologous images are constructed using the epipolar constraint theorem and a depth-consistency principle. A priority-sampling strategy for high-quality feature points reduces the number of algorithm iterations and effectively removes mismatched feature points. Experiments show that the proposed algorithm achieves sub-pixel infrared-visible binocular calibration, with the calibration error reduced to 0.430 pixel; while improving registration accuracy it also raises processing speed, with a homography estimation error of 7.857 and a processing time of only 1.919 ms, outperforming RANSAC (random sample consensus) and other algorithms on all metrics. The proposed algorithm provides a more reliable and efficient mismatch-rejection solution for infrared-visible image registration.
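As a minimal illustration of the epipolar-constraint test used to reject mismatched feature points, the NumPy sketch below computes each matched point's distance to its epipolar line and discards matches beyond a threshold. The fundamental matrix is assumed to come from calibration, and the 1.5-pixel threshold and example matrix values are arbitrary; this is not the SC-PRISAC implementation.

```python
# Epipolar-distance test for rejecting mismatched feature points.
import numpy as np

def epipolar_distance(F, pts1, pts2):
    """Distance of each pts2 point to the epipolar line F @ pts1 (points are Nx2, in pixels)."""
    ones = np.ones((pts1.shape[0], 1))
    x1 = np.hstack([pts1, ones])            # homogeneous coordinates
    x2 = np.hstack([pts2, ones])
    lines = (F @ x1.T).T                    # epipolar lines in image 2: ax + by + c = 0
    num = np.abs(np.sum(lines * x2, axis=1))
    return num / np.sqrt(lines[:, 0] ** 2 + lines[:, 1] ** 2)

def reject_mismatches(F, pts1, pts2, thresh=1.5):
    keep = epipolar_distance(F, pts1, pts2) < thresh
    return pts1[keep], pts2[keep]

F = np.array([[0.0, -1e-4, 0.01], [1e-4, 0.0, -0.03], [-0.01, 0.03, 1.0]])
p1 = np.random.rand(50, 2) * 100
p2 = p1 + np.random.randn(50, 2)
print(reject_mismatches(F, p1, p2)[0].shape)
```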

Keyword :

Binocular calibration; Image registration; Epipolar constraint; Mismatched feature point removal

Cite:


GB/T 7714 沈英 , 林烨 , 陈海涛 et al. 空间约束下异源图像误匹配特征点剔除算法 [J]. | 光学学报 , 2024 , 44 (20) : 208-219 .
MLA 沈英 et al. "空间约束下异源图像误匹配特征点剔除算法" . | 光学学报 44 . 20 (2024) : 208-219 .
APA 沈英 , 林烨 , 陈海涛 , 吴靖 , 黄峰 . 空间约束下异源图像误匹配特征点剔除算法 . | 光学学报 , 2024 , 44 (20) , 208-219 .
