Publication Search
Towards complex scenes: A deep learning-based camouflaged people detection method for snapshot multispectral images SCIE CSCD
Journal Article | 2024, 34, 269-281 | DEFENCE TECHNOLOGY

Abstract :

Camouflaged people are extremely adept at actively concealing themselves by effectively exploiting cover and the surrounding environment. Despite advances in optical detection capabilities through imaging systems, including spectral, polarization, and infrared technologies, there is still no effective real-time method for accurately detecting small, effectively camouflaged people in complex real-world scenes. Here, this study proposes a snapshot multispectral image-based camouflaged people detection model, multispectral YOLO (MS-YOLO), which utilizes the SPD-Conv and SimAM modules to effectively represent targets and suppress background interference by exploiting spatial-spectral target information. In addition, the study constructs the first real-shot multispectral camouflaged people dataset (MSCPD), which encompasses diverse scenes, target scales, and attitudes. To minimize information redundancy, MS-YOLO selects an optimal subset of 12 bands with strong feature representation and minimal inter-band correlation as input. In experiments on the MSCPD, MS-YOLO achieves a mean Average Precision of 94.31% and real-time detection at 65 frames per second, which confirms the effectiveness and efficiency of our method in detecting camouflaged people in various typical desert and forest scenes. Our approach offers valuable support for improving the perception capabilities of unmanned aerial vehicles in detecting enemy forces and rescuing personnel on the battlefield. (c) 2023 China Ordnance Society. Publishing services by Elsevier B.V. on behalf of KeAi Communications Co. Ltd. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
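
The abstract names SPD-Conv and SimAM as the modules MS-YOLO uses to emphasize targets and suppress background. Purely as a point of reference, the sketch below reproduces the published parameter-free SimAM attention in PyTorch; how MS-YOLO wires it into the detector and handles the 12-band input is not described in the abstract and is not assumed here.

```python
import torch

def simam(x: torch.Tensor, eps: float = 1e-4) -> torch.Tensor:
    """Parameter-free SimAM attention, as referenced by the MS-YOLO abstract.

    x: feature map of shape (B, C, H, W); returns a reweighted map of the same shape.
    """
    b, c, h, w = x.shape
    n = h * w - 1
    # Squared deviation of each position from its channel-wise spatial mean.
    d = (x - x.mean(dim=(2, 3), keepdim=True)).pow(2)
    # Channel-wise spatial variance.
    v = d.sum(dim=(2, 3), keepdim=True) / n
    # Inverse energy: distinctive (low-energy) neurons receive larger weights.
    e_inv = d / (4 * (v + eps)) + 0.5
    return x * torch.sigmoid(e_inv)

# Example: a 12-channel tensor, matching the 12-band subset selected in the paper.
feats = torch.randn(1, 12, 64, 64)
print(simam(feats).shape)  # torch.Size([1, 12, 64, 64])
```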

Keyword :

Camouflaged people detection; Complex remote sensing scenes; MS-YOLO; Optimal band selection; Snapshot multispectral imaging

Cite:

GB/T 7714 Wang, Shu , Zeng, Dawei , Xu, Yixuan et al. Towards complex scenes: A deep learning-based camouflaged people detection method for snapshot multispectral images [J]. | DEFENCE TECHNOLOGY , 2024 , 34 : 269-281 .
MLA Wang, Shu et al. "Towards complex scenes: A deep learning-based camouflaged people detection method for snapshot multispectral images" . | DEFENCE TECHNOLOGY 34 (2024) : 269-281 .
APA Wang, Shu , Zeng, Dawei , Xu, Yixuan , Yang, Gonghan , Huang, Feng , Chen, Liqiong . Towards complex scenes: A deep learning-based camouflaged people detection method for snapshot multispectral images . | DEFENCE TECHNOLOGY , 2024 , 34 , 269-281 .

Robust Unsupervised Multifeature Representation for Infrared Small Target Detection SCIE
Journal Article | 2024, 17, 10306-10323 | IEEE JOURNAL OF SELECTED TOPICS IN APPLIED EARTH OBSERVATIONS AND REMOTE SENSING

Abstract :

Infrared small target detection is critical to infrared search and tracking systems. However, accurate and robust detection remains challenging due to the scarcity of target information and the complexity of clutter interference. Existing methods have some limitations in feature representation, leading to poor detection performance in complex scenes. Especially when there are sharp edges near the target or in cluster multitarget detection, the "target suppression" phenomenon tends to occur. To address this issue, we propose a robust unsupervised multifeature representation (RUMFR) method for infrared small target detection. On the one hand, robust unsupervised spatial clustering (RUSC) is designed to improve the accuracy of feature extraction; on the other hand, pixel-level multiple feature representation is proposed to fully utilize the target detail information. Specifically, we first propose the center-weighted interclass difference measure (CWIDM) with a trilayer design for fast candidate target extraction. Note that CWIDM also guides the parameter settings of RUSC. Then, the RUSC-based model is constructed to accurately extract target features in complex scenes. By designing the parameter adaptive strategy and iterative clustering strategy, RUSC can robustly segment cluster multitargets from complex backgrounds. Finally, RUMFR that fuses pixel-level contrast, distribution, and directional gradient features is proposed for better target representation and clutter suppression. Extensive experimental results show that our method has stronger feature representation capability and achieves better detection performance than several state-of-the-art methods.
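
The abstract outlines a pipeline of candidate extraction (CWIDM), robust clustering (RUSC), and pixel-level multifeature fusion, but gives no formulas. Purely as a generic illustration of the kind of pixel-level contrast cue such methods build on, and not the paper's CWIDM, the sketch below computes a simple center-versus-surround difference map and thresholds it to obtain candidate target pixels.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_contrast_candidates(img: np.ndarray, cell: int = 5, k: float = 4.0) -> np.ndarray:
    """Generic center-vs-surround contrast for infrared small-target candidates.

    Not the paper's CWIDM: a plain difference of local means, thresholded at
    mean + k * std, only to illustrate the idea of candidate extraction.
    """
    img = img.astype(np.float32)
    center = uniform_filter(img, size=cell)         # local mean over a small cell
    surround = uniform_filter(img, size=3 * cell)   # mean over a larger neighborhood
    contrast = np.clip(center - surround, 0, None)  # small bright targets stand out
    thr = contrast.mean() + k * contrast.std()
    return contrast > thr                           # boolean candidate mask

frame = np.random.rand(256, 256).astype(np.float32)
frame[100:103, 120:123] += 2.0                      # synthetic 3x3 "target"
print(local_contrast_candidates(frame).sum())
```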

Keyword :

Clutter; Feature extraction; Fuses; Image edge detection; Infrared small target detection; Noise; Object detection; pixel-level multifeature representation; robust unsupervised spatial clustering (RUSC); Sparse matrices; "target suppression" phenomenon

Cite:

GB/T 7714 Chen, Liqiong , Wu, Tong , Zheng, Shuyuan et al. Robust Unsupervised Multifeature Representation for Infrared Small Target Detection [J]. | IEEE JOURNAL OF SELECTED TOPICS IN APPLIED EARTH OBSERVATIONS AND REMOTE SENSING , 2024 , 17 : 10306-10323 .
MLA Chen, Liqiong et al. "Robust Unsupervised Multifeature Representation for Infrared Small Target Detection" . | IEEE JOURNAL OF SELECTED TOPICS IN APPLIED EARTH OBSERVATIONS AND REMOTE SENSING 17 (2024) : 10306-10323 .
APA Chen, Liqiong , Wu, Tong , Zheng, Shuyuan , Qiu, Zhaobing , Huang, Feng . Robust Unsupervised Multifeature Representation for Infrared Small Target Detection . | IEEE JOURNAL OF SELECTED TOPICS IN APPLIED EARTH OBSERVATIONS AND REMOTE SENSING , 2024 , 17 , 10306-10323 .

FEFN: Feature Enhancement Feedforward Network for Lightweight Object Detection in Remote Sensing Images EI
Journal Article | 2024, 16 (13) | Remote Sensing

Abstract :

Object detection in remote sensing images has become a crucial component of computer vision. It has been employed in multiple domains, including military surveillance, maritime rescue, and military operations. However, the high density of small objects in remote sensing images makes it challenging for existing networks to accurately distinguish objects from shallow image features. These factors lead many object detection networks to produce missed detections and false alarms, particularly for densely arranged objects and small objects. To address these problems, this paper proposes a feature enhancement feedforward network (FEFN) based on a lightweight channel feedforward module (LCFM) and a feature enhancement module (FEM). First, the FEFN captures shallow spatial information in images through the lightweight channel feedforward module, which can extract the edge information of small objects such as ships. Next, it enhances feature interaction and representation through the feature enhancement module, which yields more accurate detection results for densely arranged objects and small objects. Finally, comparative experiments on two publicly available, challenging remote sensing datasets demonstrate the effectiveness of the proposed method. © 2024 by the authors.
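
The abstract describes the LCFM as a lightweight module that captures shallow spatial information such as ship edges, but does not spell out its layers. The block below is a hypothetical sketch of one common lightweight pattern (pointwise expansion, depthwise 3×3 convolution, pointwise projection, residual connection); the real LCFM may be structured differently.

```python
import torch
import torch.nn as nn

class LightweightChannelFeedforward(nn.Module):
    """Hypothetical stand-in for the paper's LCFM: a depthwise-separable feedforward block."""

    def __init__(self, channels: int, expansion: int = 2):
        super().__init__()
        hidden = channels * expansion
        self.block = nn.Sequential(
            nn.Conv2d(channels, hidden, kernel_size=1),               # pointwise expansion
            nn.Conv2d(hidden, hidden, kernel_size=3, padding=1,
                      groups=hidden),                                  # depthwise 3x3 for edge cues
            nn.GELU(),
            nn.Conv2d(hidden, channels, kernel_size=1),                # pointwise projection
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.block(x)                                       # residual keeps the block stable

x = torch.randn(1, 64, 80, 80)
print(LightweightChannelFeedforward(64)(x).shape)  # torch.Size([1, 64, 80, 80])
```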

Keyword :

Feature extraction; Image enhancement; Military photography; Object detection; Object recognition; Remote sensing

Cite:

GB/T 7714 Wu, Jing , Ni, Rixiang , Chen, Zhenhua et al. FEFN: Feature Enhancement Feedforward Network for Lightweight Object Detection in Remote Sensing Images [J]. | Remote Sensing , 2024 , 16 (13) .
MLA Wu, Jing et al. "FEFN: Feature Enhancement Feedforward Network for Lightweight Object Detection in Remote Sensing Images" . | Remote Sensing 16 . 13 (2024) .
APA Wu, Jing , Ni, Rixiang , Chen, Zhenhua , Huang, Feng , Chen, Liqiong . FEFN: Feature Enhancement Feedforward Network for Lightweight Object Detection in Remote Sensing Images . | Remote Sensing , 2024 , 16 (13) .

Towards complex scenes: A deep learning-based camouflaged people detection method for snapshot multispectral images
Journal Article | 2024, 34 (4), 269-281 | 防务技术

Abstract :

Camouflaged people are extremely adept at actively concealing themselves by effectively exploiting cover and the surrounding environment. Despite advances in optical detection capabilities through imaging systems, including spectral, polarization, and infrared technologies, there is still no effective real-time method for accurately detecting small, effectively camouflaged people in complex real-world scenes. Here, this study proposes a snapshot multispectral image-based camouflaged people detection model, multispectral YOLO (MS-YOLO), which utilizes the SPD-Conv and SimAM modules to effectively represent targets and suppress background interference by exploiting spatial-spectral target information. In addition, the study constructs the first real-shot multispectral camouflaged people dataset (MSCPD), which encompasses diverse scenes, target scales, and attitudes. To minimize information redundancy, MS-YOLO selects an optimal subset of 12 bands with strong feature representation and minimal inter-band correlation as input. In experiments on the MSCPD, MS-YOLO achieves a mean Average Precision of 94.31% and real-time detection at 65 frames per second, which confirms the effectiveness and efficiency of our method in detecting camouflaged people in various typical desert and forest scenes. Our approach offers valuable support for improving the perception capabilities of unmanned aerial vehicles in detecting enemy forces and rescuing personnel on the battlefield.

Cite:

GB/T 7714 Shu Wang , Dawei Zeng , Yixuan Xu et al. Towards complex scenes:A deep learning-based camouflaged people detection method for snapshot multispectral images [J]. | 防务技术 , 2024 , 34 (4) : 269-281 .
MLA Shu Wang et al. "Towards complex scenes:A deep learning-based camouflaged people detection method for snapshot multispectral images" . | 防务技术 34 . 4 (2024) : 269-281 .
APA Shu Wang , Dawei Zeng , Yixuan Xu , Gonghan Yang , Feng Huang , Liqiong Chen . Towards complex scenes:A deep learning-based camouflaged people detection method for snapshot multispectral images . | 防务技术 , 2024 , 34 (4) , 269-281 .

RSHAN: Image super-resolution network based on residual separation hybrid attention module SCIE
Journal Article | 2023, 122 | ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE
WoS CC Cited Count: 8

Abstract :

Transformer has become one of the main architectures in deep learning, showing impressive performance in various vision tasks, especially image super-resolution (SR). However, due to the use of high-resolution input images, most current Transformer-based image super-resolution models have a large number of parameters and high computational complexity. Moreover, some components employed in the Transformer may be redundant for SR tasks, which may limit SR performance. In this work, we propose an efficient and concise model for image super-resolution tasks termed the Residual Separation Hybrid Attention Network (RSHAN), which aims to solve the problems of redundant components and the insufficient ability of the Transformer to extract high-frequency information. Specifically, we present the Residual Separation Hybrid Attention Module (RSHAM), which fuses the local features extracted by a convolutional neural network (CNN) branch and the long-range dependencies extracted by Transformers to improve the performance of RSHAN. Extensive experiments on numerous benchmark datasets show that the proposed method outperforms state-of-the-art SR methods by up to 0.11 dB in the peak signal-to-noise ratio (PSNR) metric, while the computational complexity and the inference time are reduced by 5% and 10%, respectively.
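
The abstract describes RSHAM as fusing local features from a CNN branch with long-range dependencies from a Transformer branch, without giving the exact layers. The sketch below only illustrates that parallel-fusion idea (a 3×3 convolution branch plus multi-head self-attention over flattened spatial tokens, combined residually); the actual RSHAM is very likely structured differently.

```python
import torch
import torch.nn as nn

class HybridLocalGlobalBlock(nn.Module):
    """Illustrative parallel fusion of a CNN branch and a self-attention branch.

    Not the paper's RSHAM; it only mirrors the described idea of combining local
    convolutional features with long-range attention features.
    """

    def __init__(self, channels: int, heads: int = 4):
        super().__init__()
        self.local = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        local = self.local(x)                                 # CNN branch: local features
        tokens = self.norm(x.flatten(2).transpose(1, 2))      # (B, H*W, C) spatial tokens
        global_feat, _ = self.attn(tokens, tokens, tokens)    # attention branch: long-range deps
        global_feat = global_feat.transpose(1, 2).reshape(b, c, h, w)
        return x + local + global_feat                        # residual fusion of both branches

print(HybridLocalGlobalBlock(32)(torch.randn(1, 32, 24, 24)).shape)
```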

Keyword :

Convolutional neural network; Image super-resolution; Residual separation hybrid attention module; Transformer

Cite:

GB/T 7714 Shen, Ying , Zheng, Weihuang , Chen, Liqiong et al. RSHAN: Image super-resolution network based on residual separation hybrid attention module [J]. | ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE , 2023 , 122 .
MLA Shen, Ying et al. "RSHAN: Image super-resolution network based on residual separation hybrid attention module" . | ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE 122 (2023) .
APA Shen, Ying , Zheng, Weihuang , Chen, Liqiong , Huang, Feng . RSHAN: Image super-resolution network based on residual separation hybrid attention module . | ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE , 2023 , 122 .

Reparameterizable Multibranch Bottleneck Network for Lightweight Image Super-Resolution SCIE
Journal Article | 2023, 23 (8) | SENSORS
WoS CC Cited Count: 3

Abstract :

Deployment of deep convolutional neural networks (CNNs) for single image super-resolution (SISR) on edge computing devices is mainly hampered by the huge computational cost. In this work, we propose a lightweight image super-resolution (SR) network based on a reparameterizable multibranch bottleneck module (RMBM). In the training phase, RMBM efficiently extracts high-frequency information by utilizing multibranch structures, including the bottleneck residual block (BRB), inverted bottleneck residual block (IBRB), and expand-squeeze convolution block (ESB). In the inference phase, the multibranch structures can be combined into a single 3×3 convolution to reduce the number of parameters without incurring any additional computational cost. Furthermore, a novel peak-structure-edge (PSE) loss is proposed to resolve the problem of oversmoothed reconstructed images while significantly improving image structure similarity. Finally, we optimize and deploy the algorithm on edge devices equipped with the Rockchip neural processing unit (RKNPU) to achieve real-time SR reconstruction. Extensive experiments on natural image datasets and remote sensing image datasets show that our network outperforms advanced lightweight SR networks in terms of objective evaluation metrics and subjective visual quality. The reconstruction results demonstrate that the proposed network can achieve higher SR performance with a model size of 98.1 K, which can be effectively deployed to edge computing devices.
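
The key claim is that the multibranch structure used in training collapses into a single 3×3 convolution at inference with no extra cost. The paper's exact BRB/IBRB/ESB branches are not reproduced here; the sketch below only shows the standard structural-reparameterization arithmetic for the simplest case, merging a parallel 3×3 branch and a 1×1 branch into one 3×3 kernel by zero-padding the 1×1 weights and summing.

```python
import torch
import torch.nn.functional as F

# Two training-time branches over the same input (biases omitted for brevity).
w3 = torch.randn(16, 8, 3, 3)   # 3x3 branch weights
w1 = torch.randn(16, 8, 1, 1)   # 1x1 branch weights

# Inference-time merge: pad the 1x1 kernel to 3x3 (centered) and add it to the 3x3 kernel.
w_merged = w3 + F.pad(w1, [1, 1, 1, 1])

x = torch.randn(2, 8, 32, 32)
y_train = F.conv2d(x, w3, padding=1) + F.conv2d(x, w1, padding=0)  # multibranch output
y_infer = F.conv2d(x, w_merged, padding=1)                          # single merged conv
print(torch.allclose(y_train, y_infer, atol=1e-5))  # True: identical outputs, one convolution
```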

Keyword :

edge computing device; lightweight image super-resolution; PSE loss; reparameterizable multibranch bottleneck module

Cite:

GB/T 7714 Shen, Ying , Zheng, Weihuang , Huang, Feng et al. Reparameterizable Multibranch Bottleneck Network for Lightweight Image Super-Resolution [J]. | SENSORS , 2023 , 23 (8) .
MLA Shen, Ying et al. "Reparameterizable Multibranch Bottleneck Network for Lightweight Image Super-Resolution" . | SENSORS 23 . 8 (2023) .
APA Shen, Ying , Zheng, Weihuang , Huang, Feng , Wu, Jing , Chen, Liqiong . Reparameterizable Multibranch Bottleneck Network for Lightweight Image Super-Resolution . | SENSORS , 2023 , 23 (8) .

A Survey of Deep Learning-Based Single-Frame Image Super-Resolution Reconstruction (基于深度学习的单帧图像超分辨率重建综述) CSCD PKU
Journal Article | 2022, 50 (9), 2265-2294 | 电子学报

Abstract :

Image super-resolution reconstruction is one of the fundamental image processing techniques in computer vision; it not only raises image resolution and improves image quality but also supports other computer vision tasks. In recent years, with the rise of artificial intelligence, deep learning-based image super-resolution reconstruction has made remarkable progress. Building on a brief review of image super-resolution methods, this paper comprehensively surveys the technical architecture and research history of deep learning-based single-frame image super-resolution reconstruction, including dataset construction, basic network model frameworks, and the subjective and objective metrics used for image quality assessment. It focuses on convolutional neural network-based, generative adversarial network-based, and Transformer-based methods, categorized by network structure and reconstruction performance, and reviews and compares the related network models. Finally, based on these models and the super-resolution reconstruction challenge competitions, it discusses future development trends of image super-resolution reconstruction.
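
As a concrete reference for the objective evaluation metrics the survey covers, the snippet below computes PSNR and SSIM between a reconstruction and its ground truth with scikit-image; the metric definitions are standard and not specific to any model reviewed.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(0)
gt = rng.random((128, 128)).astype(np.float32)        # stand-in ground-truth image
sr = np.clip(gt + 0.05 * rng.standard_normal(gt.shape).astype(np.float32), 0, 1)

psnr = peak_signal_noise_ratio(gt, sr, data_range=1.0)   # higher is better, in dB
ssim = structural_similarity(gt, sr, data_range=1.0)     # 1.0 means identical structure
print(f"PSNR = {psnr:.2f} dB, SSIM = {ssim:.4f}")
```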

Keyword :

Transformer; single-frame image; convolutional neural network; challenge competitions; deep learning; generative adversarial network; super-resolution reconstruction

Cite:

GB/T 7714 吴靖 , 叶晓晶 , 黄峰 et al. 基于深度学习的单帧图像超分辨率重建综述 [J]. | 电子学报 , 2022 , 50 (9) : 2265-2294 .
MLA 吴靖 et al. "基于深度学习的单帧图像超分辨率重建综述" . | 电子学报 50 . 9 (2022) : 2265-2294 .
APA 吴靖 , 叶晓晶 , 黄峰 , 陈丽琼 , 王志锋 , 刘文犀 . 基于深度学习的单帧图像超分辨率重建综述 . | 电子学报 , 2022 , 50 (9) , 2265-2294 .

Residual Triplet Attention Network for Single-Image Super-Resolution SCIE
Journal Article | 2021, 10 (17) | ELECTRONICS
WoS CC Cited Count: 3

Abstract :

Single-image super-resolution (SISR) techniques have developed rapidly with the remarkable progress of convolutional neural networks (CNNs). Previous CNN-based SISR techniques mainly focus on network design while ignoring the interactions and interdependencies between different dimensions of the features in the middle layers, consequently hindering the powerful learning ability of CNNs. To address this problem effectively, a residual triplet attention network (RTAN) for efficient interactions of the feature information is proposed. Specifically, we develop an innovative multiple-nested residual group (MNRG) structure to improve the learning ability for extracting high-frequency information and to train a deeper and more stable network. Furthermore, we present a novel lightweight residual triplet attention module (RTAM) to obtain the cross-dimensional attention weights of the features. The RTAM combines two cross-dimensional interaction blocks (CDIBs) and one spatial attention block (SAB) based on the residual module. Therefore, the RTAM is capable not only of capturing the cross-dimensional interactions and interdependencies of the features but also of utilizing the spatial information of the features. The simulation results and analysis show the superiority of the proposed RTAN over state-of-the-art SISR networks in terms of both evaluation metrics and visual results.
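
RTAM is described as combining two cross-dimensional interaction blocks and a spatial attention block; the exact configuration is not given in the abstract. The sketch below shows the general cross-dimensional interaction idea from the triplet-attention literature: permute the (C, H, W) tensor so a different pair of dimensions interacts, pool the remaining dimension to two channels (max and mean), and gate with a 7×7 convolution. Treat it as an assumption-labelled illustration, not the paper's RTAM.

```python
import torch
import torch.nn as nn

class CrossDimensionInteraction(nn.Module):
    """Triplet-attention-style interaction between the channel and height dimensions.

    Illustrative only; the paper's CDIB may use different pooling or kernel sizes.
    """

    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = x.permute(0, 2, 1, 3)                                # (B, H, C, W): H now acts as "channels"
        pooled = torch.cat([y.max(dim=1, keepdim=True).values,    # Z-pool: max and mean over H
                            y.mean(dim=1, keepdim=True)], dim=1)
        gate = torch.sigmoid(self.conv(pooled))                   # (B, 1, C, W) attention weights
        return (y * gate).permute(0, 2, 1, 3)                     # back to (B, C, H, W)

print(CrossDimensionInteraction()(torch.randn(1, 16, 32, 32)).shape)
```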

Keyword :

attention mechanism; convolutional neural networks; deep learning; image super-resolution

Cite:

GB/T 7714 Huang, Feng , Wang, Zhifeng , Wu, Jing et al. Residual Triplet Attention Network for Single-Image Super-Resolution [J]. | ELECTRONICS , 2021 , 10 (17) .
MLA Huang, Feng et al. "Residual Triplet Attention Network for Single-Image Super-Resolution" . | ELECTRONICS 10 . 17 (2021) .
APA Huang, Feng , Wang, Zhifeng , Wu, Jing , Shen, Ying , Chen, Liqiong . Residual Triplet Attention Network for Single-Image Super-Resolution . | ELECTRONICS , 2021 , 10 (17) .

Camouflaged Target Detection Based on Snapshot Multispectral Imaging SCIE
Journal Article | 2021, 13 (19) | REMOTE SENSING

Abstract :

The spectral information contained in hyperspectral images (HSI) distinguishes the intrinsic properties of a target from the background, which is widely used in remote sensing. However, the low imaging speed and high data redundancy caused by the high spectral resolution of imaging spectrometers limit their application in scenarios with real-time requirements. In this work, we achieve precise detection of camouflaged targets based on snapshot multispectral imaging technology and band selection methods in urban-related scenes. Specifically, the camouflaged target detection algorithm combines the constrained energy minimization (CEM) algorithm with an improved maximum between-class variance (OTSU) algorithm (t-OTSU), which is proposed to obtain initial target detection results and adaptively segment the target region. Moreover, an object region extraction (ORE) algorithm is proposed to obtain a complete target contour, which improves the target detection capability of multispectral images (MSI). The experimental results show that the proposed algorithm can detect different camouflaged targets using only four bands. The detection accuracy is above 99%, and the false alarm rate is below 0.2%. The research achieves effective detection of camouflaged targets and has the potential to provide a new means for real-time multispectral sensing in complex scenes.
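
Constrained energy minimization has a closed-form filter, w = R⁻¹d / (dᵀR⁻¹d), where R is the sample autocorrelation matrix of the multispectral pixels and d is the target spectral signature; thresholding its output with OTSU then separates target from background. The sketch below implements that standard CEM + OTSU pair in NumPy and scikit-image; the paper's t-OTSU refinement and ORE contour step are not reproduced.

```python
import numpy as np
from skimage.filters import threshold_otsu

def cem_detect(cube: np.ndarray, d: np.ndarray) -> np.ndarray:
    """Standard CEM detector on an (H, W, B) multispectral cube with target signature d of length B."""
    h, w, b = cube.shape
    X = cube.reshape(-1, b).astype(np.float64)            # pixels as rows
    R = X.T @ X / X.shape[0]                              # sample autocorrelation matrix (B, B)
    R_inv_d = np.linalg.solve(R + 1e-6 * np.eye(b), d)    # small ridge for numerical stability
    wf = R_inv_d / (d @ R_inv_d)                          # CEM filter: w = R^-1 d / (d^T R^-1 d)
    return (X @ wf).reshape(h, w)                         # detection score map

# Toy 4-band cube, matching the four bands used in the paper's experiments.
rng = np.random.default_rng(1)
cube = rng.random((64, 64, 4))
target_sig = np.array([0.9, 0.2, 0.7, 0.4])
cube[30:33, 30:33, :] = target_sig                        # implant a small "camouflaged" patch
scores = cem_detect(cube, target_sig)
mask = scores > threshold_otsu(scores)                    # OTSU segmentation of the score map
print(scores[31, 31])                                     # implanted pixels respond with exactly 1 by the CEM constraint
```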

Keyword :

camouflaged target detection; CEM; OTSU; snapshot multispectral imaging; urban object-analysis

Cite:

GB/T 7714 Shen, Ying , Li, Jie , Lin, Wenfu et al. Camouflaged Target Detection Based on Snapshot Multispectral Imaging [J]. | REMOTE SENSING , 2021 , 13 (19) .
MLA Shen, Ying et al. "Camouflaged Target Detection Based on Snapshot Multispectral Imaging" . | REMOTE SENSING 13 . 19 (2021) .
APA Shen, Ying , Li, Jie , Lin, Wenfu , Chen, Liqiong , Huang, Feng , Wang, Shu . Camouflaged Target Detection Based on Snapshot Multispectral Imaging . | REMOTE SENSING , 2021 , 13 (19) .