Abstract:
Infrared and visible image fusion aims to produce fused images that retain the rich texture details and high pixel intensities of the source images. In this article, we propose a dual-attention-based feature aggregation network for infrared and visible image fusion. Specifically, we first design a multibranch channel-attention-based feature aggregation block (MBCA) that generates multiple branches to suppress useless features from different aspects. This block also adaptively aggregates meaningful features by exploiting the interdependencies between channel features. To gather more meaningful features during the fusion process, we further design a global-local spatial-attention-based feature aggregation block (GLSA) for progressively integrating features of the source images. We then introduce multiscale structural similarity (MS-SSIM) as the loss function to evaluate structural differences between the fused image and the source images at multiple scales. In addition, the proposed network exhibits strong generalization ability: our fusion model is trained on the RoadScene dataset and tested directly on the TNO and MSRS datasets. Extensive experiments on these datasets demonstrate the superiority of our network over current state-of-the-art methods. The source code will be released at https://github.com/tangjunyang/Dualattention.
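The abstract does not give the internals of the MBCA block, but the channel-attention mechanism it builds on can be illustrated with a generic squeeze-and-excitation-style gate: spatial information is pooled per channel, a small bottleneck produces per-channel weights, and the features are rescaled. The sketch below is a minimal NumPy illustration of that idea, not the paper's actual multibranch design; the function name and random projection matrices (standing in for learned layers) are hypothetical.

```python
import numpy as np

def channel_attention(feat, reduction=4, rng=None):
    """Minimal squeeze-and-excitation-style channel attention (NumPy sketch).

    feat: array of shape (C, H, W). Returns reweighted features of the
    same shape. The two random projection matrices stand in for learned
    fully connected / 1x1 convolution layers.
    """
    rng = rng or np.random.default_rng(0)
    c = feat.shape[0]
    # Squeeze: global average pooling over the spatial dims -> (C,)
    z = feat.mean(axis=(1, 2))
    # Excitation: bottleneck MLP with ReLU, then a sigmoid gate
    w1 = rng.standard_normal((c // reduction, c)) * 0.1
    w2 = rng.standard_normal((c, c // reduction)) * 0.1
    h = np.maximum(w1 @ z, 0.0)            # ReLU
    s = 1.0 / (1.0 + np.exp(-(w2 @ h)))    # per-channel weights in (0, 1)
    # Rescale each channel map by its attention weight
    return feat * s[:, None, None]

feat = np.random.default_rng(1).standard_normal((8, 4, 4))
out = channel_attention(feat)
print(out.shape)  # (8, 4, 4)
```

In a trained network the projections would be learned jointly with the fusion objective (here, MS-SSIM against both source images), so that channels carrying texture or intensity cues receive higher weights.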
Source:
IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT
ISSN: 0018-9456
Year: 2023
Volume: 72
Impact Factor: 5.600 (JCR@2023)
ESI Discipline: ENGINEERING
ESI HC Threshold: 35
JCR Journal Grade: 1
CAS Journal Grade: 2
Cited Count:
WoS CC Cited Count: 9
SCOPUS Cited Count: 11
ESI Highly Cited Papers on the List: 0