Abstract:
Inadequate exposure of imaging devices in low-light environments results in a loss of image information and significantly degrades image quality. However, current low-light image enhancement algorithms commonly suffer from color distortion and the loss of fine details and textures. In this paper, we propose a frequency-guided dual-collapse Transformer (FDCFormer) network. First, to address color distortion after enhancement, we propose a dual-collapse Transformer that effectively aggregates features along both the spatial and channel dimensions, thereby capturing global information. Second, because enhancement in the spatial domain alone often fails to preserve fine details and textures, we design multiple mixed residual fast Fourier transform blocks as an additional frequency-guidance branch that focuses on local detail at image edges. Additionally, we employ an adaptive dual-domain information fusion module that combines spatial-domain and frequency-domain information to enrich the final output features. Extensive experiments on multiple publicly available datasets demonstrate that our FDCFormer outperforms state-of-the-art methods, exceeding Retinexformer by up to 0.93 dB on average across five paired datasets. When employed as a preprocessing step for object detection in dark scenes, our method improves mean average precision (mAP) by 1.9% over the baseline model on the ExDark dataset, demonstrating its practical value. The corresponding code will be available at https://github.com/Fly175/FDCFormer.
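The frequency-guidance branch described in the abstract is built from residual fast Fourier transform blocks. As a rough illustrative sketch only (the exact FDCFormer block design is not given here), the following PyTorch snippet shows the general pattern of such a block: features are transformed with a 2-D FFT, filtered in the frequency domain, transformed back, and fused with a spatial convolution path through a residual connection. All class names, layer choices, and channel sizes are hypothetical.

```python
# Minimal sketch of a residual FFT ("frequency guidance") block in PyTorch.
# Names and layer choices are illustrative, not taken from the FDCFormer paper.
import torch
import torch.nn as nn


class ResidualFFTBlock(nn.Module):
    """Processes features in both the spatial and frequency domains and
    merges them through a residual connection."""

    def __init__(self, channels: int):
        super().__init__()
        # Spatial branch: plain 3x3 convolutions.
        self.spatial = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        # Frequency branch: 1x1 convolutions applied to the real/imaginary
        # parts of the 2-D FFT, stacked along the channel dimension.
        self.freq = nn.Sequential(
            nn.Conv2d(channels * 2, channels * 2, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels * 2, channels * 2, 1),
        )
        self.fuse = nn.Conv2d(channels * 2, channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        _, _, h, w = x.shape
        # Spatial-domain path.
        s = self.spatial(x)
        # Frequency-domain path: FFT -> filter real/imag parts -> inverse FFT.
        f = torch.fft.rfft2(x, norm="ortho")
        f = torch.cat([f.real, f.imag], dim=1)
        f = self.freq(f)
        real, imag = f.chunk(2, dim=1)
        f = torch.fft.irfft2(torch.complex(real, imag), s=(h, w), norm="ortho")
        # Fuse both domains and add the residual input.
        return x + self.fuse(torch.cat([s, f], dim=1))


if __name__ == "__main__":
    block = ResidualFFTBlock(32)
    y = block(torch.randn(1, 32, 64, 64))
    print(y.shape)  # torch.Size([1, 32, 64, 64])
```

Operating on the FFT of the feature map gives every filter a global receptive field, which is why such blocks are commonly used to complement spatial convolutions when preserving edge and texture detail.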
Source: ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE
ISSN: 0952-1976
Year: 2025
Volume: 142
Impact Factor: 7.500 (JCR@2023)
CAS Journal Grade: 1