Abstract:
Recent years have witnessed rapid evolution and promising performance of vision transformer (ViT)-based place recognizers, which aim at building a general system. However, state-of-the-art (SOTA) methods can hardly maintain their superiority in low-light conditions, which considerably limits the broadening of visual place recognition-related mobile robot applications. To perform robust visual place recognition in low-light scenes, this article proposes an end-to-end trainable dark-enhanced Net, which alleviates the impact of poor illumination and environmental noise. Specifically, a lightweight dark enhancement module, i.e., ResEM, is first trained to efficiently improve image illumination quality via residual-based adversarial learning. A dual-level sampling pyramid transformer, i.e., DSPFormer, is then constructed to extract discriminative features by aggregating reconstructed descriptors. Moreover, to improve the performance and reliability of place recognition, a reranking method based on cross-entropy loss is used for final place matching. To provide a comprehensive evaluation, we also build two challenging place benchmarks, namely, SimPlace and DarkPlace. Evaluations on both the public benchmarks and the newly built benchmarks show that the task-inspired design enables the recognizer to achieve significant nighttime performance improvements for robot place recognition compared with other top-ranked place recognizers. © 2024 IEEE.
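The abstract describes ResEM as a lightweight module that improves illumination by predicting a residual correction for the dark input. The following is a minimal PyTorch sketch of that general idea only; the class name, layer sizes, and output clamping are illustrative assumptions, not the authors' published architecture, and the adversarial training loop is omitted.

import torch
import torch.nn as nn

class ResEMSketch(nn.Module):
    """Hypothetical lightweight enhancer: dark input + learned residual."""
    def __init__(self, channels: int = 32):
        super().__init__()
        # Small convolutional body that predicts a per-pixel RGB residual.
        self.body = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, 3, 3, padding=1),
        )

    def forward(self, dark_img: torch.Tensor) -> torch.Tensor:
        # Add the predicted residual back to the low-light input, so the
        # network only has to learn the illumination correction itself.
        return torch.clamp(dark_img + self.body(dark_img), 0.0, 1.0)

if __name__ == "__main__":
    x = torch.rand(1, 3, 224, 224)   # fake low-light image in [0, 1]
    enhanced = ResEMSketch()(x)
    print(enhanced.shape)            # torch.Size([1, 3, 224, 224])

In a residual-plus-adversarial setup such as the one the abstract names, a discriminator would additionally be trained to distinguish enhanced outputs from well-lit images, with its loss backpropagated into the enhancer.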
Source:
IEEE Transactions on Industrial Informatics
ISSN: 1551-3203
Year: 2025
Issue: 2
Volume: 21
Page: 1359-1368
Impact Factor: 11.700 (JCR 2023)
CAS Journal Grade: 1