Query:
Scholar name: 吴衔誉 (Wu, Xianyu)
Abstract :
Current advancements in image processing have led to significant progress in polarization defogging methods. However, most existing approaches are not suitable for scenes containing targets with a high degree of polarization (DOP), because they rely on the assumption that the detected polarization information originates solely from the airlight. In this paper, a dual-polarization defogging method combining frequency division and blind separation of polarization information is proposed. To extract the polarization component of the direct transmission light from the detected polarized signal, blind separation of the overlapped polarized information is performed in the low-frequency domain based on visual perception. Subsequently, after estimating the airlight, a high-quality defogged image can be restored. Extensive experiments on real-world scenes and comparative tests confirm that the proposed method outperforms other competitive methods, particularly in reconstructing objects with high DOP. This work provides a quantitative approach for estimating the contributions of polarized light from different sources and further expands the application range of polarimetric defogging imaging. © 2024 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement
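As background for the model this abstract builds on, below is a minimal NumPy sketch of the classic two-polarizer defogging pipeline (Schechner-style), which assumes all detected polarization comes from the airlight; the paper's contribution, separating the target's polarization from the airlight's in the low-frequency domain, is not reproduced here. The airlight degree of polarization `p_A` and the airlight at infinity `A_inf` are assumed to be estimated separately, for example from a sky region.

```python
import numpy as np

def classic_dual_polarization_defog(I_max, I_min, p_A, A_inf, t_min=0.05):
    """Classic two-polarizer defogging (Schechner-style), for reference only.

    I_max, I_min : images taken through a polarizer at the orientations of
                   maximum / minimum intensity (float arrays in [0, 1]).
    p_A          : estimated degree of polarization of the airlight.
    A_inf        : airlight radiance at infinite distance.
    Assumes all detected polarization comes from the airlight, the very
    assumption the paper above relaxes for highly polarized targets.
    """
    I_total = I_max + I_min                      # total detected intensity
    A = (I_max - I_min) / max(p_A, 1e-6)         # estimated airlight component
    t = np.clip(1.0 - A / A_inf, t_min, 1.0)     # transmission map
    J = (I_total - A) / t                        # restored scene radiance
    return np.clip(J, 0.0, 1.0)
```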
Cite:
GB/T 7714 | Huang, Feng, Ke, Chaozhen, Wu, Xianyu, et al. Dual-polarization defogging method based on frequency division and blind separation of polarization information [J]. | OPTICS EXPRESS, 2024, 32(5). |
MLA | Huang, Feng, et al. "Dual-polarization defogging method based on frequency division and blind separation of polarization information." | OPTICS EXPRESS 32.5 (2024). |
APA | Huang, Feng, Ke, Chaozhen, Wu, Xianyu, Liu, Yu. Dual-polarization defogging method based on frequency division and blind separation of polarization information. | OPTICS EXPRESS, 2024, 32(5). |
Abstract :
This paper introduces a camera-array-based super-resolution color polarization imaging system designed to capture the color and polarization information of a scene simultaneously in a single shot. Existing snapshot color polarization imaging systems have complex structures and limited generalizability, which the proposed system overcomes. In addition, a novel reconstruction algorithm is designed to exploit the complementarity and correlation between the twelve channels of the acquired color polarization images for simultaneous super-resolution (SR) imaging and denoising. We propose a confidence-guided SR reconstruction algorithm based on guided filtering to enhance the constraint capability of the observed data. Additionally, by introducing adaptive parameters, we effectively balance the data-fidelity constraint and the nonlocal sparse tensor regularization constraint. Simulations were conducted to compare the proposed system with a color polarization camera. The results show that color polarization images generated by the proposed system and algorithm outperform those obtained from the color polarization camera and from state-of-the-art color polarization demosaicking algorithms. Moreover, the proposed algorithm also outperforms state-of-the-art SR algorithms based on deep learning. To evaluate the applicability of the proposed imaging system and reconstruction algorithm in practice, a prototype was constructed for color polarization image acquisition. Compared with conventional acquisition, the proposed solution demonstrates a significant improvement in the reconstructed color polarization images.
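To illustrate the kind of confidence-weighted data-fidelity constraint referred to above, here is a minimal gradient-descent SR sketch. The guided-filtering confidence map and the nonlocal sparse tensor regularizer of the paper are replaced with a generic per-pixel weight and a simple smoothness term, so this is only a structural illustration under those assumptions, not the authors' algorithm.

```python
import numpy as np

def downsample(x, s):
    return x[::s, ::s]

def upsample_zeros(x, s, shape):
    up = np.zeros(shape)
    up[::s, ::s] = x
    return up

def confidence_weighted_sr(y, conf, scale=2, lam=0.05, lr=0.5, iters=200):
    """Toy SR by weighted least squares:
    argmin_x sum conf*(Dx - y)^2 + lam*|grad x|^2.
    y: low-resolution observation; conf: per-pixel confidence, same shape as y."""
    H, W = y.shape[0] * scale, y.shape[1] * scale
    x = np.kron(y, np.ones((scale, scale)))        # nearest-neighbor initial upscaling
    for _ in range(iters):
        residual = downsample(x, scale) - y        # data-fidelity residual on the LR grid
        grad_data = upsample_zeros(conf * residual, scale, (H, W))
        # simple Laplacian smoothness term standing in for the nonlocal sparse tensor
        lap = (np.roll(x, 1, 0) + np.roll(x, -1, 0) +
               np.roll(x, 1, 1) + np.roll(x, -1, 1) - 4 * x)
        x -= lr * (grad_data - lam * lap)
    return x
```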
Cite:
GB/T 7714 | Huang, Feng, Chen, Yating, Wang, Xuesong, et al. Joint constraints of guided filtering based confidence and nonlocal sparse tensor for color polarization super-resolution imaging [J]. | OPTICS EXPRESS, 2024, 32(2): 2364-2391. |
MLA | Huang, Feng, et al. "Joint constraints of guided filtering based confidence and nonlocal sparse tensor for color polarization super-resolution imaging." | OPTICS EXPRESS 32.2 (2024): 2364-2391. |
APA | Huang, Feng, Chen, Yating, Wang, Xuesong, Wang, Shu, Wu, Xianyu. Joint constraints of guided filtering based confidence and nonlocal sparse tensor for color polarization super-resolution imaging. | OPTICS EXPRESS, 2024, 32(2), 2364-2391. |
Abstract :
Because capturing real optical flow is difficult, no existing work has captured real optical flow for infrared (IR) images or produced an optical flow dataset based on IR images, which limits the research and application of deep learning-based optical flow computation to RGB images. Therefore, in this paper, we propose a method to produce an optical flow dataset of IR images. We use an RGB-IR cross-modal image transformation network to transform existing RGB optical flow datasets in a principled way. The RGB-IR cross-modal image transformation is implemented with an improved Pix2Pix network, and in the experiments the network is validated and evaluated on the RGB-IR aligned bimodal dataset M3FD. RGB-IR cross-modal transformation is then performed on the existing RGB optical flow dataset KITTI, and an optical flow computation network is trained on the IR images generated by the transformation. Finally, the results of the optical flow network before and after this training are analyzed using the RGB-IR aligned bimodal data.
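A minimal PyTorch-style sketch of the dataset-generation idea described above: both frames of each RGB optical flow pair are translated to pseudo-IR with a Pix2Pix-style generator, and the original flow field is reused as ground truth because image-to-image translation leaves pixel correspondences unchanged. The generator module and the tensor shapes are illustrative assumptions, not the paper's implementation.

```python
import torch

@torch.no_grad()
def make_ir_flow_sample(generator, frame1_rgb, frame2_rgb, flow_gt):
    """Translate an RGB optical-flow training pair into a pseudo-IR pair.

    generator   : a trained Pix2Pix-style RGB->IR network (hypothetical here).
    frame1_rgb  : tensor of shape (1, 3, H, W), values in [-1, 1].
    frame2_rgb  : tensor of shape (1, 3, H, W).
    flow_gt     : tensor of shape (1, 2, H, W); reused as-is, because the
                  per-pixel motion field is unaffected by modality translation.
    """
    generator.eval()
    frame1_ir = generator(frame1_rgb)   # pseudo-IR rendering of frame 1
    frame2_ir = generator(frame2_rgb)   # pseudo-IR rendering of frame 2
    return frame1_ir, frame2_ir, flow_gt
```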
Keyword :
deep neural network; infrared image; optical flow
Cite:
GB/T 7714 | Huang, Feng, Huang, Wei, Wu, Xianyu. Enhancing Infrared Optical Flow Network Computation through RGB-IR Cross-Modal Image Generation [J]. | SENSORS, 2024, 24(5). |
MLA | Huang, Feng, et al. "Enhancing Infrared Optical Flow Network Computation through RGB-IR Cross-Modal Image Generation." | SENSORS 24.5 (2024). |
APA | Huang, Feng, Huang, Wei, Wu, Xianyu. Enhancing Infrared Optical Flow Network Computation through RGB-IR Cross-Modal Image Generation. | SENSORS, 2024, 24(5). |
Abstract :
Most state-of-the-art defogging models in the literature assume that the attenuation coefficient is the same for all spectral channels, which inevitably leads to spectral distortion and information bias. To address this issue, this paper proposes a defogging method that accounts for the differences between the extinction coefficients of the multispectral channels as light travels through fog. The spatially distributed transmission map of each spectral channel is then reconstructed to restore the fog-degraded images. Experimental results on various realistic complex scenes show that the proposed method outperforms state-of-the-art techniques in restoring lost detail, compensating for degraded spectral information, and recognizing targets hidden in uniform ground fog. In addition, this work provides a method to characterize an intrinsic property of fog, expressed as multispectral relative extinction coefficients, which serves as a foundation for further reconstruction of multispectral information. © 2024 Optica Publishing Group
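Below is a minimal NumPy sketch of the per-channel atmospheric scattering model this abstract builds on. The relative extinction coefficients `beta_rel` (each channel's extinction coefficient divided by that of a reference channel) and the reference transmission map are assumed to be estimated elsewhere; this illustrates the model only, not the authors' estimation procedure.

```python
import numpy as np

def defog_multispectral(I, A, t_ref, beta_rel, t_min=0.05):
    """Per-channel restoration with wavelength-dependent extinction.

    Scattering model per channel c:  I_c = J_c * t_c + A_c * (1 - t_c),
    with t_c = exp(-beta_c * d) = t_ref ** (beta_c / beta_ref).

    I        : (H, W, C) foggy multispectral image.
    A        : (C,) airlight per channel.
    t_ref    : (H, W) transmission map of the reference channel.
    beta_rel : (C,) relative extinction coefficients beta_c / beta_ref.
    """
    J = np.empty_like(I, dtype=float)
    for c in range(I.shape[2]):
        t_c = np.clip(t_ref ** beta_rel[c], t_min, 1.0)   # channel transmission
        J[..., c] = (I[..., c] - A[c]) / t_c + A[c]        # invert the model
    return np.clip(J, 0.0, 1.0)
```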
Cite:
GB/T 7714 | Huang, Feng, Ke, Chaozhen, Wu, Xianyu, et al. Multispectral image defogging based on a wavelength-dependent extinction coefficient model in fog [J]. | JOURNAL OF THE OPTICAL SOCIETY OF AMERICA A-OPTICS IMAGE SCIENCE AND VISION, 2024, 41(4): 631-642. |
MLA | Huang, Feng, et al. "Multispectral image defogging based on a wavelength-dependent extinction coefficient model in fog." | JOURNAL OF THE OPTICAL SOCIETY OF AMERICA A-OPTICS IMAGE SCIENCE AND VISION 41.4 (2024): 631-642. |
APA | Huang, Feng, Ke, Chaozhen, Wu, Xianyu, Guo, Cuixia, Liu, Yu. Multispectral image defogging based on a wavelength-dependent extinction coefficient model in fog. | JOURNAL OF THE OPTICAL SOCIETY OF AMERICA A-OPTICS IMAGE SCIENCE AND VISION, 2024, 41(4), 631-642. |
Abstract :
The performance of emerging infrared polarization remote sensing systems is limited by current infrared polarization imaging sensors, which cannot produce high-resolution (HR) infrared polarization images. The lack of HR infrared polarization imaging sensors and systems hinders the development and application of infrared polarization imaging technology. Existing infrared image super-resolution (SR) methods fail to improve the resolution of infrared polarization images (IRPIs) while preserving the polarization information inherent in them. Aiming at accurate HR infrared polarization images, this study therefore proposes a deep-learning-based SR method, SwinIPISR, that improves infrared polarization image resolution while preserving the polarization information of the target or scene. The performance of SwinIPISR was verified and compared with existing SR methods. In contrast to other methods, SwinIPISR not only improves image resolution but also retains the polarization information of the scene and objects in the polarization image. The impact of the network depth of SwinIPISR on SR performance was further evaluated through experiments. The experimental results confirm the effectiveness of SwinIPISR in enhancing the image resolution and visual quality of infrared polarization images without compromising the polarization information.
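To make concrete what preserving polarization information means in the evaluation described above, the sketch below computes the Stokes parameters, degree of linear polarization (DoLP), and angle of polarization (AoP) from the four linear polarization channels of a DoFP sensor; these are the quantities an SR method must keep consistent. This is standard polarimetry, not code from the paper.

```python
import numpy as np

def polarization_parameters(I0, I45, I90, I135, eps=1e-6):
    """Stokes parameters, DoLP and AoP from four linear-polarizer channels."""
    S0 = 0.5 * (I0 + I45 + I90 + I135)            # total intensity
    S1 = I0 - I90                                 # horizontal vs. vertical
    S2 = I45 - I135                               # +45 vs. -45 degrees
    dolp = np.sqrt(S1**2 + S2**2) / (S0 + eps)    # degree of linear polarization
    aop = 0.5 * np.arctan2(S2, S1)                # angle of polarization (radians)
    return S0, S1, S2, dolp, aop
```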
Keyword :
Deep learning; Feature extraction; Image reconstruction; Image resolution; Imaging; infrared polarization image; infrared polarization sensors; Sensors; super-resolution (SR) reconstruction; Training; Transformers
Cite:
GB/T 7714 | Wu, Xianyu, Zhou, Bin, Wang, Xuesong, et al. SwinIPISR: A Super-Resolution Method for Infrared Polarization Imaging Sensors via Swin Transformer [J]. | IEEE SENSORS JOURNAL, 2024, 24(1): 468-477. |
MLA | Wu, Xianyu, et al. "SwinIPISR: A Super-Resolution Method for Infrared Polarization Imaging Sensors via Swin Transformer." | IEEE SENSORS JOURNAL 24.1 (2024): 468-477. |
APA | Wu, Xianyu, Zhou, Bin, Wang, Xuesong, Peng, Jian, Lin, Peng, Cao, Rongjin, et al. SwinIPISR: A Super-Resolution Method for Infrared Polarization Imaging Sensors via Swin Transformer. | IEEE SENSORS JOURNAL, 2024, 24(1), 468-477. |
Abstract :
Most existing super-resolution (SR) imaging systems, inspired by the bionic compound eye, use image registration and reconstruction algorithms to overcome the angular resolution limitations of individual imaging systems. This article introduces a multi-aperture, multi-focal-length imaging system and a multi-focal-length image super-resolution algorithm that mimic the foveal imaging of the human eye. Experimental results demonstrate that, with the proposed imaging system and an SR algorithm inspired by the human visual system, the spatial resolution of the foveal region can be enhanced by up to 4× compared with the original acquired image. These findings validate the effectiveness of the proposed imaging system and computational imaging algorithm in enhancing image texture and spatial resolution.
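A rough NumPy sketch of the local-gradient weighting suggested above: once the long-focal-length image has been registered and resampled onto the foveal region of the wide-field image, pixels with stronger local gradients in the telephoto view are trusted more. The registration step and the paper's full SR reconstruction are omitted, and the blending rule is an illustrative assumption.

```python
import numpy as np

def local_gradient_weight(img, eps=1e-6):
    """Normalized local gradient magnitude, used as a per-pixel confidence."""
    gy, gx = np.gradient(img)
    mag = np.sqrt(gx**2 + gy**2)
    return mag / (mag.max() + eps)

def fuse_foveal_region(wide, tele_registered, fovea_slice):
    """Blend registered telephoto detail into the foveal region of the wide image.

    tele_registered must already be registered and resampled to the size of
    wide[fovea_slice], e.g. fovea_slice = (slice(r0, r1), slice(c0, c1)).
    """
    out = wide.copy()
    w = local_gradient_weight(tele_registered)     # trust telephoto where it has texture
    region = wide[fovea_slice]
    out[fovea_slice] = w * tele_registered + (1.0 - w) * region
    return out
```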
Cite:
GB/T 7714 | Huang, Feng, Wang, Xuesong, Chen, Yating, et al. Bio-inspired foveal super-resolution method for multi-focal-length images based on local gradient constraints [J]. | OPTICS EXPRESS, 2024, 32(11): 19333-19351. |
MLA | Huang, Feng, et al. "Bio-inspired foveal super-resolution method for multi-focal-length images based on local gradient constraints." | OPTICS EXPRESS 32.11 (2024): 19333-19351. |
APA | Huang, Feng, Wang, Xuesong, Chen, Yating, Wu, Xianyu. Bio-inspired foveal super-resolution method for multi-focal-length images based on local gradient constraints. | OPTICS EXPRESS, 2024, 32(11), 19333-19351. |
Abstract :
Infrared polarization (IRP) division-of-focal-plane (DoFP) imaging technology has gained attention, but limited resolution due to sensor size hinders its development. High-resolution visible light (VIS) images are easily obtained, making it valuable to use VIS images to enhance IRP super-resolution (SR). However, IRP DoFP SR is more challenging than infrared SR due to the need for accurate polarization reconstruction. Therefore, this paper proposes an effective multi-modal SR network, integrating high-resolution VIS image constraints for IRP DoFP image reconstruction, and incorporating polarization information as a component of the loss function to achieve end-to-end IRP SR. For the multi-modal IRP SR, a benchmark dataset was created, which includes 1559 pairs of registered images. Experiments on this dataset demonstrate that the proposed method effectively utilizes VIS images to restore polarization information in IRP images, achieving a 4× magnification. Results show superior quantitative and visual evaluations compared to other methods. © 2024 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement.
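The abstract mentions incorporating polarization information into the loss function; below is a minimal PyTorch-style sketch of such a composite loss, combining a pixel-wise term on the four polarization channels with a term on the Stokes components derived from them. The weight `lambda_pol` and the exact formulation are assumptions, not the LVTSR loss.

```python
import torch
import torch.nn.functional as F

def stokes(x):
    """x: (N, 4, H, W) with channels ordered I0, I45, I90, I135."""
    i0, i45, i90, i135 = x[:, 0], x[:, 1], x[:, 2], x[:, 3]
    s0 = 0.5 * (i0 + i45 + i90 + i135)
    s1 = i0 - i90
    s2 = i45 - i135
    return s0, s1, s2

def polarization_aware_loss(pred, target, lambda_pol=0.1):
    """L1 on the polarization channels plus L1 on the derived Stokes components."""
    pixel_term = F.l1_loss(pred, target)
    sp, tp = stokes(pred), stokes(target)
    stokes_term = sum(F.l1_loss(p, t) for p, t in zip(sp, tp)) / 3.0
    return pixel_term + lambda_pol * stokes_term
```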
Cite:
GB/T 7714 | Wang, X., Chen, Y., Peng, J., et al. LVTSR: learning visible image texture network for infrared polarization super-resolution imaging [J]. | Optics Express, 2024, 32(17): 29078-29098. |
MLA | Wang, X., et al. "LVTSR: learning visible image texture network for infrared polarization super-resolution imaging." | Optics Express 32.17 (2024): 29078-29098. |
APA | Wang, X., Chen, Y., Peng, J., Chen, J., Huang, F., Wu, X. LVTSR: learning visible image texture network for infrared polarization super-resolution imaging. | Optics Express, 2024, 32(17), 29078-29098. |
Abstract :
The fusion of multi-modal images into a single image that preserves both the unique features of each modality and the features shared across modalities is a challenging task, particularly for infrared (IR)-visible image fusion. In addition, the presence of polarization and IR radiation information in images obtained from IR polarization sensors further complicates the multi-modal image-fusion process. This study proposes a fusion network designed to overcome the challenges associated with integrating low-resolution IR, IR polarization, and high-resolution visible (VIS) images. By introducing cross-attention modules and a multi-stage fusion approach, the network effectively extracts and fuses features from different modalities, fully expressing the diversity of the images. The network learns an end-to-end mapping from source to fused images using a loss function, eliminating the need for ground-truth fused images. Experimental results on public datasets and remote-sensing field-test data demonstrate that the proposed methodology achieves commendable qualitative and quantitative results, with the gradient-based fusion performance QAB/F, mutual information (MI), and QCB values exceeding the second-best values by 0.20, 0.94, and 0.04, respectively. This study provides a comprehensive representation of target-scene information, resulting in enhanced image quality and improved object identification capabilities. In addition, outdoor and VIS image datasets are produced, providing a data foundation and reference for future research in related fields. © 2024 Elsevier B.V.
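A minimal PyTorch sketch of the cross-attention idea mentioned above, in which features of one modality (for example, IR polarization) query features of another (for example, visible light) before fusion; the module shapes and its placement in the network are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    """Let modality A attend to modality B's features before fusion."""

    def __init__(self, channels, num_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, feat_a, feat_b):
        # feat_a, feat_b: (N, C, H, W) feature maps from two modalities
        n, c, h, w = feat_a.shape
        qa = feat_a.flatten(2).transpose(1, 2)   # (N, HW, C) queries from modality A
        kb = feat_b.flatten(2).transpose(1, 2)   # (N, HW, C) keys/values from modality B
        fused, _ = self.attn(qa, kb, kb)         # A attends to B
        fused = self.norm(fused + qa)            # residual connection + layer norm
        return fused.transpose(1, 2).reshape(n, c, h, w)
```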
Keyword :
Image fusion; Infrared (IR) polarization; IR polarization-visible image fusion; Unsupervised learning
Cite:
GB/T 7714 | Wang, X., Zhou, B., Peng, J., et al. Enhancing three-source cross-modality image fusion with improved DenseNet for infrared polarization and visible light images [J]. | Infrared Physics and Technology, 2024, 141. |
MLA | Wang, X., et al. "Enhancing three-source cross-modality image fusion with improved DenseNet for infrared polarization and visible light images." | Infrared Physics and Technology 141 (2024). |
APA | Wang, X., Zhou, B., Peng, J., Huang, F., Wu, X. Enhancing three-source cross-modality image fusion with improved DenseNet for infrared polarization and visible light images. | Infrared Physics and Technology, 2024, 141. |
Abstract :
Although multispectral and hyperspectral imaging are applied in numerous fields, existing spectral imaging systems suffer from either low temporal or low spatial resolution. In this study, a new multispectral imaging system, the camera-array-based multispectral super-resolution imaging system (CAMSRIS), is proposed that can simultaneously achieve multispectral imaging with high temporal and spatial resolution. The proposed registration algorithm is used to align pairs of peripheral and central view images. A novel super-resolution, spectral-clustering-based image reconstruction algorithm was developed for CAMSRIS to improve the spatial resolution of the acquired images and preserve the exact spectral information without introducing false information. The reconstructed results show that the spatial and spectral quality and the operational efficiency of the proposed system are better than those of a multispectral filter array (MSFA) on different multispectral datasets. The PSNR of the multispectral super-resolution images obtained by the proposed method was higher than that of GAP-TV and DeSCI by 2.03 and 1.93 dB, respectively, and the execution time was shortened by approximately 54.55 s and 9820.19 s, respectively, on the CAMSI dataset. The feasibility of the proposed system was verified in practical applications on different scenes captured by the self-built system.
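As a rough illustration of the spectral-clustering step named in the title and abstract above, the sketch below groups pixels by their multispectral signatures; in the paper this kind of grouping drives the subsequent super-resolution reconstruction, which is not reproduced here. The clustering backend (scikit-learn) and the pixelwise feature choice are assumptions.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def cluster_spectral_signatures(ms_image, n_clusters=8, max_pixels=5000, seed=0):
    """Group pixels of a (H, W, C) multispectral image by spectral similarity.

    Spectral clustering scales poorly with pixel count, so a random subset is
    clustered; the sampled indices and their labels are returned.
    """
    h, w, c = ms_image.shape
    signatures = ms_image.reshape(-1, c).astype(float)   # one C-dim signature per pixel
    rng = np.random.default_rng(seed)
    idx = rng.choice(h * w, size=min(max_pixels, h * w), replace=False)
    labels = SpectralClustering(
        n_clusters=n_clusters, affinity="nearest_neighbors", random_state=seed
    ).fit_predict(signatures[idx])
    return idx, labels
```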
Keyword :
adaptive kernel; computational imaging; hierarchical clustering; multiple local-geometric transformations; Snapshot multispectral camera array; spectral clustering super-resolution
Cite:
GB/T 7714 | Huang, Feng, Chen, Yating, Wang, Xuesong, et al. Spectral Clustering Super-Resolution Imaging Based on Multispectral Camera Array [J]. | IEEE TRANSACTIONS ON IMAGE PROCESSING, 2023, 32: 1257-1271. |
MLA | Huang, Feng, et al. "Spectral Clustering Super-Resolution Imaging Based on Multispectral Camera Array." | IEEE TRANSACTIONS ON IMAGE PROCESSING 32 (2023): 1257-1271. |
APA | Huang, Feng, Chen, Yating, Wang, Xuesong, Wang, Shu, Wu, Xianyu. Spectral Clustering Super-Resolution Imaging Based on Multispectral Camera Array. | IEEE TRANSACTIONS ON IMAGE PROCESSING, 2023, 32, 1257-1271. |
Abstract :
In the design and accuracy verification of unmanned aerial vehicle (UAV) visual navigation algorithms, developing a high-precision, high-reliability semi-physical simulation platform is a significant engineering problem. In this study, a new UAV semi-physical simulation platform architecture is proposed, comprising a six-degree-of-freedom mechanical structure, a real-time control system, and real-time animation-simulation software. The mechanical structure can realistically simulate the flight attitude of a UAV in a three-dimensional space of 4 × 2 × 1.4 m. Based on the designed mechanical structure and its dynamics, the control system and the real-time UAV flight-animation simulation were designed. Compared with conventional simulation systems, this system enables real-time flight-attitude simulation in a real physical environment and simultaneous flight-attitude simulation in a virtual animation space. The test results show that the repeated positioning accuracy of the three-axis rotary table reaches 0.006°, the repeated positioning accuracy of the three-axis translation table reaches 0.033 mm, and the dynamic positioning accuracy reaches 0.04° and 0.4 mm, which meets the simulation requirements of high-precision UAV visual navigation.
Keyword :
optical flow; semi-physical simulation platform; six degrees of freedom; unmanned aerial vehicle; visual navigation
Cite:
GB/T 7714 | Lin, Zhonglin, Wang, Weixiong, Li, Yufeng, et al. Design and Experimental Study of a Novel Semi-Physical Unmanned-Aerial-Vehicle Simulation Platform for Optical-Flow-Based Navigation [J]. | AEROSPACE, 2023, 10(2). |
MLA | Lin, Zhonglin, et al. "Design and Experimental Study of a Novel Semi-Physical Unmanned-Aerial-Vehicle Simulation Platform for Optical-Flow-Based Navigation." | AEROSPACE 10.2 (2023). |
APA | Lin, Zhonglin, Wang, Weixiong, Li, Yufeng, Zhang, Xinglong, Zhang, Tianhong, Wang, Haitao, et al. Design and Experimental Study of a Novel Semi-Physical Unmanned-Aerial-Vehicle Simulation Platform for Optical-Flow-Based Navigation. | AEROSPACE, 2023, 10(2). |