Query:
Scholar name: Wu Jing (吴靖)
Abstract :
Objective Infrared and visible light images exhibit significant differences in spectral properties due to their distinct imaging mechanisms. These differences often result in a high mismatch rate of feature points between the two types of images. Currently, widely used mismatch rejection algorithms, such as random sample consensus (RANSAC) and its variants, typically employ a strategy of random sampling combined with iterative optimization modeling for consistency fitting. However, when aligning heterogeneous images with high outlier rates, these methods often struggle to balance alignment accuracy and speed, leading to a high number of iterations or weak robustness. Exploiting the fact that the positions of the infrared and visible detectors are relatively fixed in dual-modal imaging systems, we propose a spatial constraints priority sampling consensus (SC-PRISAC) algorithm. This algorithm leverages image space constraints to provide a robust inlier screening mechanism and an efficient sampling strategy, thus offering stable and reliable support for the fusion of infrared and visible image information. Methods In this study, a bispectral calibration target with both infrared and visible features is designed based on differences in material radiance. We achieve high-precision binocular camera calibration by accurately determining the internal and external parameters of the camera using a bilateral filtering pyramid. Based on this calibration, the spatial relationship between heterogeneous images is constructed using the epipolar constraint theorem and the principle of depth consistency. By implementing a priority sampling strategy based on the matching quality ranking of feature points, the number of iterations required by the algorithm is significantly reduced, allowing for precise and efficient elimination of mismatched feature points.
Results and Discussions Our method’s calibration accuracy is assessed through the mean reprojection error (MRE), with comparative results presented in Table 1 and Fig. 7. The findings demonstrate a 58.2% improvement in calibration precision over the spot detection calibration technique provided by OpenCV, reducing the calibration error to 0.430 pixels. In the outlier rejection experiment, the progression of feature point matching across stages is detailed in Table 2. Following the introduction of spatial constraints, all valid matches are retained, and 27 outlier pairs are discarded. An additional 10 outlier pairs are further eliminated through preferential sampling strategies. To comprehensively evaluate the algorithm’s performance, several comparative methods, including RANSAC, degenerate sample consensus (DEGENSAC), MAGSAC++, graph-cut RANSAC (GC-RANSAC), Bayesian network for adaptive sample consensus (BANSAC), and a neural network-based ∇-RANSAC, are employed, with evaluations based on inlier counts, homography estimation errors, accuracy, and computational runtime as shown in Table 3 and Fig. 12. The proposed algorithm achieves a notably low homography estimation error of 7.857 with a runtime of just 1.919 ms, outperforming all comparative methods. This superior performance is primarily due to the SC-PRISAC algorithm’s robust spatial constraint mechanism, which effectively filters out outliers that contradict imaging principles, enabling more accurate sampling and fitting. In addition, the robustness of the proposed method and competing algorithms under complex scenarios is investigated by varying the proportion of outliers in initial datasets, as illustrated in Fig. 13. All algorithms perform satisfactorily when outlier ratios are below 45%. However, as the outlier ratio escalates, the precision of traditional methods like RANSAC deteriorates significantly. 
Remarkably, even at an extreme outlier ratio of 95%, SC-PRISAC maintains an accuracy rate of 70.2%, whereas other algorithms' accuracies drop to between 12% and 49%. These results highlight the significant advantage of the proposed method in scenarios with high mismatch rates, demonstrating its superior applicability and effectiveness in aligning infrared and visible light images under challenging conditions. Conclusions To address the challenge of high mismatch rates in infrared and visible image alignment, we propose an algorithm for rejecting mismatched feature points based on optimizing camera spatial relations. By designing a bispectral calibration target and improving the circular centroid positioning algorithm, sub-pixel-level infrared and visible binocular camera calibration is achieved, with the calibration error controlled within 0.430 pixels, significantly enhancing camera calibration accuracy. The algorithm integrates spatial constraints based on epipolar geometry and depth consistency to accurately exclude mismatched features that violate physical imaging laws, and reduces computational complexity through an intelligent sampling strategy that prioritizes high-quality feature points. Experimental results show that the proposed method achieves a homography estimation error of 7.857 and a processing time of 1.919 ms, maintains excellent performance even under high outlier ratios, and outperforms other mismatched feature point rejection algorithms, proving its superior generalization and reliability in addressing infrared and visible image alignment problems. © 2024 Chinese Optical Society. All rights reserved.
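The abstract above describes two steps: screening matches against epipolar-geometry constraints, then sampling survivors in order of match quality. The sketch below illustrates that general idea only; the fundamental matrix `F`, the Sampson-distance threshold, and the `scores` array are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def sampson_distance(F, pts1, pts2):
    """Per-pair Sampson distance to the epipolar constraint x2^T F x1 = 0."""
    ones = np.ones((len(pts1), 1))
    x1 = np.hstack([pts1, ones])          # homogeneous coordinates (N, 3)
    x2 = np.hstack([pts2, ones])
    Fx1 = x1 @ F.T                        # row i is F @ x1_i
    Ftx2 = x2 @ F                         # row i is F^T @ x2_i
    err = np.sum(x2 * Fx1, axis=1)        # algebraic epipolar error
    denom = Fx1[:, 0]**2 + Fx1[:, 1]**2 + Ftx2[:, 0]**2 + Ftx2[:, 1]**2
    return err**2 / denom

def spatial_constraint_filter(F, pts1, pts2, scores, thresh=1.0):
    """Drop pairs that violate the epipolar constraint, then return the
    survivors sorted by descending match quality (priority-sampling order)."""
    d = sampson_distance(F, pts1, pts2)
    keep = np.flatnonzero(d < thresh)
    return keep[np.argsort(-scores[keep])]
```

Sorting the surviving indices by score means a hypothesis-and-verify loop draws its first minimal samples from the most trustworthy matches, which is what reduces the iteration count relative to uniform random sampling.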
Keyword :
Binoculars; Cameras; Image enhancement; Image matching; Image registration; Image sampling; Infrared detectors; Stereo image processing
Cite:
GB/T 7714 | Shen, Ying , Lin, Ye , Chen, Haitao et al. Algorithm for Eliminating Mismatched Feature Points in Heterogeneous Images Pairs Under Spatial Constraints [J]. | Acta Optica Sinica , 2024 , 44 (20) . |
MLA | Shen, Ying et al. "Algorithm for Eliminating Mismatched Feature Points in Heterogeneous Images Pairs Under Spatial Constraints" . | Acta Optica Sinica 44 . 20 (2024) . |
APA | Shen, Ying , Lin, Ye , Chen, Haitao , Wu, Jing , Huang, Feng . Algorithm for Eliminating Mismatched Feature Points in Heterogeneous Images Pairs Under Spatial Constraints . | Acta Optica Sinica , 2024 , 44 (20) . |
Abstract :
Infrared images hold significant value in applications such as remote sensing and fire safety. However, infrared detectors often face the problem of high hardware costs, which limits their widespread use. Advancements in deep learning have spurred innovative approaches to image super-resolution (SR), but comparatively few efforts have been dedicated to the exploration of infrared images. To address this, we design the Residual Swin Transformer and Average Pooling Block (RSTAB) and propose the SwinAIR, which can effectively extract and fuse the diverse frequency features in infrared images and achieve superior SR reconstruction performance. By further integrating SwinAIR with U-Net, we propose the SwinAIR-GAN for real infrared image SR reconstruction. SwinAIR-GAN extends the degradation space to better simulate the degradation process of real infrared images. Additionally, it incorporates spectral normalization, dropout, and artifact discrimination loss to reduce the potential image artifacts. Qualitative and quantitative evaluations on various datasets confirm the effectiveness of our proposed method in reconstructing realistic textures and details of infrared images. © 2024 by the authors.
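SwinAIR-GAN is said to extend the degradation space to better simulate how real infrared images degrade. As a rough illustration of what a synthetic HR-to-LR degradation pipeline can look like (the Gaussian blur kernel, downsampling factor, and noise level here are generic assumptions, not the paper's degradation model):

```python
import numpy as np

def gaussian_kernel(size=7, sigma=1.5):
    """Normalized 2D Gaussian blur kernel."""
    ax = np.arange(size) - size // 2
    k = np.exp(-(ax[:, None]**2 + ax[None, :]**2) / (2 * sigma**2))
    return k / k.sum()

def degrade(hr, scale=2, sigma=1.5, noise_std=0.01, rng=None):
    """Toy HR->LR degradation: Gaussian blur, subsampling, additive noise."""
    if rng is None:
        rng = np.random.default_rng(0)
    k = gaussian_kernel(sigma=sigma)
    pad = k.shape[0] // 2
    padded = np.pad(hr, pad, mode='reflect')
    blurred = np.zeros_like(hr)
    H, W = hr.shape
    for i in range(H):                    # direct 2D correlation, 'same' size
        for j in range(W):
            blurred[i, j] = np.sum(padded[i:i + 2*pad + 1, j:j + 2*pad + 1] * k)
    lr = blurred[::scale, ::scale]        # subsample
    return np.clip(lr + rng.normal(0, noise_std, lr.shape), 0, 1)
```

Training on (degraded LR, original HR) pairs produced this way is the usual recipe for "real-world" SR; richer degradation spaces add more kernel shapes, noise models, and compression artifacts.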
Keyword :
generative adversarial network; image super-resolution; infrared image; transformer
Cite:
GB/T 7714 | Huang, F. , Li, Y. , Ye, X. et al. Infrared Image Super-Resolution Network Utilizing the Enhanced Transformer and U-Net [J]. | Sensors , 2024 , 24 (14) . |
MLA | Huang, F. et al. "Infrared Image Super-Resolution Network Utilizing the Enhanced Transformer and U-Net" . | Sensors 24 . 14 (2024) . |
APA | Huang, F. , Li, Y. , Ye, X. , Wu, J. . Infrared Image Super-Resolution Network Utilizing the Enhanced Transformer and U-Net . | Sensors , 2024 , 24 (14) . |
Abstract :
Owing to their markedly different spectral characteristics, infrared and visible images are prone to a high feature-point mismatch rate during registration. Widely used mismatch rejection algorithms typically combine random sampling with model fitting; such methods struggle to balance registration accuracy and speed, manifesting as excessive iteration counts or weak robustness. To address this problem, a spatial-constraint priority sampling consensus (SC-PRISAC) mismatch rejection algorithm is proposed. A bispectral calibration target with both infrared and visible features is designed by exploiting differences in material emissivity, and the camera's intrinsic and extrinsic parameters are obtained through bilateral-filtering-pyramid calibration; on this basis, the spatial constraint relationship between the heterogeneous images is constructed using the epipolar constraint theorem and the depth consistency principle. A priority sampling strategy for high-quality feature points reduces the number of iterations and effectively rejects mismatched feature points. Experiments show that the proposed algorithm achieves sub-pixel infrared-visible binocular calibration, with the calibration error reduced to 0.430 pixels; while improving registration accuracy it also raises processing speed, achieving a homography estimation error of 7.857 and a processing time of only 1.919 ms, outperforming RANSAC (random sample consensus) and other algorithms in all respects. The proposed algorithm provides a more reliable and efficient mismatch rejection solution for infrared and visible image registration.
Keyword :
binocular calibration; image registration; epipolar constraint; mismatched feature point rejection
Cite:
GB/T 7714 | Shen, Ying , Lin, Ye , Chen, Haitao et al. Algorithm for Eliminating Mismatched Feature Points in Heterogeneous Images Pairs Under Spatial Constraints [J]. | Acta Optica Sinica , 2024 , 44 (20) : 208-219 . |
MLA | Shen, Ying et al. "Algorithm for Eliminating Mismatched Feature Points in Heterogeneous Images Pairs Under Spatial Constraints" . | Acta Optica Sinica 44 . 20 (2024) : 208-219 . |
APA | Shen, Ying , Lin, Ye , Chen, Haitao , Wu, Jing , Huang, Feng . Algorithm for Eliminating Mismatched Feature Points in Heterogeneous Images Pairs Under Spatial Constraints . | Acta Optica Sinica , 2024 , 44 (20) , 208-219 . |
Abstract :
Object detection in remote sensing images has become a crucial component of computer vision. It has been employed in multiple domains, including military surveillance, maritime rescue, and military operations. However, the high density of small objects in remote sensing images makes it challenging for existing networks to accurately distinguish objects from shallow image features. As a result, many object detection networks produce missed detections and false alarms, particularly for densely arranged and small objects. To address these problems, this paper proposes a feature enhancement feedforward network (FEFN), based on a lightweight channel feedforward module (LCFM) and a feature enhancement module (FEM). First, the FEFN captures shallow spatial information in images through the lightweight channel feedforward module, which can extract the edge information of small objects such as ships. Next, it enhances feature interaction and representation through the feature enhancement module, which yields more accurate detection results for densely arranged and small objects. Finally, comparative experiments on two challenging public remote sensing datasets demonstrate the effectiveness of the proposed method. © 2024 by the authors.
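The LCFM is described as capturing shallow edge information for small objects; its internals are not given in this abstract. Purely as an illustration of the kind of shallow edge cue involved (this is a plain Sobel gradient, not the module itself):

```python
import numpy as np

def sobel_edges(img):
    """Gradient magnitude via Sobel filters -- the kind of shallow edge
    cue small-object detectors rely on (illustrative only)."""
    kx = np.array([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
    ky = kx.T
    pad = np.pad(img, 1, mode='edge')
    H, W = img.shape
    gx = np.zeros((H, W))
    gy = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            win = pad[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(win * kx)   # horizontal gradient
            gy[i, j] = np.sum(win * ky)   # vertical gradient
    return np.hypot(gx, gy)              # edge magnitude
```

A learned channel feedforward module would combine many such filters and mix them across channels; the fixed Sobel pair just shows why a shallow operator responds strongly at small-object boundaries.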
Keyword :
Feature extraction; Image enhancement; Military photography; Object detection; Object recognition; Remote sensing
Cite:
GB/T 7714 | Wu, Jing , Ni, Rixiang , Chen, Zhenhua et al. FEFN: Feature Enhancement Feedforward Network for Lightweight Object Detection in Remote Sensing Images [J]. | Remote Sensing , 2024 , 16 (13) . |
MLA | Wu, Jing et al. "FEFN: Feature Enhancement Feedforward Network for Lightweight Object Detection in Remote Sensing Images" . | Remote Sensing 16 . 13 (2024) . |
APA | Wu, Jing , Ni, Rixiang , Chen, Zhenhua , Huang, Feng , Chen, Liqiong . FEFN: Feature Enhancement Feedforward Network for Lightweight Object Detection in Remote Sensing Images . | Remote Sensing , 2024 , 16 (13) . |
Abstract :
With the unavailability of scene depth information, single-sensor dehazing methods based on deep learning or prior information do not work effectively in dense foggy scenes. An effective approach is to remove the dense fog by fusing visible and near-infrared images. However, current dehazing algorithms based on near-infrared and visible images suffer from color distortion and information loss. To overcome these challenges, we propose a color-preserving dehazing method that fuses near-infrared and visible images, supported by a new dataset (VN-Haze) of visible and near-infrared images captured under hazy conditions. A two-stage image enhancement (TSE) method that can effectively rectify the color of visible images affected by fog is proposed to prevent the introduction of distorted color information. Furthermore, we propose an adaptive luminance mapping (ALM) method to prevent color bias in fused images caused by the excessive brightness differences between visible and near-infrared images that occur in vegetation areas. The proposed visible-priority fusion strategy reasonably allocates weights for visible and near-infrared images, minimizing the loss of important features in visible images. Compared with existing dehazing algorithms, the proposed algorithm generates images with natural colors and less distortion and retains important visible information. Moreover, it demonstrates remarkable performance in objective evaluations.
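The visible-priority strategy above allocates fusion weights so that visible detail dominates and NIR only fills in where the visible image carries little information. A toy sketch of one way such weighting could work (the local-contrast proxy, window radius, and the `w_max` cap are assumptions for illustration, not the paper's formulas):

```python
import numpy as np

def visible_priority_fuse(vis, nir, w_max=0.6):
    """Weighted fusion that caps the NIR contribution so visible detail
    dominates; NIR weight grows where the visible image is low-contrast."""
    def box_blur(img, r=2):
        pad = np.pad(img, r, mode='reflect')
        out = np.zeros_like(img)
        H, W = img.shape
        for i in range(H):
            for j in range(W):
                out[i, j] = pad[i:i + 2*r + 1, j:j + 2*r + 1].mean()
        return out

    # local contrast proxy: absolute deviation from a box-blurred mean
    contrast = np.abs(vis - box_blur(vis))
    w_nir = w_max * (1 - contrast / (contrast.max() + 1e-8))
    return (1 - w_nir) * vis + w_nir * nir
```

Capping `w_nir` below 1 is the "priority" part: even in the flattest visible regions, the visible channel still contributes, which limits the color/structure loss the abstract warns about.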
Keyword :
Color preserving; Dense fog; Image dehazing; Image fusion; Near-infrared
Cite:
GB/T 7714 | Wu, Jing , Wei, Peng , Huang, Feng . Color-preserving visible and near-infrared image fusion for removing fog [J]. | INFRARED PHYSICS & TECHNOLOGY , 2024 , 138 . |
MLA | Wu, Jing et al. "Color-preserving visible and near-infrared image fusion for removing fog" . | INFRARED PHYSICS & TECHNOLOGY 138 (2024) . |
APA | Wu, Jing , Wei, Peng , Huang, Feng . Color-preserving visible and near-infrared image fusion for removing fog . | INFRARED PHYSICS & TECHNOLOGY , 2024 , 138 . |
Abstract :
The accurate measurement of tiny pressure variations in low-pressure environments requires pressure-sensitive paint (PSP) with high pressure sensitivity and pressure-sensitivity constancy for aerodynamics testing. In this study, polymer-ceramic pressure-sensitive paints (PC-PSPs) were developed using platinum(II) meso-tetra(pentafluorophenyl) porphyrin (PtTFPP) and palladium(II) meso-tetra(pentafluorophenyl) porphyrin (PdTFPP) as luminophores; titanium dioxide (TiO2) and mesoporous silica (mSiO2) as particles; and poly[1-(trimethylsilyl)-1-propyne] (PTMSP), poly(isobutyl methacrylate) (PIBM), ethylene-vinyl acetate copolymer (EVA), and polyacrylate (B1000) as polymers. The static characteristics were calibrated and evaluated under low-pressure conditions. The results showed that PC-PSPs combining the short-lifetime PtTFPP with a highly oxygen-permeable polymer achieved high pressure sensitivity. By contrast, when the long-lifetime PdTFPP was used, the pressure sensitivity was dominated by the luminescence lifetime rather than by the oxygen permeability of the polymer. PdTFPP/mSiO2-PTMSP exhibited a remarkable pressure sensitivity of 78.46%/kPa. PC-PSPs employing PIBM, EVA, and B1000 exhibited excellent pressure-sensitivity constancy above 96% after near-vacuum storage for one hour. However, PTMSP-based PC-PSPs exhibited poor pressure-sensitivity constancy owing to the aging of the polymer in vacuum, which can be improved by employing the long-lifetime PdTFPP luminophore and mSiO2 particles. Furthermore, PC-PSPs using mSiO2 particles exhibited lower signal levels, higher pressure sensitivities, and lower temperature dependence and photodegradation than those using TiO2 particles. PdTFPP/mSiO2-PIBM and PdTFPP/mSiO2-B1000 showed application potential under low-pressure conditions owing to their favorable static characteristics.
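PSP static characteristics are conventionally calibrated through the Stern-Volmer relation I_ref/I = A + B·(P/P_ref). The sketch below fits that standard model to calibration data; expressing the sensitivity as the slope per kPa (in %/kPa) is a common convention assumed here, not a procedure taken from this paper.

```python
import numpy as np

def stern_volmer_fit(pressures_kpa, intensities, i_ref, p_ref):
    """Fit I_ref/I = A + B * (P/P_ref); report sensitivity as the slope
    expressed per kPa, as a percentage."""
    x = np.asarray(pressures_kpa) / p_ref
    y = i_ref / np.asarray(intensities)
    B, A = np.polyfit(x, y, 1)            # highest-degree coefficient first
    sensitivity = 100 * B / p_ref         # %/kPa
    return A, B, sensitivity
```

A paint with higher oxygen quenching (larger B) dims more per unit pressure, which is what a figure like 78.46%/kPa quantifies.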
Keyword :
High pressure sensitivity; Low pressure; Pressure-sensitive paint; Pressure-sensitivity constancy
Cite:
GB/T 7714 | Wu, Jing , Huang, Zanqiang , Kong, Di et al. Polymer-ceramic pressure-sensitive paint with high pressure sensitivity and pressure-sensitivity constancy in low-pressure environments [J]. | SENSORS AND ACTUATORS A-PHYSICAL , 2023 , 366 . |
MLA | Wu, Jing et al. "Polymer-ceramic pressure-sensitive paint with high pressure sensitivity and pressure-sensitivity constancy in low-pressure environments" . | SENSORS AND ACTUATORS A-PHYSICAL 366 (2023) . |
APA | Wu, Jing , Huang, Zanqiang , Kong, Di , Huang, Feng . Polymer-ceramic pressure-sensitive paint with high pressure sensitivity and pressure-sensitivity constancy in low-pressure environments . | SENSORS AND ACTUATORS A-PHYSICAL , 2023 , 366 . |
Abstract :
To improve the recovery ability of polarization dehazing algorithms in fog scenes, a polarization image dehazing algorithm based on polarization optimization and atmospheric light correction is proposed. First, according to the brightness distribution of the fog scene, the fog image was decomposed into bright residuals and dark residuals via guided filtering. Second, to optimize the degree of polarization, the degrees of polarization corresponding to the bright and dark residuals were increased and decreased, respectively; this optimized degree of polarization can blur the atmospheric light image. Finally, the difference in the degree of polarization between the residuals was used to correct the atmospheric light, ensuring that its intensity range satisfied the atmospheric degradation model. Experiments indicated that the contrast after dehazing was 3.07 times that of the original hazy images, and that the entropy and standard deviation of the dehazed images increased by 9.21% and 61.86%, respectively. In environments with different concentrations of simulated fog, the proposed algorithm achieved excellent SSIM, CIEDE2000, and PSNR values. Compared with state-of-the-art dehazing algorithms, the effect of the proposed algorithm was obvious, and it recovered scene details efficiently. © 2023 Chinese Academy of Sciences. All rights reserved.
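The method builds on the classical polarimetric haze model I = J·t + A∞·(1 − t), with the airlight estimated from the polarized difference image as A = ΔI/p_A. A minimal sketch of that underlying inversion only (not the paper's guided-filter residual decomposition or its correction steps):

```python
import numpy as np

def polarization_dehaze(i_total, i_diff, p_a, a_inf, eps=1e-6):
    """Classic polarimetric dehazing: estimate airlight A = dI / p_A,
    then invert the degradation model I = J*t + A_inf*(1 - t)."""
    airlight = i_diff / max(p_a, eps)            # A = (I_par - I_perp) / p_A
    t = np.clip(1 - airlight / a_inf, 0.1, 1.0)  # transmission estimate
    return (i_total - airlight) / t              # recovered scene radiance J
```

The paper's contribution sits in how `p_a` and the airlight are corrected before this inversion; with a mis-estimated degree of polarization, the recovered `t` and hence `J` degrade quickly.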
Keyword :
Blurry atmospheric light image; correction of atmospheric light; degree of polarization optimization; guided filter residuals; image dehazing
Cite:
GB/T 7714 | Wu, J. , Song, W. , Guo, C. et al. Image dehazing based on polarization optimization and atmosphere light correction; [基于偏振度优化与大气光校正的图像去雾] [J]. | Optics and Precision Engineering , 2023 , 31 (12) : 1827-1840 . |
MLA | Wu, J. et al. "Image dehazing based on polarization optimization and atmosphere light correction; [基于偏振度优化与大气光校正的图像去雾]" . | Optics and Precision Engineering 31 . 12 (2023) : 1827-1840 . |
APA | Wu, J. , Song, W. , Guo, C. , Ye, X. , Huang, F. . Image dehazing based on polarization optimization and atmosphere light correction; [基于偏振度优化与大气光校正的图像去雾] . | Optics and Precision Engineering , 2023 , 31 (12) , 1827-1840 . |
Abstract :
To improve the ability of polarization dehazing algorithms to recover foggy scenes, a polarization image dehazing algorithm based on degree-of-polarization optimization and atmospheric light correction is proposed. First, according to the brightness distribution of the foggy scene, guided filtering is used to decompose the fog image into bright-side and dark-side residuals. Second, the degree of polarization is optimized by enlarging the values corresponding to the bright-side residuals and reducing those corresponding to the dark-side residuals; the optimized degree of polarization can blur the atmospheric light image. Finally, the difference of the degree of polarization between the bright-side and dark-side residuals is used to correct the atmospheric light intensity, so that its variation with fog satisfies the atmospheric degradation model. Experimental results show that, compared with the original hazy images, the dehazed images exhibit 3.07 times higher contrast, 9.21% higher information entropy, and 61.86% higher standard deviation. In simulated fog of different concentrations, the proposed algorithm also achieves excellent SSIM, PSNR, and CIEDE2000 scores. Compared with existing state-of-the-art image dehazing algorithms, the proposed algorithm shows a clear dehazing effect and can effectively restore scene details in fog.
Keyword :
degree of polarization optimization; image dehazing; image intensity correction; atmospheric light image blurring; guided filtering residuals
Cite:
GB/T 7714 | Wu, Jing , Song, Wenjie , Guo, Cuixia et al. Image dehazing based on polarization optimization and atmosphere light correction [J]. | Optics and Precision Engineering , 2023 , 31 (12) : 1827-1840 . |
MLA | Wu, Jing et al. "Image dehazing based on polarization optimization and atmosphere light correction" . | Optics and Precision Engineering 31 . 12 (2023) : 1827-1840 . |
APA | Wu, Jing , Song, Wenjie , Guo, Cuixia , Ye, Xiaojing , Huang, Feng . Image dehazing based on polarization optimization and atmosphere light correction . | Optics and Precision Engineering , 2023 , 31 (12) , 1827-1840 . |
Abstract :
Deployment of deep convolutional neural networks (CNNs) for single image super-resolution (SISR) on edge computing devices is mainly hampered by the huge computational cost. In this work, we propose a lightweight image super-resolution (SR) network based on a reparameterizable multibranch bottleneck module (RMBM). In the training phase, RMBM efficiently extracts high-frequency information by utilizing multibranch structures, including the bottleneck residual block (BRB), inverted bottleneck residual block (IBRB), and expand-squeeze convolution block (ESB). In the inference phase, the multibranch structures can be combined into a single 3×3 convolution to reduce the number of parameters without incurring any additional computational cost. Furthermore, a novel peak-structure-edge (PSE) loss is proposed to resolve the problem of oversmoothed reconstructed images while significantly improving image structure similarity. Finally, we optimize and deploy the algorithm on edge devices equipped with the Rockchip neural processing unit (RKNPU) to achieve real-time SR reconstruction. Extensive experiments on natural image datasets and remote sensing image datasets show that our network outperforms advanced lightweight SR networks in terms of objective evaluation metrics and subjective visual quality. The reconstruction results demonstrate that the proposed network achieves higher SR performance with a 98.1K model size and can be effectively deployed to edge computing devices.
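The key reparameterization trick, collapsing parallel branches into one 3×3 convolution at inference time, rests on the linearity of convolution. A minimal single-channel sketch (the 3×3-plus-1×1 branch pair and the center-padding construction are the generic RepVGG-style recipe, not RMBM's exact branches):

```python
import numpy as np

def conv2d(img, k):
    """'Valid' 2D correlation of a single-channel image with kernel k."""
    kh, kw = k.shape
    H, W = img.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

def merge_branches(k3, k1):
    """Fold a parallel 1x1 branch into the 3x3 kernel: pad the 1x1 kernel
    to 3x3 (centered) and add, exploiting the linearity of convolution."""
    merged = k3.copy()
    merged[1, 1] += k1[0, 0]
    return merged
```

At inference, only the merged kernel is stored and applied, so the multibranch training-time structure costs nothing extra at deployment; the same algebra extends to folding in batch-norm scales and identity branches.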
Keyword :
Edge computing device; lightweight image super-resolution; PSE loss; reparameterizable multibranch bottleneck module
Cite:
GB/T 7714 | Shen, Ying , Zheng, Weihuang , Huang, Feng et al. Reparameterizable Multibranch Bottleneck Network for Lightweight Image Super-Resolution [J]. | SENSORS , 2023 , 23 (8) . |
MLA | Shen, Ying et al. "Reparameterizable Multibranch Bottleneck Network for Lightweight Image Super-Resolution" . | SENSORS 23 . 8 (2023) . |
APA | Shen, Ying , Zheng, Weihuang , Huang, Feng , Wu, Jing , Chen, Liqiong . Reparameterizable Multibranch Bottleneck Network for Lightweight Image Super-Resolution . | SENSORS , 2023 , 23 (8) . |
Abstract :
This invention concerns a dual-channel imaging method for quantitative gas detection, comprising the following steps. Step S1: based on the absorption spectrum of the gas to be measured, construct two gas imaging channels, namely a signal channel and a reference channel. Step S2: from the images acquired by the signal and reference channels, compute the dual-channel intensity ratio, and calibrate the relationship between this ratio and the target gas column density. Step S3: during field detection, acquire the dual-channel images of the target gas and, based on the dual-channel intensity ratio, obtain the two-dimensional distribution of its spatial concentration. Only the dual-channel target gas images need to be captured to compute the two-dimensional concentration distribution, which resolves the inherent problem of existing imaging detection techniques, namely that having to reconstruct background image information introduces errors into the results.
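The calibration in step S2 and the inversion in step S3 can be sketched under a Beer-Lambert assumption, where the two-channel ratio decays exponentially with column density. The exponential model and the parameters `r0` and `sigma` are illustrative assumptions, not the patent's calibration procedure.

```python
import numpy as np

def calibrate(ratios, col_densities):
    """Fit ln(ratio) = ln(r0) - sigma * N from known column densities."""
    slope, intercept = np.polyfit(col_densities, np.log(ratios), 1)
    return np.exp(intercept), -slope              # r0, sigma

def column_density(sig_img, ref_img, r0, sigma):
    """Per-pixel column density from the two-channel intensity ratio."""
    ratio = sig_img / ref_img
    return -np.log(ratio / r0) / sigma
```

Because the reference channel sees the same scene radiance but no absorption, taking the ratio cancels the background term, which is why no background reconstruction is needed.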
Cite:
GB/T 7714 | Wu, Jing , Zhang, Pengpeng , Huang, Feng . A dual-channel imaging method for quantitative gas detection : CN202111218722.0 [P]. | 2021-10-20 . |
MLA | Wu, Jing et al. "A dual-channel imaging method for quantitative gas detection" : CN202111218722.0. | 2021-10-20 . |
APA | Wu, Jing , Zhang, Pengpeng , Huang, Feng . A dual-channel imaging method for quantitative gas detection : CN202111218722.0. | 2021-10-20 . |