Query:
Scholar name: Wu Lijun
Abstract :
To address the loss of perceptual image quality caused by the simple weighted summation of losses when training generative adversarial networks, a loss-adaptive generative adversarial super-resolution network (LA-GAN) is proposed. First, the method distinguishes regular texture regions from irregular ones by computing the correlation strength of corner-point distributions. Second, a region-adaptive generative adversarial learning framework is designed on top of these regions: the network performs adversarial learning only in irregular texture regions, improving perceptual quality. In addition, the high-resolution images in the training set are replaced by images recomposed from the downsampled image and similar image patches, so that the mean absolute error loss constrains the network weakly in irregular texture regions and strongly in regular ones, preserving signal fidelity. Finally, experiments show that the optimized network improves both signal fidelity and perceptual quality.
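As a rough sketch of the region-adaptive objective described above: a strong L1 term on regular-texture pixels, a weak L1 term plus adversarial pressure on irregular ones. The mask (which the paper derives from corner-point statistics), the weights, and all function names here are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def region_adaptive_loss(sr, hr, disc_logits, irregular_mask,
                         w_reg=1.0, w_irr=0.1, w_adv=5e-3):
    """Hypothetical LA-GAN-style objective: strong L1 on regular-texture
    pixels, weak L1 plus adversarial pressure on irregular ones.
    irregular_mask: (n, 1, h, w) binary map, assumed precomputed from
    corner-point statistics."""
    l1 = torch.abs(sr - hr)                       # per-pixel absolute error
    regular_mask = 1.0 - irregular_mask
    fidelity = (w_reg * (l1 * regular_mask).mean()
                + w_irr * (l1 * irregular_mask).mean())
    # non-saturating generator loss; per the abstract, the discriminator
    # is assumed to see only the irregular-texture regions
    adv = F.binary_cross_entropy_with_logits(
        disc_logits, torch.ones_like(disc_logits))
    return fidelity + w_adv * adv
```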
Keyword :
Region adaptation; Loss function; Generative adversarial network; Super-resolution
Cite:
GB/T 7714 | 林旭锋 , 吴丽君 , 陈志聪 et al. 损失自适应的高感知质量生成对抗超分辨率网络 [J]. | 福州大学学报(自然科学版) , 2025 , 53 (1) : 26-34 . |
MLA | 林旭锋 et al. "损失自适应的高感知质量生成对抗超分辨率网络" . | 福州大学学报(自然科学版) 53 . 1 (2025) : 26-34 . |
APA | 林旭锋 , 吴丽君 , 陈志聪 , 林培杰 , 程树英 . 损失自适应的高感知质量生成对抗超分辨率网络 . | 福州大学学报(自然科学版) , 2025 , 53 (1) , 26-34 . |
Abstract :
Recently, promising progress has been made in photovoltaic (PV) array fault diagnosis (FD) owing to the importance of operating and maintaining PV power plants. However, PV arrays inevitably experience gradual degradation under complex operating conditions, causing a domain shift in their output data that significantly degrades FD performance. To address these problems, this study proposes a two-stage cross-domain adaptive generative adversarial network deep learning approach for PV array FD under different degradation levels. In the first stage, normal data from the source domain (PV arrays without performance degradation) are used for training, and a Maximum Mean Discrepancy (MMD) loss is introduced to the fault generators during adversarial training so that they produce high-level feature representations of source-domain fault data. In the second stage, identical training steps guide the fault generators: normal data from the target domain, i.e., PV arrays with performance degradation, are used to generate fault-data features consistent with the target domain, and the cross-domain adaptive FD model is then trained on these generated features. The proposed model not only learns the relationships among different types of data but also uses target-domain PV array data under healthy conditions to synthesize fake samples for cross-domain adaptive FD. Experimental results show that the Precision of the proposed model in the two tasks is 98.34% and 92.93%, with Recall of 98.23% and 94.13% and F1-Scores of 0.9823 and 0.9274, all better than those of the comparison models.
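The MMD loss used to align generated and source-domain features has a standard kernel form; below is a minimal sketch assuming a multi-scale Gaussian kernel, with bandwidths chosen arbitrarily since the paper's settings are not given here.

```python
import torch

def gaussian_mmd(x, y, sigmas=(1.0, 2.0, 4.0)):
    """Multi-kernel MMD^2 (biased V-statistic) between two feature
    batches x: (n, d) and y: (m, d); sigmas are assumed bandwidths."""
    def kernel(a, b):
        d2 = torch.cdist(a, b) ** 2   # pairwise squared distances
        return sum(torch.exp(-d2 / (2 * s ** 2)) for s in sigmas)
    return (kernel(x, x).mean() + kernel(y, y).mean()
            - 2 * kernel(x, y).mean())
```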
Keyword :
Adversarial networks; Domain adaptation; Fault diagnosis; Generative models; Photovoltaic arrays
Cite:
GB/T 7714 | Lin, Peijie , Guo, Feng , Lin, Yaohai et al. Fault diagnosis of photovoltaic arrays with different degradation levels based on cross-domain adaptive generative adversarial network [J]. | APPLIED ENERGY , 2025 , 386 . |
MLA | Lin, Peijie et al. "Fault diagnosis of photovoltaic arrays with different degradation levels based on cross-domain adaptive generative adversarial network" . | APPLIED ENERGY 386 (2025) . |
APA | Lin, Peijie , Guo, Feng , Lin, Yaohai , Cheng, Shuying , Lu, Xiaoyang , Chen, Zhicong et al. Fault diagnosis of photovoltaic arrays with different degradation levels based on cross-domain adaptive generative adversarial network . | APPLIED ENERGY , 2025 , 386 . |
Abstract :
In the field of image super-resolution (SR), deep learning-based models have achieved remarkable success. However, these models often face compatibility issues with low-power devices due to their computational and memory constraints. To address this challenge, numerous lightweight and efficient models have been proposed. While these models typically employ smaller convolutional kernels and shallower architectures to reduce parameter counts and computational complexity, they often neglect the importance of capturing global receptive fields. In this paper, we propose a simple yet effective deep network, termed the dilated-convolutional feature modulation network (DCFMN), to tackle these limitations. Specifically, we introduce a dilated separable modulation unit (DSMU) to aggregate spatial information from diverse large receptive fields. To complement the DSMU, which processes features from a long-range perspective, we further design a local feature enhancement module (LFEM) to extract local contextual information for effective channel fusion. Additionally, by leveraging reparameterization techniques, we ensure that the model incurs no additional computational overhead during inference. Extensive experimental results demonstrate that our DCFMN achieves competitive performance among existing efficient SR methods, while maintaining a compact model size and low computational complexity.
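A plausible reading of the DSMU is parallel depthwise dilated convolutions whose aggregated output modulates the input features element-wise; the sketch below illustrates that pattern, with layer choices and dilation rates as assumptions rather than the paper's exact design.

```python
import torch
import torch.nn as nn

class DSMUSketch(nn.Module):
    """Illustrative dilated separable modulation unit: depthwise dilated
    convolutions enlarge the receptive field, and the aggregated map
    gates the input features."""
    def __init__(self, ch, dilations=(1, 2, 3)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(ch, ch, 3, padding=d, dilation=d, groups=ch)
            for d in dilations)               # depthwise (separable, part 1)
        self.pw = nn.Conv2d(ch, ch, 1)        # pointwise (separable, part 2)

    def forward(self, x):
        ctx = sum(b(x) for b in self.branches)    # multi-receptive-field context
        return x * torch.sigmoid(self.pw(ctx))    # feature modulation
```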
Keyword :
Deep learning; Image super-resolution; Lightweight; Reparameterization
Cite:
GB/T 7714 | Wu, Lijun , Li, Shan , Chen, Zhicong . Dilated-convolutional feature modulation network for efficient image super-resolution [J]. | JOURNAL OF REAL-TIME IMAGE PROCESSING , 2025 , 22 (2) . |
MLA | Wu, Lijun et al. "Dilated-convolutional feature modulation network for efficient image super-resolution" . | JOURNAL OF REAL-TIME IMAGE PROCESSING 22 . 2 (2025) . |
APA | Wu, Lijun , Li, Shan , Chen, Zhicong . Dilated-convolutional feature modulation network for efficient image super-resolution . | JOURNAL OF REAL-TIME IMAGE PROCESSING , 2025 , 22 (2) . |
Abstract :
Defect detection plays a crucial role in ensuring the safety and longevity of structures, and defect region classification is particularly useful for focusing effort on potential defect areas. Traditional defect classification networks based on deep convolutional neural networks (DCNNs) still have large parameter counts and computational demands, making them unsuitable for embedded systems. This paper proposes the Adaptive Prior Activation-Based Binary Information Enhancement Network (AOIE-Net), which significantly reduces computational requirements by binarizing weights and activations. Designed for steel defect detection, AOIE-Net optimizes the binary quantization process and enhances feature representation to improve the performance of binary neural networks (BNNs) on this task. AOIE-Net introduces a Dual Batch Normalization-based Information Enhancement Block (DBN-IEB) and an Adaptive Binary Activation Independent Optimization (ABA-IO) method to reduce computational complexity while boosting classification accuracy. Experimental results demonstrate that AOIE-Net outperforms state-of-the-art binary neural network models on CIFAR-10, ImageNet, and the NEU-CLS steel defect dataset, achieving classification accuracies of 90.6%, 72.1%, and 99.4%, respectively. The proposed method offers an efficient, low-complexity solution for real-time defect classification in large-scale structural inspections and holds significant potential for practical applications.
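AOIE-Net's ABA-IO activation is more elaborate, but the baseline mechanism behind binarized activations in most BNNs is sign quantization trained through a straight-through estimator (STE), sketched here for orientation only.

```python
import torch

class BinarySign(torch.autograd.Function):
    """Sign binarization with a straight-through estimator: the common
    BNN baseline, not AOIE-Net's exact activation."""
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        # activations in {-1, +1}
        return torch.where(x >= 0, torch.ones_like(x), -torch.ones_like(x))

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        # clipped-identity STE: pass gradients only where |x| <= 1
        return grad_out * (x.abs() <= 1).float()

# usage: y = BinarySign.apply(x)
```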
Keyword :
Binary neural network; Deep learning; Enhanced binary information; Image classification; Steel defects
Cite:
GB/T 7714 | Wu, Lijun , Chen, Qingqi , Su, Jingxuan et al. Binary information enhancement network for efficient steel defects detection and classification [J]. | SMART STRUCTURES AND SYSTEMS , 2025 , 35 (3) : 153-162 . |
MLA | Wu, Lijun et al. "Binary information enhancement network for efficient steel defects detection and classification" . | SMART STRUCTURES AND SYSTEMS 35 . 3 (2025) : 153-162 . |
APA | Wu, Lijun , Chen, Qingqi , Su, Jingxuan , Chen, Zhicong , Cheng, Shuying . Binary information enhancement network for efficient steel defects detection and classification . | SMART STRUCTURES AND SYSTEMS , 2025 , 35 (3) , 153-162 . |
Abstract :
Reliable identification of gunshot events is crucial for reducing gun violence and enhancing public safety. However, current gunshot detection and recognition methods are still hampered by complex shooting scenarios, diverse non-gunshot events, varied firearm types, and scarce gunshot datasets. To address these issues, a novel general deep transfer learning approach based on the triaxial acceleration of guns is proposed for gunshot detection and recognition; it combines a temporal deep learning model with transfer learning and automated machine learning (AutoML) to improve accuracy, reliability, and generalization. First, a new gunshot recognition model named MobileNetTime is proposed for two-class gunshot event detection, three-class coarse firearm recognition, and 15-class fine firearm recognition; it uses 1-D convolutions and inverted residual modules to autonomously extract higher-level features from the time-series acceleration data. Second, considering the impact of non-gunshot events, AutoML is employed for model fine-tuning to transfer the pretrained MobileNetTime from handguns to various firearm types. In addition, we propose a low-power, versatile gunshot recognition system framework that employs a triaxial accelerometer in both wrist-worn and gun-embedded scenarios and adopts a two-stage wake-up mechanism that selectively monitors gunshot events using temporal and spectral energy features. Experimental results on the two gunshot datasets DGUWA and GRD show that the proposed model achieves up to 100% accuracy on DGUWA and 98.98% on GRD for two-class gunshot detection. Moreover, the proposed deep transfer learning approach achieves 98.98% accuracy for 16-class firearm classification, 6.21% higher than the model without transfer learning.
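MobileNetTime's building block is described as a 1-D convolutional inverted residual; a minimal MobileNetV2-style sketch for acceleration sequences follows, with channel counts, expansion ratio, and kernel size as illustrative assumptions.

```python
import torch.nn as nn

class InvertedResidual1D(nn.Module):
    """Sketch of a 1-D inverted residual block for tri-axial
    acceleration time series: expand -> depthwise temporal conv ->
    linear projection, with a residual connection."""
    def __init__(self, ch, expand=4, kernel=3):
        super().__init__()
        hidden = ch * expand
        self.block = nn.Sequential(
            nn.Conv1d(ch, hidden, 1), nn.BatchNorm1d(hidden), nn.ReLU6(),
            nn.Conv1d(hidden, hidden, kernel, padding=kernel // 2,
                      groups=hidden),                      # depthwise temporal conv
            nn.BatchNorm1d(hidden), nn.ReLU6(),
            nn.Conv1d(hidden, ch, 1), nn.BatchNorm1d(ch))  # linear projection

    def forward(self, x):            # x: (batch, channels, time)
        return x + self.block(x)     # residual connection
```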
Keyword :
Accelerometers; Accuracy; Adaptation models; Automated machine learning (AutoML); Data models; Deep transfer learning; Feature extraction; Gunshot detection and recognition; Internet of Things; Monitoring; Real-time systems; Training; Transfer learning; Tri-axial acceleration
Cite:
GB/T 7714 | Chen, Zhicong , Zheng, Haoxin , Wu, Lijun et al. Deep-Transfer-Learning-Based Intelligent Gunshot Detection and Firearm Recognition Using Tri-Axial Acceleration [J]. | IEEE INTERNET OF THINGS JOURNAL , 2025 , 12 (5) : 5891-5900 . |
MLA | Chen, Zhicong et al. "Deep-Transfer-Learning-Based Intelligent Gunshot Detection and Firearm Recognition Using Tri-Axial Acceleration" . | IEEE INTERNET OF THINGS JOURNAL 12 . 5 (2025) : 5891-5900 . |
APA | Chen, Zhicong , Zheng, Haoxin , Wu, Lijun , Huang, Jingchang , Yang, Yang . Deep-Transfer-Learning-Based Intelligent Gunshot Detection and Firearm Recognition Using Tri-Axial Acceleration . | IEEE INTERNET OF THINGS JOURNAL , 2025 , 12 (5) , 5891-5900 . |
Abstract :
Image super-resolution networks are commonly trained on datasets constructed with bicubic downsampling, but because this degradation model is fixed, the resulting networks generalize poorly and cannot handle real-world low-resolution images. To address this, a preprocessing module is proposed: combined with a network trained on bicubic-downsampled data, it improves generalization while reducing resource consumption. In addition, a feature-learning training strategy and a multi-task joint-tuning strategy are designed for different accuracy requirements; choosing the strategy that matches the requirement meets the accuracy target while consuming little computation, training quickly, and applying broadly. Experiments show that the network augmented with the preprocessing module trades a small increase in parameters for considerable gains in reconstruction quality and perceptual quality, with further accuracy gains obtained through the different strategies.
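The idea of prepending a trainable preprocessing module to a frozen, bicubic-trained SR network can be sketched as follows; the layer count and all names are assumptions, not the paper's architecture.

```python
import torch.nn as nn

class PreprocessSketch(nn.Module):
    """Illustrative preprocessing module: a few conv layers map a
    real-world LR image toward the bicubic-degradation domain, then a
    frozen, bicubic-trained SR network (sr_net) upsamples it."""
    def __init__(self, sr_net, ch=64):
        super().__init__()
        self.pre = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 3, 3, padding=1))
        self.sr_net = sr_net
        for p in self.sr_net.parameters():   # train only the preprocessing
            p.requires_grad = False

    def forward(self, lr_real):
        return self.sr_net(self.pre(lr_real))
```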
Keyword :
Multi-task learning; Computer vision; Super-resolution; Preprocessing module
Cite:
GB/T 7714 | 林旭锋 , 吴丽君 . 简化退化模型的真实图像超分辨率网络 [J]. | 网络安全与数据治理 , 2024 , 43 (3) : 34-39 . |
MLA | 林旭锋 et al. "简化退化模型的真实图像超分辨率网络" . | 网络安全与数据治理 43 . 3 (2024) : 34-39 . |
APA | 林旭锋 , 吴丽君 . 简化退化模型的真实图像超分辨率网络 . | 网络安全与数据治理 , 2024 , 43 (3) , 34-39 . |
Abstract :
Video super-resolution recovers high-resolution images from multiple low-resolution images, and recurrent structures are a common framework choice for video super-resolution tasks. BasicVSR employs bidirectional propagation and feature alignment to efficiently utilize information from the entire input video. In this work, we improve the performance of the network by revisiting the role of the various modules in BasicVSR and redesigning the network. First, after optical-flow warping, a reference-based feature enrichment module maintains centralized communication with the reference frame, which helps handle complex motion; at the same time, the neighborhood of each selected keyframe is divided into two regions according to the degree of motion deviation of adjacent frames relative to the keyframe, and models with different receptive fields are adopted for feature extraction to further alleviate the accumulation of alignment errors. In the feature correction module, we replace the simple stack of residual blocks with a residual-in-residual (RIR) structure and fuse features from different levels with each other, making the final feature information more comprehensive and abundant. In addition, dense connections are introduced in the reconstruction module to promote full use of hierarchical feature information for better reconstruction. Experiments on two public datasets, Vid4 and REDS4, show that compared with BasicVSR, the PSNR of the proposed model improves by 0.27 dB and 0.33 dB, respectively. Moreover, in terms of visual perception, the model effectively improves image clarity and reduces artifacts.
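The optical-flow alignment step underlying BasicVSR-style propagation is typically implemented by warping neighbor-frame features with grid_sample; a minimal sketch, assuming flow given in pixel units:

```python
import torch
import torch.nn.functional as F

def flow_warp(feat, flow):
    """Warp neighbor-frame features toward the reference frame with a
    dense optical flow field. feat: (n, c, h, w); flow: (n, 2, h, w)."""
    n, _, h, w = feat.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack((xs, ys), dim=0).float().to(feat)   # (2, h, w), x first
    coords = grid.unsqueeze(0) + flow                      # displaced coordinates
    # normalize to [-1, 1] as grid_sample expects
    coords_x = 2.0 * coords[:, 0] / max(w - 1, 1) - 1.0
    coords_y = 2.0 * coords[:, 1] / max(h - 1, 1) - 1.0
    sample_grid = torch.stack((coords_x, coords_y), dim=-1)  # (n, h, w, 2)
    return F.grid_sample(feat, sample_grid, align_corners=True)
```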
Keyword :
Bidirectional propagation; Densely connected residual; Feature enrichment module; Time difference; Video super-resolution
Cite:
GB/T 7714 | Wu, Lijun , Ma, Yong , Chen, Zhicong . Dense video super-resolution time-differential network with feature enrichment module [J]. | SIGNAL IMAGE AND VIDEO PROCESSING , 2024 , 18 (11) : 7887-7897 . |
MLA | Wu, Lijun et al. "Dense video super-resolution time-differential network with feature enrichment module" . | SIGNAL IMAGE AND VIDEO PROCESSING 18 . 11 (2024) : 7887-7897 . |
APA | Wu, Lijun , Ma, Yong , Chen, Zhicong . Dense video super-resolution time-differential network with feature enrichment module . | SIGNAL IMAGE AND VIDEO PROCESSING , 2024 , 18 (11) , 7887-7897 . |
Abstract :
To address the incomplete segmentation of large objects and mis-segmentation of tiny objects that commonly affect semantic segmentation algorithms, we propose PACAMNet, a real-time segmentation network based on a short-term dense concatenate module with parallel atrous convolution and attention-based feature fusion. First, parallel atrous convolution is introduced to improve the short-term dense concatenate module: by adjusting the atrous rate, multi-scale semantic information is obtained, ensuring that even the last layer of the module receives rich input feature maps. Second, an attention feature fusion module is proposed that aligns the receptive fields of deep and shallow feature maps via depthwise separable convolutions of different sizes, and a channel attention mechanism generates weights to fuse the deep and shallow feature maps effectively. Finally, experiments on the Cityscapes and CamVid datasets achieve segmentation accuracies of 77.4% and 74.0% at inference speeds of 98.7 FPS and 134.6 FPS, respectively. Compared with other methods, PACAMNet improves inference speed while maintaining higher segmentation accuracy, achieving a better balance between the two.
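The parallel-atrous idea, where branches with different dilation rates run side by side and are concatenated so that later layers still see multi-scale context, can be sketched as below; the rates and the even channel split are assumptions.

```python
import torch
import torch.nn as nn

class ParallelAtrousSketch(nn.Module):
    """Illustrative parallel atrous convolution: each branch uses a
    different dilation rate; outputs are concatenated channel-wise.
    Assumes out_ch is divisible by the number of rates."""
    def __init__(self, in_ch, out_ch, rates=(1, 2, 4)):
        super().__init__()
        branch_ch = out_ch // len(rates)
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(in_ch, branch_ch, 3, padding=r, dilation=r),
                nn.BatchNorm2d(branch_ch), nn.ReLU())
            for r in rates)

    def forward(self, x):
        return torch.cat([b(x) for b in self.branches], dim=1)
```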
Keyword :
Atrous convolution; Attention mechanism; Feature fusion; Real-time semantic segmentation
Cite:
GB/T 7714 | Wu, Lijun , Qiu, Shangdong , Chen, Zhicong . Real-time semantic segmentation network based on parallel atrous convolution for short-term dense concatenate and attention feature fusion [J]. | JOURNAL OF REAL-TIME IMAGE PROCESSING , 2024 , 21 (3) . |
MLA | Wu, Lijun et al. "Real-time semantic segmentation network based on parallel atrous convolution for short-term dense concatenate and attention feature fusion" . | JOURNAL OF REAL-TIME IMAGE PROCESSING 21 . 3 (2024) . |
APA | Wu, Lijun , Qiu, Shangdong , Chen, Zhicong . Real-time semantic segmentation network based on parallel atrous convolution for short-term dense concatenate and attention feature fusion . | JOURNAL OF REAL-TIME IMAGE PROCESSING , 2024 , 21 (3) . |
Abstract :
Person re-identification is a key step in cross-camera tracking. Mainstream methods are mostly pretrained on ImageNet, ignoring the domain gap with re-identification data, and tend to use large multi-branch models of high complexity. This paper designs a person re-identification method that is pretrained under the supervision of noisy labels derived from raw videos, reducing the domain gap and improving feature representation; replaces the skip connections of the residual network with attention-based feature fusion to strengthen feature extraction; embeds a coordinate attention mechanism to emphasize key features and suppress low-contribution features at low complexity; applies random erasing to the input data for augmentation to improve generalization; and supervises training jointly with classification loss, triplet loss, and center loss. Ablation studies on the public Market-1501 and DukeMTMC datasets and comparisons with mainstream methods show that the method reaches high accuracy without requiring a complex multi-branch structure.
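The embedded coordinate attention follows the published mechanism (Hou et al., 2021): pool along height and width separately, encode the two jointly, then re-weight each direction. A minimal sketch, with the reduction ratio as an assumption:

```python
import torch
import torch.nn as nn

class CoordAttentionSketch(nn.Module):
    """Minimal coordinate attention: directional pooling captures
    position-aware context at low cost."""
    def __init__(self, ch, reduction=16):
        super().__init__()
        mid = max(ch // reduction, 8)
        self.encode = nn.Sequential(
            nn.Conv2d(ch, mid, 1), nn.BatchNorm2d(mid), nn.ReLU())
        self.attn_h = nn.Conv2d(mid, ch, 1)
        self.attn_w = nn.Conv2d(mid, ch, 1)

    def forward(self, x):
        n, c, h, w = x.shape
        ph = x.mean(dim=3, keepdim=True)                      # (n, c, h, 1)
        pw = x.mean(dim=2, keepdim=True).transpose(2, 3)      # (n, c, w, 1)
        y = self.encode(torch.cat([ph, pw], dim=2))           # joint encoding
        yh, yw = torch.split(y, [h, w], dim=2)
        ah = torch.sigmoid(self.attn_h(yh))                   # height weights
        aw = torch.sigmoid(self.attn_w(yw.transpose(2, 3)))   # width weights
        return x * ah * aw
```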
Keyword :
Residual network; Attention mechanism; Feature fusion; Person re-identification; Pretraining
Cite:
GB/T 7714 | 南灏 , 吴丽君 . 基于视频预训练和注意力特征融合的行人重识别方法 [J]. | 智能计算机与应用 , 2024 , 14 (1) : 95-101 . |
MLA | 南灏 et al. "基于视频预训练和注意力特征融合的行人重识别方法" . | 智能计算机与应用 14 . 1 (2024) : 95-101 . |
APA | 南灏 , 吴丽君 . 基于视频预训练和注意力特征融合的行人重识别方法 . | 智能计算机与应用 , 2024 , 14 (1) , 95-101 . |
Abstract :
To reduce the volume of data transmitted for power signals, this paper proposes a power-data compression algorithm that combines compressed sensing with LZW coding, further compressing the data and raising the overall compression ratio without degrading reconstruction accuracy. First, various compressed-sensing measurement matrices are simulated and analyzed; a sparse random matrix is then chosen as the measurement matrix, and a hardware implementation that completes the compressed-sensing computation quickly is proposed, designed, and verified. Experiments show that the design runs at up to 200 MHz on an FPGA; the total latency of the whole compression process is about 16.11 μs; and at a reconstruction error of about 4.83%, the data compression ratio is about 36.83%, an improvement of about 13.17% over using compressed sensing alone.
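The pipeline pairs a sparse random measurement matrix (hardware-friendly, since ±1 entries need only additions) with LZW coding of the quantized measurements. A minimal NumPy sketch under those assumptions; the paper's fixed-point format, matrix dimensions, and code widths are not reproduced here.

```python
import numpy as np

def sparse_random_matrix(m, n, density=1/3, seed=0):
    """Sparse random measurement matrix with entries in {0, +1, -1};
    density and seed are assumed values."""
    rng = np.random.default_rng(seed)
    phi = rng.choice([0, 1, -1], size=(m, n),
                     p=[1 - density, density / 2, density / 2])
    return phi.astype(np.float64)

def lzw_encode(data: bytes) -> list[int]:
    """Textbook LZW over a byte stream."""
    table = {bytes([i]): i for i in range(256)}
    w, out = b"", []
    for b in data:
        wb = w + bytes([b])
        if wb in table:
            w = wb
        else:
            out.append(table[w])
            table[wb] = len(table)   # grow the dictionary
            w = bytes([b])
    if w:
        out.append(table[w])
    return out

# usage sketch: y = Phi @ x (compression), quantize, then LZW-pack
x = np.random.default_rng(1).standard_normal(256)   # dummy power signal
phi = sparse_random_matrix(96, 256)
y = phi @ x
codes = lzw_encode(np.round(y * 128).astype(np.int16).tobytes())
```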
Keyword :
LZW coding; Compressed sensing; Power signal
Cite:
GB/T 7714 | 谢宇杰 , 陈志聪 , 吴丽君 . 融合压缩感知和LZW编码的电力数据压缩算法 [J]. | 智能计算机与应用 , 2024 , 14 (1) : 124-129 . |
MLA | 谢宇杰 et al. "融合压缩感知和LZW编码的电力数据压缩算法" . | 智能计算机与应用 14 . 1 (2024) : 124-129 . |
APA | 谢宇杰 , 陈志聪 , 吴丽君 . 融合压缩感知和LZW编码的电力数据压缩算法 . | 智能计算机与应用 , 2024 , 14 (1) , 124-129 . |