Query:
Scholar name: Chen Weiling (陈炜玲)
Abstract :
Sonar images are vital in ocean explorations but face transmission challenges due to limited bandwidth and unstable channels. The Just Noticeable Difference (JND) represents the minimum distortion detectable by human observers. By eliminating perceptual redundancy, JND offers a solution for efficient compression and accurate Image Quality Assessment (IQA) to enable reliable transmission. However, existing JND models prove inadequate for sonar images due to their unique redundancy distributions and the absence of pixel-level annotated data. To bridge these gaps, we propose the first sonar-specific, picture-level JND dataset and a weakly supervised JND model that infers pixel-level JND from picture-level annotations. Our approach starts with pretraining a perceptually lossy/lossless predictor, which collaborates with sonar image properties to drive an unsupervised generator producing Critically Distorted Images (CDIs). These CDIs maximize pixel differences while preserving perceptual fidelity, enabling precise JND map derivation. Furthermore, we systematically investigate JND-guided optimization for sonar image compression and IQA algorithms, demonstrating favorable performance enhancements.
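To make the JND-map derivation concrete, here is a minimal Python sketch assuming the CDI has already been produced by the generator; the function, array names, and toy data are illustrative, not from the paper's code.

```python
# A minimal sketch of the JND-map derivation described above, assuming the
# Critically Distorted Image (CDI) is already available. Names and toy data
# are illustrative, not from the paper's implementation.
import numpy as np

def jnd_map_from_cdi(original: np.ndarray, cdi: np.ndarray) -> np.ndarray:
    """Per-pixel JND estimate: the magnitude of the largest distortion that
    remains perceptually lossless, i.e. |CDI - original| at each pixel."""
    assert original.shape == cdi.shape
    return np.abs(cdi.astype(np.float32) - original.astype(np.float32))

# Example with random stand-in images (8-bit grayscale sonar frames).
rng = np.random.default_rng(0)
original = rng.integers(0, 256, size=(128, 128), dtype=np.uint8)
cdi = np.clip(original + rng.integers(-8, 9, size=(128, 128)), 0, 255).astype(np.uint8)
print(jnd_map_from_cdi(original, cdi).mean())
```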
Keyword :
Just Noticeable Difference (JND); Sonar Image; Underwater Acoustic Transmission; Weakly Supervision
Cite:
GB/T 7714 | Chen, W., Lin, W., Feng, Q., et al. Pixel-Level Just Noticeable Difference in Sonar Images: Modeling and Applications [J]. IEEE Transactions on Circuits and Systems for Video Technology, 2025.
MLA | Chen, W., et al. "Pixel-Level Just Noticeable Difference in Sonar Images: Modeling and Applications." IEEE Transactions on Circuits and Systems for Video Technology (2025).
APA | Chen, W., Lin, W., Feng, Q., Zhang, R., & Zhao, T. Pixel-Level Just Noticeable Difference in Sonar Images: Modeling and Applications. IEEE Transactions on Circuits and Systems for Video Technology, 2025.
Abstract :
In recent decades, the emergence of image applications has greatly facilitated the development of vision-based tasks. As a result, image quality assessment (IQA) has become increasingly significant for monitoring, controlling, and improving visual signal quality. While existing IQA methods focus on image fidelity and aesthetics to characterize perceived quality, it is important to evaluate the utility-centered quality of an image for popular tasks, such as object detection. However, research shows that there is a low correlation between utilities and perceptions. To address this issue, this article proposes a utility-centered IQA approach. Specifically, our research focuses on underwater fish detection as a challenging task in an underwater environment. Based on this task, we have developed a utility-centered underwater image quality database (UIQD) and a transfer learning-based advanced underwater quality by utility assessment (AQUA). Inspired by the top-down design approach used in fidelity-oriented IQA methods, we utilize deep models of object detection and transfer their features to the mission of utility-centered quality evaluation. Experimental results validate that the proposed AQUA achieves promising performance not only in fish detection but also in other tasks such as face recognition. We believe that our research provides valuable insights to bridge the gap between IQA research and visual tasks.
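The transfer step described above (reusing detection features for utility-centered quality regression) can be sketched as follows. This is a hedged PyTorch illustration: the stand-in backbone, feature width, and head are assumptions, not the AQUA architecture.

```python
# A minimal sketch of the transfer-learning idea behind AQUA: freeze a
# detection backbone and train a small regression head for utility-centered
# quality. The backbone below is an illustrative stand-in, not the paper's
# YOLO-based feature extractor.
import torch
import torch.nn as nn

class UtilityIQA(nn.Module):
    def __init__(self, backbone: nn.Module, feat_dim: int):
        super().__init__()
        self.backbone = backbone             # frozen detector features
        for p in self.backbone.parameters():
            p.requires_grad = False
        self.head = nn.Sequential(           # trainable quality regressor
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(feat_dim, 128), nn.ReLU(), nn.Linear(128, 1))

    def forward(self, x):
        return self.head(self.backbone(x))

# Stand-in backbone: a small conv stack in place of a real detector trunk.
backbone = nn.Sequential(nn.Conv2d(3, 32, 3, 2, 1), nn.ReLU(),
                         nn.Conv2d(32, 64, 3, 2, 1), nn.ReLU())
model = UtilityIQA(backbone, feat_dim=64)
score = model(torch.randn(2, 3, 224, 224))   # predicted utility scores
print(score.shape)                            # torch.Size([2, 1])
```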
Keyword :
Convolutional neural networks; Databases; Feature extraction; Image color analysis; Image quality; Image quality assessment (IQA); Neck; Quality assessment; Training; Transformers; underwater images; utility-centered IQA; YOLO
Cite:
GB/T 7714 | Chen, Weiling, Liao, Honggang, Lin, Rongfu, et al. Utility-Centered Underwater Image Quality Evaluation [J]. IEEE Journal of Oceanic Engineering, 2025, 50(2): 743-757.
MLA | Chen, Weiling, et al. "Utility-Centered Underwater Image Quality Evaluation." IEEE Journal of Oceanic Engineering 50.2 (2025): 743-757.
APA | Chen, Weiling, Liao, Honggang, Lin, Rongfu, Zhao, Tiesong, Gu, Ke, & Le Callet, Patrick. Utility-Centered Underwater Image Quality Evaluation. IEEE Journal of Oceanic Engineering, 2025, 50(2), 743-757.
Abstract :
Video compression artifact removal focuses on enhancing the visual quality of compressed videos by mitigating visual distortions. However, existing methods often struggle to effectively capture spatio-temporal features and recover high-frequency details, due to their suboptimal adaptation to the characteristics of compression artifacts. To overcome these limitations, we propose a novel Spatio-Temporal and Frequency Fusion (STFF) framework. STFF incorporates three key components: Feature Extraction and Alignment (FEA), which employs SRU for effective spatiotemporal feature extraction; Bidirectional High-Frequency Enhanced Propagation (BHFEP), which integrates HCAB to restore high-frequency details through bidirectional propagation; and Residual High-Frequency Refinement (RHFR), which further enhances high-frequency information. Extensive experiments demonstrate that STFF achieves superior performance compared to state-of-the-art methods in both objective metrics and subjective visual quality, effectively addressing the challenges posed by video compression artifacts. Trained model available: https://github.com/Stars-WMX/STFF.
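As a rough illustration of bidirectional propagation in the spirit of BHFEP, the Python sketch below runs a forward and a backward recurrent pass over per-frame features and merges the two. The plain conv cell is a stand-in for the paper's SRU/HCAB modules, which are not reproduced here.

```python
# A heavily simplified sketch of bidirectional feature propagation over
# compressed-video frames. The conv cells below are illustrative stand-ins,
# not the paper's SRU or HCAB components.
import torch
import torch.nn as nn

class BiPropagate(nn.Module):
    def __init__(self, c: int):
        super().__init__()
        self.fwd = nn.Conv2d(2 * c, c, 3, 1, 1)  # fuse state + current frame
        self.bwd = nn.Conv2d(2 * c, c, 3, 1, 1)

    def forward(self, feats):                     # feats: (T, N, C, H, W)
        T = feats.shape[0]
        state = torch.zeros_like(feats[0])
        forward_out = []
        for t in range(T):                        # forward pass over time
            state = torch.relu(self.fwd(torch.cat([state, feats[t]], dim=1)))
            forward_out.append(state)
        state = torch.zeros_like(feats[0])
        out = [None] * T
        for t in reversed(range(T)):              # backward pass over time
            state = torch.relu(self.bwd(torch.cat([state, feats[t]], dim=1)))
            out[t] = forward_out[t] + state       # merge both directions
        return torch.stack(out)

print(BiPropagate(8)(torch.randn(5, 1, 8, 32, 32)).shape)
```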
Keyword :
Degradation; Feature extraction; Image coding; Image restoration; Motion compensation; Optical flow; Quality assessment; Spatiotemporal phenomena; Transformers; video coding; Video compression; Video compression artifact removal; video enhancement; video quality
Cite:
GB/T 7714 | Wang, Mingxing, Liao, Yipeng, Chen, Weiling, et al. STFF: Spatio-Temporal and Frequency Fusion for Video Compression Artifact Removal [J]. IEEE Transactions on Broadcasting, 2025, 71(2): 542-554.
MLA | Wang, Mingxing, et al. "STFF: Spatio-Temporal and Frequency Fusion for Video Compression Artifact Removal." IEEE Transactions on Broadcasting 71.2 (2025): 542-554.
APA | Wang, Mingxing, Liao, Yipeng, Chen, Weiling, Lin, Liqun, & Zhao, Tiesong. STFF: Spatio-Temporal and Frequency Fusion for Video Compression Artifact Removal. IEEE Transactions on Broadcasting, 2025, 71(2), 542-554.
Abstract :
Unlike vanilla long-tailed recognition, which trains on imbalanced data but assumes a uniform test class distribution, test-agnostic long-tailed recognition aims to handle arbitrary test class distributions. Existing methods require prior knowledge of test sets for post-adjustment through multi-stage training, resulting in static decisions at the dataset level. This pipeline overlooks instance diversity and is impractical in real situations. In this work, we introduce Prototype Alignment with Dedicated Experts (PADE), a one-stage framework for test-agnostic long-tailed recognition. PADE tackles unknown test distributions at the instance level, without depending on test priors. It reformulates the task as a domain detection problem, dynamically adjusting the model for each instance. PADE comprises three main strategies: 1) a parameter customization strategy for multiple experts skilled at different categories; 2) normalized target knowledge distillation for mutual guidance among experts while maintaining diversity; 3) re-balanced compactness learning with momentum prototypes, promoting instance alignment with the corresponding class centroid. We evaluate PADE on various long-tailed recognition benchmarks with diverse test distributions. The results verify its effectiveness in both vanilla and test-agnostic long-tailed recognition.
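Strategy 3) can be illustrated with a short sketch. The Python code below keeps an exponential-moving-average prototype per class and pulls normalized instance features toward their class centroid; the momentum value and cosine-based loss form are assumptions, not the paper's exact formulation.

```python
# A minimal sketch of momentum prototypes for re-balanced compactness
# learning: each class keeps an EMA prototype, and instance features are
# pulled toward their class centroid. Hyperparameters are illustrative.
import torch
import torch.nn.functional as F

def update_prototypes(protos, feats, labels, m=0.9):
    """EMA update of per-class prototypes with the batch's features."""
    for c in labels.unique():
        mean_feat = feats[labels == c].mean(dim=0)
        protos[c] = m * protos[c] + (1 - m) * mean_feat
    return protos

def compactness_loss(protos, feats, labels):
    """Pull each (normalized) feature toward its class prototype."""
    p = F.normalize(protos[labels], dim=1)
    f = F.normalize(feats, dim=1)
    return (1 - (p * f).sum(dim=1)).mean()   # 1 - cosine similarity

protos = torch.zeros(10, 64)                  # 10 classes, 64-d features
feats, labels = torch.randn(32, 64), torch.randint(0, 10, (32,))
protos = update_prototypes(protos, feats, labels)
print(compactness_loss(protos, feats, labels))
```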
Keyword :
Long-tailed classification; prototypical learning; test-agnostic recognition
Cite:
GB/T 7714 | Guo, Chen, Chen, Weiling, Huang, Aiping, et al. Prototype Alignment With Dedicated Experts for Test-Agnostic Long-Tailed Recognition [J]. IEEE Transactions on Multimedia, 2025, 27: 455-465.
MLA | Guo, Chen, et al. "Prototype Alignment With Dedicated Experts for Test-Agnostic Long-Tailed Recognition." IEEE Transactions on Multimedia 27 (2025): 455-465.
APA | Guo, Chen, Chen, Weiling, Huang, Aiping, & Zhao, Tiesong. Prototype Alignment With Dedicated Experts for Test-Agnostic Long-Tailed Recognition. IEEE Transactions on Multimedia, 2025, 27, 455-465.
Abstract :
Sonar imaging systems play a crucial role in ocean exploration since they can overcome the limitations of light conditions. However, the challenge of low resolution remains in sonar images (SIs) due to sonar imaging characteristics and varying compression for low-bandwidth transmission. Most existing image super-resolution (SR) methods treat structure and texture in the same way, thus failing to simultaneously capture rich global-local information. Nevertheless, both structure and texture are essential for the visual quality and applications of SIs. In this study, we propose a structure-texture dual-preserving network (STDPNet) tailored to capture both local texture details and global structure in parallel for sonar image SR (SISR). To further explore the internal correlation between structure and texture features, a feature interaction strategy is introduced. Moreover, conventional loss functions for SR often yield overly smooth results. We propose a hybrid loss function with spectral and local gradient-aware components to preserve frequency content and enhance texture detail. Experimental results validate the superior performance of the proposed STDPNet.
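A minimal sketch of such a hybrid loss is given below, combining a pixel term with spectral and local-gradient terms; the weights and exact formulations are assumptions, and the paper's loss may differ in detail.

```python
# A minimal sketch of a hybrid SR loss with spectral and local-gradient
# components. Term weights and forms are illustrative assumptions.
import torch
import torch.nn.functional as F

def hybrid_loss(sr, hr, w_spec=0.1, w_grad=0.1):
    pixel = F.l1_loss(sr, hr)                        # base fidelity term
    spec = F.l1_loss(torch.fft.rfft2(sr).abs(),      # frequency-content term
                     torch.fft.rfft2(hr).abs())
    def grads(x):                                    # local gradient maps
        return (x[..., :, 1:] - x[..., :, :-1],
                x[..., 1:, :] - x[..., :-1, :])
    sgx, sgy = grads(sr)
    hgx, hgy = grads(hr)
    grad = F.l1_loss(sgx, hgx) + F.l1_loss(sgy, hgy)
    return pixel + w_spec * spec + w_grad * grad

print(hybrid_loss(torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64)))
```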
Keyword :
Convolutional neural networks; Feature extraction; Generative adversarial networks; Global-local features interaction; hybrid loss; Image reconstruction; Imaging; Silicon; Sonar; sonar image super-resolution (SISR); Superresolution; Transfer learning; Transformers
Cite:
GB/T 7714 | Wang, Mingjie, Chen, Weiling, Lan, Fengquan, et al. Sonar Image Super-Resolution Based on Structure-Texture Dual Preservation [J]. IEEE Transactions on Geoscience and Remote Sensing, 2025, 63.
MLA | Wang, Mingjie, et al. "Sonar Image Super-Resolution Based on Structure-Texture Dual Preservation." IEEE Transactions on Geoscience and Remote Sensing 63 (2025).
APA | Wang, Mingjie, Chen, Weiling, Lan, Fengquan, Junejo, Naveed Ur Rehman, & Zhao, Tiesong. Sonar Image Super-Resolution Based on Structure-Texture Dual Preservation. IEEE Transactions on Geoscience and Remote Sensing, 2025, 63.
Abstract :
The popularity of 360° video stems from its realistic and immersive experience, but its higher resolution poses challenges for data transmission and storage. Existing compression schemes for 360° videos mainly focus on eliminating spatial and temporal redundancy, neglecting the removal of visual perception redundancy. To address this issue, we exploit the visual characteristics of 360° equirectangular projection to extend the popular Just Noticeable Difference model to a Spherical Just Noticeable Difference. Our modeling takes advantage of the following factors: a regional masking factor, which employs entropy-based region classification and separately characterizes contrast masking effects in different regions; latitude projection characteristics, which model the impact of pixel-level warping during equirectangular projection mapping; and a field-of-view attention factor, which reflects the attention variation of the human visual system on 360° displays. Subjective tests show that our Spherical Just Noticeable Difference model is consistent with user perception and also tolerates greater distortion, reducing the bit rates of 360° pictures. Further experiments on Versatile Video Coding also demonstrate that introducing the proposed model significantly reduces bit rates with negligible loss in perceived visual quality.
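The latitude factor admits a compact illustration: equirectangular projection stretches pixels toward the poles by roughly 1/cos(latitude), so JND thresholds there can be scaled up. The Python sketch below applies only this row-wise factor; the paper's full model additionally combines masking and field-of-view factors.

```python
# A minimal sketch of the latitude-projection factor for an equirectangular
# (ERP) image: rows near the poles are stretched, so their JND thresholds
# can tolerate more distortion. The cos(latitude) ratio is the standard ERP
# stretch; everything else here is a toy illustration.
import numpy as np

def latitude_weight(height: int) -> np.ndarray:
    """Per-row stretch factor for an ERP image of the given height."""
    lat = (np.arange(height) + 0.5) / height * np.pi - np.pi / 2  # [-pi/2, pi/2]
    return 1.0 / np.maximum(np.cos(lat), 1e-3)  # more tolerance near poles

base_jnd = np.full((8, 16), 4.0)                # a flat base JND map (toy)
sjnd = base_jnd * latitude_weight(8)[:, None]   # scale row-wise by latitude
print(sjnd[:, 0])                               # larger thresholds at poles
```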
Keyword :
Just Noticeable Difference (JND); Video coding; Video quality assessment; Visual attention
Cite:
GB/T 7714 | Lin, Liqun, Wang, Yanting, Liu, Jiaqi, et al. SJND: A Spherical Just Noticeable Difference Modelling for 360° Video Coding [J]. Signal Processing: Image Communication, 2025, 138.
MLA | Lin, Liqun, et al. "SJND: A Spherical Just Noticeable Difference Modelling for 360° Video Coding." Signal Processing: Image Communication 138 (2025).
APA | Lin, Liqun, Wang, Yanting, Liu, Jiaqi, Wei, Hongan, Chen, Bo, Chen, Weiling, et al. SJND: A Spherical Just Noticeable Difference Modelling for 360° Video Coding. Signal Processing: Image Communication, 2025, 138.
Abstract :
Sonar technology has been widely used in underwater surface mapping and remote object detection for its light-independent characteristics. Recently, the boom in artificial intelligence has further advanced sonar image (SI) processing and understanding techniques. However, intricate marine environments and diverse nonlinear postprocessing operations may degrade the quality of SIs, impeding accurate interpretation of underwater information. Efficient image quality assessment (IQA) methods are crucial for quality monitoring in sonar imaging and processing. Existing IQA methods overlook the unique characteristics of SIs or focus solely on typical distortions in specific scenarios, which limits their generalization capability. In this article, we propose a unified sonar IQA method that overcomes the challenges posed by diverse distortions. Although degradation conditions vary, ideal SIs consistently require certain properties that are task-centered and exhibit attribute consistency. We derive a comprehensive set of quality attributes from both the task background and visual content of SIs. These attribute features are represented in just ten dimensions and ultimately mapped to the quality score. To validate the effectiveness of our method, we construct the first comprehensive SI dataset. Experimental results demonstrate the superior performance and robustness of the proposed method.
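The final mapping stage (ten attribute dimensions to one score) is simple enough to sketch. The small MLP below is an assumed stand-in for the paper's mapper, shown only to make the data flow concrete.

```python
# A minimal sketch of the final mapping stage: a ten-dimensional attribute
# feature vector regressed to a single quality score. The MLP is an
# illustrative assumption, not the paper's exact mapper.
import torch
import torch.nn as nn

quality_mapper = nn.Sequential(
    nn.Linear(10, 32), nn.ReLU(),   # 10-d attribute features in
    nn.Linear(32, 1))               # scalar quality score out

attrs = torch.rand(4, 10)           # stand-in attribute features for 4 SIs
print(quality_mapper(attrs).squeeze(1))
```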
Keyword :
Attribute consistency; Degradation; Distortion; Image quality; image quality assessment (IQA); Imaging; Noise; Nonlinear distortion; no-reference (NR); Quality assessment; Silicon; Sonar; sonar imaging and processing; Sonar measurements
Cite:
GB/T 7714 | Cai, Boqin, Chen, Weiling, Zhang, Jianghe, et al. Unified No-Reference Quality Assessment for Sonar Imaging and Processing [J]. IEEE Transactions on Geoscience and Remote Sensing, 2025, 63.
MLA | Cai, Boqin, et al. "Unified No-Reference Quality Assessment for Sonar Imaging and Processing." IEEE Transactions on Geoscience and Remote Sensing 63 (2025).
APA | Cai, Boqin, Chen, Weiling, Zhang, Jianghe, Junejo, Naveed Ur Rehman, & Zhao, Tiesong. Unified No-Reference Quality Assessment for Sonar Imaging and Processing. IEEE Transactions on Geoscience and Remote Sensing, 2025, 63.
Abstract :
Due to the variability of network environments, video playback is prone to stalling and bitrate fluctuations, which severely degrade the end user's Quality of Experience (QoE). To optimize network resource allocation and improve the viewing experience, accurate video quality assessment is essential. Existing video quality assessment methods mainly target short videos and generally focus on the characteristics of human visual perception, paying little attention to human memory's capacity to store and represent visual information, or to the interaction between visual perception and memory. When users watch long videos, however, quality must be evaluated dynamically: memory factors have to be introduced alongside perceptual ones. To better assess the quality of long videos, this paper adopts deep network models to investigate how perceptual and memory characteristics affect the viewing experience, and proposes a dynamic quality assessment model for long videos based on both. First, we design subjective experiments to explore how visual perception and human memory characteristics affect QoE under different video playback modes, and construct a Video Quality Database with Perception and Memory (PAM-VQD). Second, based on PAM-VQD, we use deep learning combined with a visual attention mechanism to extract deep perceptual features and accurately assess the influence of perception on QoE. Finally, the perceptual quality score output by the front-end network, the playback state, and the interval since the last stall are fed as three features into a long short-term memory network to model the temporal dependency between visual perception and memory. Experimental results show that the proposed model accurately predicts QoE under different playback modes and generalizes well.
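The temporal stage described above maps naturally to a small recurrent model. The Python sketch below feeds the three per-step features (front-end perceptual score, playback state, and stall interval) into an LSTM that emits a running QoE estimate; feature encoding and sizes are illustrative assumptions.

```python
# A minimal sketch of the LSTM stage: per time step it consumes the
# perceptual quality score, the playback state, and the time since the last
# stall, and outputs a running QoE estimate. Sizes are illustrative.
import torch
import torch.nn as nn

class QoELSTM(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=3, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, seq):            # seq: (N, T, 3)
        out, _ = self.lstm(seq)
        return self.head(out)          # per-step QoE score: (N, T, 1)

# 2 sessions, 10 steps: [perceptual score, playing=1/stalled=0, stall gap]
seq = torch.rand(2, 10, 3)
print(QoELSTM()(seq).shape)            # torch.Size([2, 10, 1])
```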
Keyword :
Quality of Experience; attention mechanism; deep learning; visual perception characteristics; memory effect
Cite:
GB/T 7714 | Lin, Liqun, Ji, Shuyi, He, Jiachen, et al. Dynamic Video Quality Assessment Based on Perception and Memory (基于感知和记忆的视频动态质量评价) [J]. Acta Electronica Sinica (电子学报), 2024.
MLA | Lin, Liqun, et al. "Dynamic Video Quality Assessment Based on Perception and Memory (基于感知和记忆的视频动态质量评价)." Acta Electronica Sinica (电子学报) (2024).
APA | Lin, Liqun, Ji, Shuyi, He, Jiachen, Zhao, Tiesong, Chen, Weiling, & Guo, Zongming. Dynamic Video Quality Assessment Based on Perception and Memory (基于感知和记忆的视频动态质量评价). Acta Electronica Sinica (电子学报), 2024.
Abstract :
Due to the light-independent imaging characteristics, sonar images play a crucial role in fields such as underwater detection and rescue. However, the resolution of sonar images is negatively correlated with the imaging distance. To overcome this limitation, Super-Resolution (SR) techniques have been introduced into sonar image processing. Nevertheless, it is not always guaranteed that SR maintains the utility of the image. Therefore, quantifying the utility of SR reconstructed Sonar Images (SRSIs) can facilitate their optimization and usage. Existing Image Quality Assessment (IQA) methods are inadequate for evaluating SRSIs as they fail to consider both the unique characteristics of sonar images and reconstruction artifacts while meeting task requirements. In this paper, we propose a Perception-and-Cognition-inspired quality Assessment method for Sonar image Super-resolution (PCASS). Our approach incorporates a hierarchical feature fusion-based framework inspired by the cognitive process in the human brain to comprehensively evaluate SRSIs' quality under object recognition tasks. Additionally, we select features at each level considering visual perception characteristics introduced by SR reconstruction artifacts such as texture abundance, contour details, and semantic information to measure image quality accurately. Importantly, our method does not require training data and is suitable for scenarios with limited available images. Experimental results validate its superior performance.
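Because the method is training-free, its hierarchical fusion can be caricatured in a few lines: level-wise quality cues are computed per image and combined without learned parameters. The cue functions and equal weights below are toy stand-ins, not the paper's hand-crafted features.

```python
# A toy sketch of training-free hierarchical fusion: quality cues at the
# texture, contour, and semantic levels are computed per image and combined
# with fixed weights. All three cue functions are illustrative placeholders.
import numpy as np

def texture_cue(img):   return float(np.std(img))                      # texture abundance proxy
def contour_cue(img):   return float(np.mean(np.abs(np.gradient(img)[0])))  # edge strength proxy
def semantic_cue(img):  return float(img.mean())                       # toy placeholder

def pcass_like_score(img, weights=(1/3, 1/3, 1/3)):
    cues = (texture_cue(img), contour_cue(img), semantic_cue(img))
    return float(np.dot(weights, cues))   # fixed-weight fusion, no training

print(pcass_like_score(np.random.rand(64, 64)))
```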
Keyword :
hierarchical feature fusion; image quality assessment (IQA); Sonar image; super-resolution (SR); task-oriented
Cite:
GB/T 7714 | Chen, Weiling, Cai, Boqin, Zheng, Sumei, et al. Perception-and-Cognition-Inspired Quality Assessment for Sonar Image Super-Resolution [J]. IEEE Transactions on Multimedia, 2024, 26: 6398-6410.
MLA | Chen, Weiling, et al. "Perception-and-Cognition-Inspired Quality Assessment for Sonar Image Super-Resolution." IEEE Transactions on Multimedia 26 (2024): 6398-6410.
APA | Chen, Weiling, Cai, Boqin, Zheng, Sumei, Zhao, Tiesong, & Gu, Ke. Perception-and-Cognition-Inspired Quality Assessment for Sonar Image Super-Resolution. IEEE Transactions on Multimedia, 2024, 26, 6398-6410.
Abstract :
Super-Resolution (SR) algorithms aim to enhance the resolutions of images. Massive deep-learning-based SR techniques have emerged in recent years. In such cases, a visually appealing output may contain additional details compared with its reference image. Accordingly, full-reference Image Quality Assessment (IQA) cannot work well; however, reference information remains essential for evaluating the qualities of SR images. This poses a challenge to SR-IQA: how to balance the referenced and no-reference scores for user perception? In this paper, we propose a Perception-driven Similarity-Clarity Tradeoff (PSCT) model for SR-IQA. Specifically, we investigate this problem from both referenced and no-reference perspectives, and design two deep-learning-based modules to obtain referenced and no-reference scores. We present a theoretical analysis, based on Human Visual System (HVS) properties, of their tradeoff and also calculate adaptive weights for them. Experimental results indicate that our PSCT model is superior to state-of-the-art methods on SR-IQA. In addition, the proposed PSCT model is also capable of evaluating quality scores in other image enhancement scenarios, such as deraining, dehazing and underwater image enhancement. The source code is available at https://github.com/kekezhang112/PSCT.
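The tradeoff itself reduces to an adaptive convex blend of the two scores. The sketch below uses a similarity-driven weight as an illustrative assumption; the paper derives its weights from HVS properties rather than this simple rule.

```python
# A minimal sketch of the similarity-clarity tradeoff: a referenced score and
# a no-reference score are blended with an adaptive weight. The weight rule
# (trust the reference more when the SR output stays close to it) is an
# illustrative assumption, not the paper's derived HVS weighting.
import numpy as np

def psct_like_score(q_ref: float, q_nr: float, similarity: float) -> float:
    w = np.clip(similarity, 0.0, 1.0)   # adaptive weight from image similarity
    return w * q_ref + (1.0 - w) * q_nr

print(psct_like_score(q_ref=0.8, q_nr=0.6, similarity=0.7))
```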
Keyword :
Adaptation models; Distortion; Feature extraction; Image quality assessment; image super-resolution; Measurement; perception-driven; Quality assessment; similarity-clarity tradeoff; Superresolution; Task analysis
Cite:
GB/T 7714 | Zhang, Keke, Zhao, Tiesong, Chen, Weiling, et al. Perception-Driven Similarity-Clarity Tradeoff for Image Super-Resolution Quality Assessment [J]. IEEE Transactions on Circuits and Systems for Video Technology, 2024, 34(7): 5897-5907.
MLA | Zhang, Keke, et al. "Perception-Driven Similarity-Clarity Tradeoff for Image Super-Resolution Quality Assessment." IEEE Transactions on Circuits and Systems for Video Technology 34.7 (2024): 5897-5907.
APA | Zhang, Keke, Zhao, Tiesong, Chen, Weiling, Niu, Yuzhen, Hu, Jinsong, & Lin, Weisi. Perception-Driven Similarity-Clarity Tradeoff for Image Super-Resolution Quality Assessment. IEEE Transactions on Circuits and Systems for Video Technology, 2024, 34(7), 5897-5907.