Query:
Scholar name: 陈炜玲 (Chen, Weiling)
Abstract :
Due to the variability of network environments, video playback is prone to stalling, bitrate fluctuations, and similar impairments, which severely degrade the Quality of Experience (QoE) of end users. Accurate video quality assessment is therefore essential for optimizing network resource allocation and improving the viewing experience. Existing video quality assessment methods mainly target short videos and generally focus on the characteristics of human visual perception, paying little attention to how human memory stores and expresses visual information, or to the interaction between visual perception and memory. When users watch long videos, quality must be assessed dynamically, incorporating memory factors in addition to perceptual ones. To better assess the quality of long videos, this paper introduces a deep network model, investigates in depth how perceptual and memory characteristics affect the viewing experience, and proposes a dynamic quality assessment model for long videos based on both. First, we design subjective experiments to explore how visual perception and human memory affect QoE under different video playback modes, and construct a Video Quality Database with Perception and Memory (PAM-VQD). Second, based on PAM-VQD, we adopt a deep learning approach combined with a visual attention mechanism to extract deep perceptual features of videos, so as to accurately assess the influence of perception on QoE. Finally, the perceptual quality scores output by the front-end network, the playback state, and the interval since the last stalling event are fed as three features into a Long Short-Term Memory (LSTM) network to model the temporal dependency between visual perception and memory. Experimental results show that the proposed quality assessment model accurately predicts QoE under different video playback modes and generalizes well.
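The final fusion stage described in this abstract (perceptual quality score, playback state, and stalling interval fed into an LSTM that outputs a QoE estimate) can be sketched as follows. This is a minimal illustration only: the hidden size, random weights, and linear readout are assumptions for brevity, not the paper's actual architecture.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class TinyLSTM:
    """Minimal LSTM mapping a sequence of 3 features
    (perceptual score, playback state, interval since last stall)
    to a single QoE estimate. All sizes and weights are illustrative."""
    def __init__(self, in_dim=3, hid=8, seed=0):
        rng = np.random.default_rng(seed)
        self.hid = hid
        # Stacked weights for the input, forget, cell, and output gates.
        self.W = rng.normal(0, 0.1, (4 * hid, in_dim + hid))
        self.b = np.zeros(4 * hid)
        self.w_out = rng.normal(0, 0.1, hid)

    def forward(self, seq):
        h = np.zeros(self.hid)
        c = np.zeros(self.hid)
        for x in seq:  # one feature vector per time step
            z = self.W @ np.concatenate([x, h]) + self.b
            i, f, g, o = np.split(z, 4)
            i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
            c = f * c + i * np.tanh(g)   # memory carries past quality impressions
            h = o * np.tanh(c)
        return float(self.w_out @ h)     # scalar QoE estimate

# Hypothetical 5-step session: [perceptual score, playing flag, seconds since stall]
seq = np.array([[4.2, 1, 30], [3.8, 1, 40], [1.0, 0, 0], [3.5, 1, 5], [4.0, 1, 15]])
qoe = TinyLSTM().forward(seq)
```

The recurrent cell state is what gives the model its "memory effect": a stall early in the sequence can still depress the final score many steps later.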
Keyword :
Quality of Experience; attention mechanism; deep learning; visual perception characteristics; memory effect
Cite:
Copy from the list or export to your reference manager.
GB/T 7714: 林丽群, 暨书逸, 何嘉晨, et al. 基于感知和记忆的视频动态质量评价 [J]. 电子学报, 2024.
MLA: 林丽群, et al. "基于感知和记忆的视频动态质量评价." 电子学报 (2024).
APA: 林丽群, 暨书逸, 何嘉晨, 赵铁松, 陈炜玲, 郭宗明. 基于感知和记忆的视频动态质量评价. 电子学报, 2024.
Abstract :
Face Super-Resolution (FSR) plays a crucial role in enhancing low-resolution face images, which is essential for various face-related tasks. However, FSR may alter individuals' identities or introduce artifacts that affect recognizability. This problem has not been well assessed by existing Image Quality Assessment (IQA) methods. In this paper, we present both subjective and objective evaluations for FSR-IQA, resulting in a benchmark dataset and a reduced-reference quality metric, respectively. First, we incorporate a novel criterion of identity preservation and recognizability to develop our Face Super-resolution Quality Dataset (FSQD). Second, we analyze the correlation between identity preservation and recognizability, and investigate effective feature extractions for both. Third, we propose a training-free IQA framework called Face Identity and Recognizability Evaluation of Super-resolution (FIRES). Experimental results on FSQD demonstrate that FIRES achieves competitive performance.
Keyword :
Biometrics; Face recognition; face super-resolution; Feature extraction; identity preservation; Image quality; Image recognition; Image reconstruction; Measurement; quality assessment; recognizability; Superresolution
Cite:
GB/T 7714: Chen, W., Lin, W., Xu, X., et al. Face Super-Resolution Quality Assessment Based On Identity and Recognizability [J]. IEEE Transactions on Biometrics, Behavior, and Identity Science, 2024, 6(3): 1-1.
MLA: Chen, W., et al. "Face Super-Resolution Quality Assessment Based On Identity and Recognizability." IEEE Transactions on Biometrics, Behavior, and Identity Science 6.3 (2024): 1-1.
APA: Chen, W., Lin, W., Xu, X., Lin, L., Zhao, T. Face Super-Resolution Quality Assessment Based On Identity and Recognizability. IEEE Transactions on Biometrics, Behavior, and Identity Science, 2024, 6(3), 1-1.
Abstract :
Underwater images serve as crucial mediums for conveying marine information. Nevertheless, due to the inherent complexity of the underwater environment, underwater images often suffer from various quality degradation phenomena such as color deviation, low contrast, and non-uniform illumination. These degraded underwater images fail to meet the requirements of underwater computer vision applications. Consequently, effective quality optimization of underwater images is of paramount research and analytical value. Based on whether they rely on underwater physical imaging models, underwater image quality optimization techniques can be categorized into underwater image enhancement and underwater image restoration methods. This paper provides a comprehensive review of underwater image enhancement and restoration algorithms, accompanied by a brief introduction to the underwater imaging model. Then, we systematically analyze publicly available underwater image datasets and commonly used quality assessment methodologies. Furthermore, extensive experimental comparisons are carried out to assess the performance of underwater image optimization algorithms and their practical impact on high-level vision tasks. Finally, the challenges and future development trends in this field are discussed. We hope that the efforts made in this paper will provide valuable references for future research and contribute to the innovative advancement of underwater image optimization.
Keyword :
Image quality assessment; Underwater image datasets; Underwater image enhancement; Underwater image restoration
Cite:
GB/T 7714: Wang, Mingjie, Zhang, Keke, Wei, Hongan, et al. Underwater image quality optimization: Researches, challenges, and future trends [J]. IMAGE AND VISION COMPUTING, 2024, 146.
MLA: Wang, Mingjie, et al. "Underwater image quality optimization: Researches, challenges, and future trends." IMAGE AND VISION COMPUTING 146 (2024).
APA: Wang, Mingjie, Zhang, Keke, Wei, Hongan, Chen, Weiling, Zhao, Tiesong. Underwater image quality optimization: Researches, challenges, and future trends. IMAGE AND VISION COMPUTING, 2024, 146.
Abstract :
Smart oceanic exploration has greatly benefitted from AI-driven underwater image and video processing. However, the volume of underwater video content is constrained by narrow-band, time-varying underwater acoustic channels. How to support high-utility video transmission at such limited capacity is still an open issue. In this article, we propose a Flexible Underwater Video Codec (FUVC) with separate designs for targets-of-interest regions and backgrounds. The encoder locates all targets of interest, compresses their corresponding regions with x.265, and, if bandwidth allows, compresses the background at a lower bitrate. The decoder reconstructs both streams, identifies clean targets of interest, and fuses them with the background via a mask detection and background recovery (MDBR) network. When the background stream is unavailable, the decoder adapts all targets of interest to a virtual background via Poisson blending. Experimental results show that FUVC outperforms other codecs, achieving a lower bitrate at the same quality, and offers codec flexibility for underwater acoustic channels. The database and the source code are available at https://github.com/z21110008/FUVC.
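The Poisson-blending fallback mentioned in this abstract (pasting decoded targets onto a virtual background when the background stream is missing) can be illustrated with a minimal gradient-domain solver. This Jacobi-iteration sketch is the textbook formulation of Poisson image editing, not FUVC's implementation; the grid sizes and iteration count are arbitrary.

```python
import numpy as np

def poisson_blend(src, dst, mask, iters=500):
    """Textbook Poisson blend (Jacobi sketch): solve for interior pixels so
    their discrete Laplacian matches the source's, with boundary values
    taken from the destination. Starts from the naive paste; relaxation
    then removes the visible seam."""
    out = dst.copy()
    out[mask] = src[mask]                      # naive paste as initial guess
    lap = np.zeros_like(src)
    # Discrete Laplacian of the source (the guidance field).
    lap[1:-1, 1:-1] = (4 * src[1:-1, 1:-1] - src[:-2, 1:-1] - src[2:, 1:-1]
                       - src[1:-1, :-2] - src[1:-1, 2:])
    inner = mask.copy()
    inner[0, :] = inner[-1, :] = inner[:, 0] = inner[:, -1] = False
    for _ in range(iters):
        nb = (np.roll(out, 1, 0) + np.roll(out, -1, 0)
              + np.roll(out, 1, 1) + np.roll(out, -1, 1))
        out[inner] = (nb[inner] + lap[inner]) / 4.0
    return out

# Flat target patch pasted onto a flat virtual background: with zero source
# gradients, the blended interior relaxes to the background level (seamless).
dst = np.full((16, 16), 0.2)
src = np.full((16, 16), 0.8)
mask = np.zeros((16, 16), bool)
mask[4:12, 4:12] = True
blended = poisson_blend(src, dst, mask)
```

In practice a library routine (e.g. OpenCV's `seamlessClone`) would replace this hand-rolled solver; the sketch only shows why the pasted target matches the virtual background's tone at the seam.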
Keyword :
Ocean exploration; smart oceans; underwater image processing; video coding; video compression
Cite:
GB/T 7714: Zheng, Yannan, Luo, Jiawei, Chen, Weiling, et al. FUVC: A Flexible Codec for Underwater Video Transmission [J]. IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2024, 62: 18-18.
MLA: Zheng, Yannan, et al. "FUVC: A Flexible Codec for Underwater Video Transmission." IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING 62 (2024): 18-18.
APA: Zheng, Yannan, Luo, Jiawei, Chen, Weiling, Li, Zuoyong, Zhao, Tiesong. FUVC: A Flexible Codec for Underwater Video Transmission. IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2024, 62, 18-18.
Abstract :
The Just Noticeable Difference (JND) model aims to identify perceptual redundancies in images by simulating the perception of the Human Visual System (HVS). Exploring the JND of sonar images is important for the study of their visual properties and related applications. However, there is still room to improve the performance of existing JND models designed for Natural Scene Images (NSIs), and they do not sufficiently consider the characteristics of sonar images. On the other hand, constructing a densely labeled pixel-level JND dataset poses significant challenges. To tackle these issues, we propose a pixel-level JND model based on inexact supervised learning. A perceptually lossy/lossless predictor is first pre-trained on a coarse-grained picture-level JND dataset. This predictor guides an unsupervised generator to produce an image that is perceptually lossless compared with the original image. We then design a loss function to ensure that the generated image is perceptually lossless yet maximally different from the original image. Experimental results show that our model outperforms current models.
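The two-term objective described in this abstract (keep the generated image perceptually lossless while maximizing its pixel-level deviation from the original) can be sketched as a simple loss. The penalty form, the ε stabilizer, and the weight λ below are illustrative assumptions, not the paper's actual loss function.

```python
import numpy as np

def jnd_loss(gen, orig, lossless_prob, lam=0.1):
    """Illustrative JND-style objective:
    - penalize the predictor's doubt that the pair is perceptually lossless,
    - reward (as negative loss) large pixel differences, i.e. redundancy
      that the HVS cannot notice.
    lossless_prob is the pre-trained predictor's P(lossless); lam balances terms."""
    lossy_penalty = -np.log(lossless_prob + 1e-8)    # keep it perceptually lossless
    difference_reward = np.mean(np.abs(gen - orig))  # maximize exploitable difference
    return lossy_penalty - lam * difference_reward

orig = np.zeros((8, 8))
gen_small = orig + 0.01   # tiny perturbation
gen_large = orig + 0.10   # larger perturbation, same predicted losslessness
```

Under equal predictor confidence, the larger (still "lossless") perturbation yields the lower loss, which is exactly the pressure that pushes the generator toward the JND boundary.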
Keyword :
Inexact Supervised Learning; Just Noticeable Difference (JND); Sonar Images
Cite:
GB/T 7714: Feng, Qianxue, Wang, Mingjie, Chen, Weiling, et al. Pixel-Level Sonar Image JND Based on Inexact Supervised Learning [J]. PATTERN RECOGNITION AND COMPUTER VISION, PRCV 2023, PT XI, 2024, 14435: 469-481.
MLA: Feng, Qianxue, et al. "Pixel-Level Sonar Image JND Based on Inexact Supervised Learning." PATTERN RECOGNITION AND COMPUTER VISION, PRCV 2023, PT XI 14435 (2024): 469-481.
APA: Feng, Qianxue, Wang, Mingjie, Chen, Weiling, Zhao, Tiesong, Zhu, Yi. Pixel-Level Sonar Image JND Based on Inexact Supervised Learning. PATTERN RECOGNITION AND COMPUTER VISION, PRCV 2023, PT XI, 2024, 14435, 469-481.
Abstract :
Recently, many compression algorithms have been applied to reduce the cost of video storage and transmission. However, compression introduces undesirable artifacts that severely degrade visual quality. Therefore, Video Compression Artifacts Removal (VCAR) aims at reconstructing a high-quality video from its compression-corrupted version. Generally, this task is treated as a vision-related rather than a media-related problem. In vision-related research, visual quality has been significantly improved, while computational complexity and bitrate issues are less considered. In this work, we review the performance constraints of video coding and transmission and use them to evaluate VCAR outputs. Based on these analyses, we propose a Spatial-Temporal Attention-Guided Enhancement Network (STAGE-Net). First, we employ dynamic filter processing, instead of conventional optical-flow methods, to reduce the computational cost of VCAR. Second, we introduce a self-attention mechanism to design Sequential Residual Attention Blocks (SRABs), which improve the visual quality of enhanced video frames under bitrate constraints. Both quantitative and qualitative experimental results demonstrate the superiority of the proposed method, which achieves high visual quality at low computational cost.
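The self-attention mechanism behind the SRABs mentioned in this abstract can be illustrated with a plain scaled dot-product attention step over a sequence of frame-level features. The single-head form, random projections, and feature dimensions are assumptions for brevity; the network's actual attention design is more elaborate.

```python
import numpy as np

def self_attention(x, wq, wk, wv):
    """Single-head scaled dot-product self-attention over a sequence of
    feature vectors x of shape (T, d) — the standard building block behind
    attention-guided enhancement. Each output row is a weighted mix of all
    time steps, so information flows across frames without optical flow."""
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(k.shape[1])        # scaled similarities
    scores -= scores.max(axis=1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True) # softmax: rows sum to 1
    return weights @ v, weights

rng = np.random.default_rng(0)
T, d = 6, 16   # e.g. 6 frame-level feature vectors of dimension 16
x = rng.normal(size=(T, d))
wq, wk, wv = (rng.normal(size=(d, d)) for _ in range(3))
out, attn = self_attention(x, wq, wk, wv)
```

The attention matrix `attn` is what lets the block pull detail from temporally distant, less-corrupted frames when enhancing the current one.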
Keyword :
Bit rate; Computational complexity; Image coding; Task analysis; Video coding; Video compression; video compression artifacts removal (VCAR); video enhancement; video quality; Visualization
Cite:
GB/T 7714: Jiang, Nanfeng, Chen, Weiling, Lin, Jielian, et al. Video Compression Artifacts Removal With Spatial-Temporal Attention-Guided Enhancement [J]. IEEE TRANSACTIONS ON MULTIMEDIA, 2024, 26: 5657-5669.
MLA: Jiang, Nanfeng, et al. "Video Compression Artifacts Removal With Spatial-Temporal Attention-Guided Enhancement." IEEE TRANSACTIONS ON MULTIMEDIA 26 (2024): 5657-5669.
APA: Jiang, Nanfeng, Chen, Weiling, Lin, Jielian, Zhao, Tiesong, Lin, Chia-Wen. Video Compression Artifacts Removal With Spatial-Temporal Attention-Guided Enhancement. IEEE TRANSACTIONS ON MULTIMEDIA, 2024, 26, 5657-5669.
Abstract :
The limited bandwidth of underwater acoustic channels poses a challenge to the efficiency of multimedia information transmission. To improve efficiency, the system aims to transmit less data while maintaining image utility at the receiving end. Although assessing the utility of compressed information is essential, current methods exhibit limitations in utility-driven quality assessment. Therefore, this letter builds a Utility-oriented compacted Image Quality Dataset (UCIQD) that contains the utility qualities of reference images and their corresponding compacted information at different levels. The utility score is derived from the average confidence of various object detection models. Then, based on UCIQD, we introduce a Distillation-based Compacted Information Quality assessment metric (DCIQ) for utility-oriented quality evaluation in the context of underwater machine vision. In DCIQ, utility features of compacted information are acquired through transfer learning and mapped using a Transformer. Besides, we propose a utility-oriented cross-model feature fusion mechanism to accommodate the preferences of different detection algorithms. After that, a utility-oriented feature quality measure assesses the utility of compacted features. Finally, we use distillation to compress the model, reducing its parameters by 55%. Experimental results demonstrate that the proposed DCIQ can effectively predict the utility-oriented quality of compressed underwater information.
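The distillation step in this abstract (a compact student model mimicking a larger utility-assessment model) typically rests on a temperature-softened KL objective. The sketch below is the generic Hinton-style knowledge-distillation loss, shown here only to make the mechanism concrete; it is not the letter's exact formulation, and the temperature value is an assumption.

```python
import numpy as np

def softmax(z):
    z = z - z.max()              # numerical stability
    e = np.exp(z)
    return e / e.sum()

def kd_loss(teacher_logits, student_logits, T=4.0):
    """Generic knowledge-distillation loss: KL divergence between
    temperature-softened teacher and student distributions. A higher T
    exposes the teacher's 'dark knowledge' in the non-argmax classes."""
    p = softmax(np.asarray(teacher_logits, float) / T)
    q = softmax(np.asarray(student_logits, float) / T)
    return float(np.sum(p * np.log((p + 1e-12) / (q + 1e-12))) * T * T)

teacher = [2.0, 0.5, -1.0]
matched = kd_loss(teacher, [2.0, 0.5, -1.0])   # identical logits: zero loss
off = kd_loss(teacher, [0.0, 0.0, 0.0])        # uninformed student: positive loss
```

Minimizing this loss during training is what allows the 55% parameter reduction reported above while preserving the teacher's utility predictions.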
Keyword :
Compacted underwater information; distillation; utility-oriented quality assessment
Cite:
GB/T 7714: Liao, Honggang, Jiang, Nanfeng, Chen, Weiling, et al. Distillation-Based Utility Assessment for Compacted Underwater Information [J]. IEEE SIGNAL PROCESSING LETTERS, 2024, 31: 481-485.
MLA: Liao, Honggang, et al. "Distillation-Based Utility Assessment for Compacted Underwater Information." IEEE SIGNAL PROCESSING LETTERS 31 (2024): 481-485.
APA: Liao, Honggang, Jiang, Nanfeng, Chen, Weiling, Wei, Hongan, Zhao, Tiesong. Distillation-Based Utility Assessment for Compacted Underwater Information. IEEE SIGNAL PROCESSING LETTERS, 2024, 31, 481-485.
Abstract :
Super-Resolution (SR) algorithms aim to enhance the resolution of images. Massive deep-learning-based SR techniques have emerged in recent years. In such cases, a visually appealing output may contain additional details compared with its reference image. Accordingly, fully referenced Image Quality Assessment (IQA) cannot work well; however, reference information remains essential for evaluating the quality of SR images. This poses a challenge to SR-IQA: how to balance the referenced and no-reference scores for user perception? In this paper, we propose a Perception-driven Similarity-Clarity Tradeoff (PSCT) model for SR-IQA. Specifically, we investigate this problem from both referenced and no-reference perspectives, and design two deep-learning-based modules to obtain referenced and no-reference scores. We present a theoretical analysis of their tradeoff based on Human Visual System (HVS) properties, and calculate adaptive weights for them. Experimental results indicate that our PSCT model is superior to state-of-the-art SR-IQA methods. In addition, the proposed PSCT model can also evaluate quality scores in other image enhancement scenarios, such as deraining, dehazing, and underwater image enhancement. The source code is available at https://github.com/kekezhang112/PSCT.
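The referenced/no-reference tradeoff described in this abstract ultimately reduces to an adaptively weighted sum of the two scores. The sketch below is only a placeholder illustration of that idea: the linear weighting function and the `extra_detail` cue are invented for this example, whereas the paper derives its adaptive weights from an HVS-based analysis.

```python
def psct_style_score(ref_score, nr_score, extra_detail):
    """Illustrative similarity-clarity tradeoff: the more plausible extra
    detail an SR output hallucinates (extra_detail in [0, 1]), the less a
    pixel-faithful referenced (similarity) score can be trusted, so weight
    shifts toward the no-reference (clarity) score. Linear weighting is a
    placeholder assumption, not the paper's HVS-derived rule."""
    w_ref = 1.0 - 0.5 * extra_detail   # referenced-score weight
    w_nr = 1.0 - w_ref                 # no-reference-score weight
    return w_ref * ref_score + w_nr * nr_score

# A hallucination-heavy SR output leans more on the no-reference score.
faithful = psct_style_score(ref_score=0.9, nr_score=0.6, extra_detail=0.0)
detailed = psct_style_score(ref_score=0.9, nr_score=0.6, extra_detail=1.0)
```

The point of the adaptive weights is visible even in this toy: the same pair of module scores yields different overall quality depending on how much non-reference detail the SR output introduced.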
Keyword :
Deep learning; Demulsification; Feature extraction; Image enhancement; Image quality; Job analysis; Optical resolving power; Quality control
Cite:
GB/T 7714: Zhang, Keke, Zhao, Tiesong, Chen, Weiling, et al. Perception-Driven Similarity-Clarity Tradeoff for Image Super-Resolution Quality Assessment [J]. IEEE Transactions on Circuits and Systems for Video Technology, 2024, 34(7): 5897-5907.
MLA: Zhang, Keke, et al. "Perception-Driven Similarity-Clarity Tradeoff for Image Super-Resolution Quality Assessment." IEEE Transactions on Circuits and Systems for Video Technology 34.7 (2024): 5897-5907.
APA: Zhang, Keke, Zhao, Tiesong, Chen, Weiling, Niu, Yuzhen, Hu, Jinsong, Lin, Weisi. Perception-Driven Similarity-Clarity Tradeoff for Image Super-Resolution Quality Assessment. IEEE Transactions on Circuits and Systems for Video Technology, 2024, 34(7), 5897-5907.
Abstract :
Due to the light-independent imaging characteristics, sonar images play a crucial role in fields such as underwater detection and rescue. However, the resolution of sonar images is negatively correlated with the imaging distance. To overcome this limitation, Super-Resolution (SR) techniques have been introduced into sonar image processing. Nevertheless, it is not always guaranteed that SR maintains the utility of the image. Therefore, quantifying the utility of SR reconstructed Sonar Images (SRSIs) can facilitate their optimization and usage. Existing Image Quality Assessment (IQA) methods are inadequate for evaluating SRSIs as they fail to consider both the unique characteristics of sonar images and reconstruction artifacts while meeting task requirements. In this paper, we propose a Perception-and-Cognition-inspired quality Assessment method for Sonar image Super-resolution (PCASS). Our approach incorporates a hierarchical feature fusion-based framework inspired by the cognitive process in the human brain to comprehensively evaluate SRSIs' quality under object recognition tasks. Additionally, we select features at each level considering visual perception characteristics introduced by SR reconstruction artifacts such as texture abundance, contour details, and semantic information to measure image quality accurately. Importantly, our method does not require training data and is suitable for scenarios with limited available images. Experimental results validate its superior performance.
Keyword :
hierarchical feature fusion; image quality assessment (IQA); Sonar image; super-resolution (SR); task-oriented
Cite:
GB/T 7714: Chen, Weiling, Cai, Boqin, Zheng, Sumei, et al. Perception-and-Cognition-Inspired Quality Assessment for Sonar Image Super-Resolution [J]. IEEE TRANSACTIONS ON MULTIMEDIA, 2024, 26: 6398-6410.
MLA: Chen, Weiling, et al. "Perception-and-Cognition-Inspired Quality Assessment for Sonar Image Super-Resolution." IEEE TRANSACTIONS ON MULTIMEDIA 26 (2024): 6398-6410.
APA: Chen, Weiling, Cai, Boqin, Zheng, Sumei, Zhao, Tiesong, Gu, Ke. Perception-and-Cognition-Inspired Quality Assessment for Sonar Image Super-Resolution. IEEE TRANSACTIONS ON MULTIMEDIA, 2024, 26, 6398-6410.
Abstract :
Underwater images often suffer from local distortions during the imaging and transmission process, which can negatively impact their quality. Fortunately, it is possible to improve image quality by removing local distortion without making any hardware or software adjustments to the transmission system. However, existing algorithms designed for global distortions are not suitable for addressing local distortions, while end-to-end restoration and inpainting algorithms do not perform satisfactorily on underwater images. To address this issue, this paper proposes a Joint distortion localization and restoration model based on Progressive Guidance (JPG) specifically tailored for underwater imaging and transmission. Our strategy employs a two-stage framework: the first stage focuses exclusively on accurately localizing distortions to obtain precise position information; in the second stage, we use this position information for effective distortion restoration. To further enhance restoration performance, our approach progressively guides the restoration process by incorporating global, distortion-free, and distortion-specific information into different components of the second-stage network. The proposed model surpasses current state-of-the-art methods in restoring both mixed and individual distortions.
Keyword :
Distortion localization; Progressive guidance; Underwater image restoration; Underwater local distortion
Cite:
GB/T 7714: Zhang, Jianghe, Chen, Weiling, Lin, Zuxin, et al. Underwater image restoration based on progressive guidance [J]. SIGNAL PROCESSING, 2024, 223.
MLA: Zhang, Jianghe, et al. "Underwater image restoration based on progressive guidance." SIGNAL PROCESSING 223 (2024).
APA: Zhang, Jianghe, Chen, Weiling, Lin, Zuxin, Wei, Hongan, Zhao, Tiesong. Underwater image restoration based on progressive guidance. SIGNAL PROCESSING, 2024, 223.