Publication Search

Query:

Scholar name: 陈炜玲 (Chen, Weiling)


Unified No-Reference Quality Assessment for Sonar Imaging and Processing SCIE
Journal Article | 2025, 63 | IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING

Abstract :

Sonar technology has been widely used in underwater surface mapping and remote object detection because it does not depend on light. Recently, the boom in artificial intelligence has further accelerated sonar image (SI) processing and understanding techniques. However, intricate marine environments and diverse nonlinear postprocessing operations may degrade the quality of SIs, impeding accurate interpretation of underwater information. Efficient image quality assessment (IQA) methods are therefore crucial for quality monitoring in sonar imaging and processing. Existing IQA methods either overlook the unique characteristics of SIs or focus solely on typical distortions in specific scenarios, which limits their generalization capability. In this article, we propose a unified sonar IQA method that overcomes the challenges posed by diverse distortions. Although degradation conditions vary, ideal SIs consistently require certain properties: they must be task-centered and exhibit attribute consistency. We derive a comprehensive set of quality attributes from both the task background and the visual content of SIs. These attribute features are represented in just ten dimensions and ultimately mapped to a quality score. To validate the effectiveness of our method, we construct the first comprehensive SI dataset. Experimental results demonstrate the superior performance and robustness of the proposed method.
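As an illustration of the final step described in the abstract, the sketch below maps a ten-dimensional attribute-feature vector to a scalar quality score. It is a minimal PyTorch example with assumed layer sizes, not the authors' released implementation.

```python
# Minimal sketch: regress ten sonar-image attribute features to one quality score.
# Layer widths are illustrative assumptions; only the 10-D input follows the abstract.
import torch
import torch.nn as nn

class AttributeToQuality(nn.Module):
    def __init__(self, in_dim: int = 10):
        super().__init__()
        self.regressor = nn.Sequential(
            nn.Linear(in_dim, 32),
            nn.ReLU(),
            nn.Linear(32, 1),
        )

    def forward(self, attributes: torch.Tensor) -> torch.Tensor:
        # attributes: (batch, 10) attribute features -> (batch,) quality scores
        return self.regressor(attributes).squeeze(-1)

scores = AttributeToQuality()(torch.rand(4, 10))  # example: score four sonar images
```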

Keyword :

Attribute consistency; Degradation; Distortion; Image quality; image quality assessment (IQA); Imaging; Noise; Nonlinear distortion; no-reference (NR); Quality assessment; Silicon; Sonar; sonar imaging and processing; Sonar measurements

Cite:


GB/T 7714 Cai, Boqin , Chen, Weiling , Zhang, Jianghe et al. Unified No-Reference Quality Assessment for Sonar Imaging and Processing [J]. | IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING , 2025 , 63 .
MLA Cai, Boqin et al. "Unified No-Reference Quality Assessment for Sonar Imaging and Processing" . | IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING 63 (2025) .
APA Cai, Boqin , Chen, Weiling , Zhang, Jianghe , Junejo, Naveed Ur Rehman , Zhao, Tiesong . Unified No-Reference Quality Assessment for Sonar Imaging and Processing . | IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING , 2025 , 63 .

Version :

Unified No-Reference Quality Assessment for Sonar Imaging and Processing EI
Journal Article | 2025, 63 | IEEE Transactions on Geoscience and Remote Sensing
Unified No-Reference Quality Assessment for Sonar Imaging and Processing Scopus
Journal Article | 2024, 63 | IEEE Transactions on Geoscience and Remote Sensing
Prototype Alignment With Dedicated Experts for Test-Agnostic Long-Tailed Recognition SCIE
Journal Article | 2025, 27, 455-465 | IEEE TRANSACTIONS ON MULTIMEDIA

Abstract :

Unlike vanilla long-tailed recognition, which trains on imbalanced data but assumes a uniform test class distribution, test-agnostic long-tailed recognition aims to handle arbitrary test class distributions. Existing methods require prior knowledge of the test set for post-adjustment through multi-stage training, resulting in static decisions at the dataset level. This pipeline overlooks instance diversity and is impractical in real situations. In this work, we introduce Prototype Alignment with Dedicated Experts (PADE), a one-stage framework for test-agnostic long-tailed recognition. PADE tackles unknown test distributions at the instance level, without depending on test priors. It reformulates the task as a domain detection problem, dynamically adjusting the model for each instance. PADE comprises three main strategies: 1) a parameter customization strategy for multiple experts skilled at different categories; 2) normalized target knowledge distillation for mutual guidance among experts while maintaining diversity; and 3) re-balanced compactness learning with momentum prototypes, promoting instance alignment with the corresponding class centroid. We evaluate PADE on various long-tailed recognition benchmarks with diverse test distributions. The results verify its effectiveness in both vanilla and test-agnostic long-tailed recognition.
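The third strategy, re-balanced compactness learning with momentum prototypes, can be pictured with the hedged sketch below: class centroids are maintained with an exponential moving average and each embedding is pulled toward its own centroid. The momentum value and the cosine-distance form are assumptions; this is not the released PADE code.

```python
# Hedged sketch of momentum prototypes with a compactness (alignment) loss.
import torch
import torch.nn.functional as F

class MomentumPrototypes:
    def __init__(self, num_classes: int, feat_dim: int, momentum: float = 0.9):
        self.protos = torch.zeros(num_classes, feat_dim)  # one centroid per class
        self.m = momentum

    @torch.no_grad()
    def update(self, feats: torch.Tensor, labels: torch.Tensor) -> None:
        # Exponential-moving-average update of each observed class centroid.
        for c in labels.unique():
            self.protos[c] = self.m * self.protos[c] + (1 - self.m) * feats[labels == c].mean(dim=0)

    def compactness_loss(self, feats: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        # Pull each (normalized) instance embedding toward its class prototype.
        p = F.normalize(self.protos[labels], dim=1)
        f = F.normalize(feats, dim=1)
        return (1 - (f * p).sum(dim=1)).mean()
```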

Keyword :

Long-tailed classification; prototypical learning; test-agnostic recognition

Cite:


GB/T 7714 Guo, Chen , Chen, Weiling , Huang, Aiping et al. Prototype Alignment With Dedicated Experts for Test-Agnostic Long-Tailed Recognition [J]. | IEEE TRANSACTIONS ON MULTIMEDIA , 2025 , 27 : 455-465 .
MLA Guo, Chen et al. "Prototype Alignment With Dedicated Experts for Test-Agnostic Long-Tailed Recognition" . | IEEE TRANSACTIONS ON MULTIMEDIA 27 (2025) : 455-465 .
APA Guo, Chen , Chen, Weiling , Huang, Aiping , Zhao, Tiesong . Prototype Alignment With Dedicated Experts for Test-Agnostic Long-Tailed Recognition . | IEEE TRANSACTIONS ON MULTIMEDIA , 2025 , 27 , 455-465 .

Version :

Prototype Alignment with Dedicated Experts for Test-Agnostic Long-Tailed Recognition Scopus
Journal Article | 2024, 27, 455-465 | IEEE Transactions on Multimedia
Prototype Alignment With Dedicated Experts for Test-Agnostic Long-Tailed Recognition EI
Journal Article | 2025, 27, 455-465 | IEEE Transactions on Multimedia
基于感知和记忆的视频动态质量评价 (Research of Video Dynamic Quality Evaluation Based on Human Perception and Memory) CSCD PKU
Journal Article | 2024 | 电子学报 (Acta Electronica Sinica)

Abstract :

Because network conditions vary, video playback is prone to stalling and bitrate fluctuations, which severely degrade the end user's quality of experience. Accurately evaluating video quality is therefore essential for optimizing network resource allocation and improving the viewing experience. Existing video quality assessment methods mainly target short videos and generally focus on human visual perception, with little consideration of how human memory stores and expresses visual information, or of the interaction between visual perception and memory. When users watch long videos, quality must be evaluated dynamically, taking memory factors into account alongside perceptual ones. To better evaluate the quality of long videos, this paper introduces a deep network model, investigates how perceptual and memory characteristics affect the viewing experience, and proposes a dynamic quality evaluation model for long videos based on both. First, we design subjective experiments to explore how visual perception and human memory affect quality of experience under different video playback modes, and construct a Video Quality Database with Perception and Memory (PAM-VQD). Second, based on PAM-VQD, we use deep learning combined with a visual attention mechanism to extract deep perceptual features of videos, so as to accurately evaluate the impact of perception on quality of experience. Finally, the perceptual quality score, playback state, and interval since the last stalling event output by the front-end network are fed as three features into a long short-term memory network to model the temporal dependency between visual perception and memory. Experimental results show that the proposed quality assessment model accurately predicts quality of experience under different video playback modes and generalizes well.

Keyword :

体验质量 (quality of experience); 注意力机制 (attention mechanism); 深度学习 (deep learning); 视觉感知特性 (visual perception characteristics); 记忆效应 (memory effect)

Cite:


GB/T 7714 林丽群 , 暨书逸 , 何嘉晨 et al. 基于感知和记忆的视频动态质量评价 [J]. | 电子学报 , 2024 .
MLA 林丽群 et al. "基于感知和记忆的视频动态质量评价" . | 电子学报 (2024) .
APA 林丽群 , 暨书逸 , 何嘉晨 , 赵铁松 , 陈炜玲 , 郭宗明 . 基于感知和记忆的视频动态质量评价 . | 电子学报 , 2024 .
FUVC: A Flexible Codec for Underwater Video Transmission SCIE
Journal Article | 2024, 62, 18-18 | IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING
WoS CC Cited Count: 5

Abstract :

Smart oceanic exploration has greatly benefitted from AI-driven underwater image and video processing. However, the volume of underwater video content is subject to narrow-band and time-varying underwater acoustic channels. How to support high-utility video transmission at such a limited capacity is still an open issue. In this article, we propose a Flexible Underwater Video Codec (FUVC) with separate designs for targets-of-interest regions and backgrounds. The encoder locates all targets of interest, compresses their corresponding regions with x.265, and, if bandwidth allows, compresses the background with a lower bitrate. The decoder reconstructs both streams, identifies clean targets of interest, and fuses them with the background via a mask detection and background recovery (MDBR) network. When the background stream is unavailable, the decoder adapts all targets of interest to a virtual background via Poisson blending. Experimental results show that FUVC outperforms other codecs with a lower bitrate at the same quality. It also supports a flexible codec for underwater acoustic channels. The database and the source code are available at https://github.com/z21110008/FUVC.
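The fallback path in the last step, adapting targets of interest to a virtual background via Poisson blending, can be approximated with OpenCV's seamless cloning, which implements Poisson image editing. The file names, mask, and paste location below are placeholders for illustration, not part of the released FUVC code.

```python
# Hedged sketch: paste a decoded target region onto a virtual background with Poisson blending.
import cv2
import numpy as np

target = cv2.imread("decoded_target_region.png")    # decoded target-of-interest crop (BGR)
virtual_bg = cv2.imread("virtual_background.png")    # synthetic background frame (BGR)

# Mask of target pixels within the crop; kept off the border for seamlessClone.
mask = np.zeros(target.shape[:2], dtype=np.uint8)
mask[1:-1, 1:-1] = 255

center = (virtual_bg.shape[1] // 2, virtual_bg.shape[0] // 2)  # paste location in the background
fused = cv2.seamlessClone(target, virtual_bg, mask, center, cv2.NORMAL_CLONE)
cv2.imwrite("fused_frame.png", fused)
```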

Keyword :

Ocean exploration; smart oceans; underwater image processing; video coding; video compression

Cite:


GB/T 7714 Zheng, Yannan , Luo, Jiawei , Chen, Weiling et al. FUVC: A Flexible Codec for Underwater Video Transmission [J]. | IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING , 2024 , 62 : 18-18 .
MLA Zheng, Yannan et al. "FUVC: A Flexible Codec for Underwater Video Transmission" . | IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING 62 (2024) : 18-18 .
APA Zheng, Yannan , Luo, Jiawei , Chen, Weiling , Li, Zuoyong , Zhao, Tiesong . FUVC: A Flexible Codec for Underwater Video Transmission . | IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING , 2024 , 62 , 18-18 .

Version :

FUVC: A Flexible Codec for Underwater Video Transmission Scopus
Journal Article | 2024, 62, 1-1 | IEEE Transactions on Geoscience and Remote Sensing
FUVC: A Flexible Codec for Underwater Video Transmission EI
Journal Article | 2024, 62, 1-11 | IEEE Transactions on Geoscience and Remote Sensing
Video Compression Artifacts Removal With Spatial-Temporal Attention-Guided Enhancement SCIE
Journal Article | 2024, 26, 5657-5669 | IEEE TRANSACTIONS ON MULTIMEDIA

Abstract :

Recently, many compression algorithms have been applied to decrease the cost of video storage and transmission. These algorithms introduce undesirable artifacts that severely degrade visual quality. Video Compression Artifacts Removal (VCAR) therefore aims to reconstruct a high-quality video from its compression-corrupted version. Generally, this task is treated as a vision-related rather than a media-related problem. In vision-related research, visual quality has been significantly improved, while computational complexity and bitrate issues receive less consideration. In this work, we review the performance constraints of video coding and transfer them to the evaluation of VCAR outputs. Based on these analyses, we propose a Spatial-Temporal Attention-Guided Enhancement Network (STAGE-Net). First, we employ dynamic filter processing, instead of the conventional optical flow method, to reduce the computational cost of VCAR. Second, we introduce a self-attention mechanism to design Sequential Residual Attention Blocks (SRABs), which improve the visual quality of enhanced video frames under bitrate constraints. Both quantitative and qualitative experimental results demonstrate the superiority of the proposed method, which achieves high visual quality at low computational cost.
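For intuition, the sketch below shows one possible residual attention block: two convolutions followed by channel attention and a residual connection. The squeeze-and-excitation-style attention and all layer sizes are stand-in assumptions rather than the paper's exact SRAB design.

```python
# Hedged sketch of a residual attention block (stand-in for the paper's SRAB).
import torch
import torch.nn as nn

class SRAB(nn.Module):
    def __init__(self, channels: int = 64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        # Squeeze-and-excitation-style channel attention as a simple stand-in.
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // 4, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // 4, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feat = self.body(x)
        return x + feat * self.attn(feat)  # residual connection around attended features
```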

Keyword :

Bit rate; Computational complexity; Image coding; Task analysis; Video coding; Video compression; video compression artifacts removal (VCAR); video enhancement; video quality; Visualization

Cite:


GB/T 7714 Jiang, Nanfeng , Chen, Weiling , Lin, Jielian et al. Video Compression Artifacts Removal With Spatial-Temporal Attention-Guided Enhancement [J]. | IEEE TRANSACTIONS ON MULTIMEDIA , 2024 , 26 : 5657-5669 .
MLA Jiang, Nanfeng et al. "Video Compression Artifacts Removal With Spatial-Temporal Attention-Guided Enhancement" . | IEEE TRANSACTIONS ON MULTIMEDIA 26 (2024) : 5657-5669 .
APA Jiang, Nanfeng , Chen, Weiling , Lin, Jielian , Zhao, Tiesong , Lin, Chia-Wen . Video Compression Artifacts Removal With Spatial-Temporal Attention-Guided Enhancement . | IEEE TRANSACTIONS ON MULTIMEDIA , 2024 , 26 , 5657-5669 .

Version :

Video Compression Artifacts Removal With Spatial-Temporal Attention-Guided Enhancement EI
Journal Article | 2024, 26, 5657-5669 | IEEE Transactions on Multimedia
Video Compression Artifacts Removal With Spatial-Temporal Attention-Guided Enhancement Scopus
Journal Article | 2024, 26, 5657-5669 | IEEE Transactions on Multimedia
Distillation-Based Utility Assessment for Compacted Underwater Information SCIE
Journal Article | 2024, 31, 481-485 | IEEE SIGNAL PROCESSING LETTERS

Abstract :

The limited bandwidth of underwater acoustic channels poses a challenge to the efficiency of multimedia information transmission. To improve efficiency, the system aims to transmit less data while maintaining image utility at the receiving end. Although assessing the utility of compressed information is essential, current methods exhibit limitations in addressing utility-driven quality assessment. Therefore, this letter builds a Utility-oriented compacted Image Quality Dataset (UCIQD) that contains utility qualities of reference images and their corresponding compacted information at different levels. The utility score is derived from the average confidence of various object detection models. Based on UCIQD, we then introduce a Distillation-based Compacted Information Quality assessment metric (DCIQ) for utility-oriented quality evaluation in the context of underwater machine vision. In DCIQ, utility features of compacted information are acquired through transfer learning and mapped using a Transformer. Besides, we propose a utility-oriented cross-model feature fusion mechanism to address the preferences of different detection algorithms. After that, a utility-oriented feature quality measure assesses the utility of compacted features. Finally, we use distillation to compress the model, reducing its parameters by 55%. Experimental results demonstrate that the proposed DCIQ can predict the utility-oriented quality of compressed underwater information.
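The final compression step relies on knowledge distillation; a generic regression-style distillation loss is sketched below, where a compact student mimics the larger teacher's utility scores while also fitting the ground-truth labels. The loss weighting is an illustrative assumption, not DCIQ's exact objective.

```python
# Hedged sketch of a regression-style distillation loss for a compact quality model.
import torch
import torch.nn.functional as F

def distillation_loss(student_score: torch.Tensor,
                      teacher_score: torch.Tensor,
                      ground_truth: torch.Tensor,
                      alpha: float = 0.5) -> torch.Tensor:
    soft = F.mse_loss(student_score, teacher_score.detach())  # mimic the teacher's predictions
    hard = F.mse_loss(student_score, ground_truth)            # fit the ground-truth utility labels
    return alpha * soft + (1 - alpha) * hard
```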

Keyword :

Compacted underwater information; distillation; utility-oriented quality assessment

Cite:


GB/T 7714 Liao, Honggang , Jiang, Nanfeng , Chen, Weiling et al. Distillation-Based Utility Assessment for Compacted Underwater Information [J]. | IEEE SIGNAL PROCESSING LETTERS , 2024 , 31 : 481-485 .
MLA Liao, Honggang et al. "Distillation-Based Utility Assessment for Compacted Underwater Information" . | IEEE SIGNAL PROCESSING LETTERS 31 (2024) : 481-485 .
APA Liao, Honggang , Jiang, Nanfeng , Chen, Weiling , Wei, Hongan , Zhao, Tiesong . Distillation-Based Utility Assessment for Compacted Underwater Information . | IEEE SIGNAL PROCESSING LETTERS , 2024 , 31 , 481-485 .

Version :

Distillation-Based Utility Assessment for Compacted Underwater Information Scopus
Journal Article | 2024, 31, 481-485 | IEEE Signal Processing Letters
Distillation-Based Utility Assessment for Compacted Underwater Information EI
Journal Article | 2024, 31, 481-485 | IEEE Signal Processing Letters
Underwater image quality optimization: Researches, challenges, and future trends SCIE
Journal Article | 2024, 146 | IMAGE AND VISION COMPUTING

Abstract :

Underwater images serve as crucial media for conveying marine information. Nevertheless, due to the inherent complexity of the underwater environment, underwater images often suffer from various quality degradation phenomena such as color deviation, low contrast, and non-uniform illumination. These degraded underwater images fail to meet the requirements of underwater computer vision applications. Consequently, effective quality optimization of underwater images is of paramount research and analytical value. Based on whether they rely on underwater physical imaging models, underwater image quality optimization techniques can be categorized into underwater image enhancement and underwater image restoration methods. This paper provides a comprehensive review of underwater image enhancement and restoration algorithms, accompanied by a brief introduction to the underwater imaging model. Then, we systematically analyze publicly available underwater image datasets and commonly used quality assessment methodologies. Furthermore, extensive experimental comparisons are carried out to assess the performance of underwater image optimization algorithms and their practical impact on high-level vision tasks. Finally, the challenges and future development trends in this field are discussed. We hope that the efforts made in this paper will provide valuable references for future research and contribute to the innovative advancement of underwater image optimization.

Keyword :

Image quality assessment; Underwater image datasets; Underwater image enhancement; Underwater image restoration

Cite:


GB/T 7714 Wang, Mingjie , Zhang, Keke , Wei, Hongan et al. Underwater image quality optimization: Researches, challenges, and future trends [J]. | IMAGE AND VISION COMPUTING , 2024 , 146 .
MLA Wang, Mingjie et al. "Underwater image quality optimization: Researches, challenges, and future trends" . | IMAGE AND VISION COMPUTING 146 (2024) .
APA Wang, Mingjie , Zhang, Keke , Wei, Hongan , Chen, Weiling , Zhao, Tiesong . Underwater image quality optimization: Researches, challenges, and future trends . | IMAGE AND VISION COMPUTING , 2024 , 146 .

Version :

Underwater image quality optimization: Researches, challenges, and future trends EI
Journal Article | 2024, 146 | Image and Vision Computing
Underwater image quality optimization: Researches, challenges, and future trends Scopus
Journal Article | 2024, 146 | Image and Vision Computing
Research of Video Dynamic Quality Evaluation Based on Human Perception and Memory EI
Journal Article | 2024, 52 (11), 3727-3740 | Acta Electronica Sinica

Abstract :

Due to the variability of the network environment, video playback is prone to stalling and bitrate fluctuations, which seriously degrade the end user's quality of experience. To optimize network resource allocation and enhance the viewing experience, it is crucial to evaluate video quality accurately. Existing video quality evaluation methods mainly focus on the visual perception characteristics of short videos, with less consideration of the ability of human memory to store and express visual information, or of the interaction between visual perception and memory. In contrast, when users watch long videos, quality must be evaluated dynamically, considering both perceptual and memory elements. To better measure the quality of long videos, we introduce a deep network model to explore the impact of video perception and memory characteristics on the viewing experience, and propose a dynamic quality evaluation model for long videos based on these two characteristics. First, we design subjective experiments to investigate the influence of visual perception and human memory on quality of experience under different video playback modes, and construct a video quality database with perception and memory (PAM-VQD). Second, based on the PAM-VQD database, a deep learning methodology combined with a visual attention mechanism is used to extract deep perceptual features of videos, in order to accurately evaluate the impact of perception on quality of experience. Finally, the perceptual quality score, playback status, and stalling interval output by the front-end network are fed into a long short-term memory network to establish the temporal dependency between visual perception and memory. The experimental results show that the proposed quality assessment model accurately predicts quality of experience under different video playback modes, with good generalization performance. © 2024 Chinese Institute of Electronics. All rights reserved.
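The final stage described above, feeding the perceptual quality score, playback status, and stalling interval into an LSTM, could look like the hedged sketch below; the hidden size and the use of the last time step are assumptions, not the paper's released configuration.

```python
# Hedged sketch: a 3-feature-per-segment LSTM that outputs a session-level QoE score.
import torch
import torch.nn as nn

class MemoryQoE(nn.Module):
    def __init__(self, hidden: int = 32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=3, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, seq: torch.Tensor) -> torch.Tensor:
        # seq: (batch, time, 3) = [perceptual score, playback status, stalling interval]
        out, _ = self.lstm(seq)
        return self.head(out[:, -1]).squeeze(-1)  # QoE predicted from the last time step

qoe = MemoryQoE()(torch.rand(2, 20, 3))  # example: two sessions of 20 segments each
```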

Keyword :

Long short-term memory; Memory architecture; Resource allocation; Video analysis; Video recording

Cite:


GB/T 7714 Lin, Li-Qun , Ji, Shu-Yi , He, Jia-Chen et al. Research of Video Dynamic Quality Evaluation Based on Human Perception and Memory [J]. | Acta Electronica Sinica , 2024 , 52 (11) : 3727-3740 .
MLA Lin, Li-Qun et al. "Research of Video Dynamic Quality Evaluation Based on Human Perception and Memory" . | Acta Electronica Sinica 52 . 11 (2024) : 3727-3740 .
APA Lin, Li-Qun , Ji, Shu-Yi , He, Jia-Chen , Zhao, Tie-Song , Chen, Wei-Ling , Guo, Chong-Ming . Research of Video Dynamic Quality Evaluation Based on Human Perception and Memory . | Acta Electronica Sinica , 2024 , 52 (11) , 3727-3740 .

Version :

Research of Video Dynamic Quality Evaluation Based on Human Perception and Memory; [基于感知和记忆的视频动态质量评价] Scopus
Journal Article | 2024, 52 (11), 3727-3740 | Acta Electronica Sinica
Perception-Driven Similarity-Clarity Tradeoff for Image Super-Resolution Quality Assessment SCIE
Journal Article | 2024, 34 (7), 5897-5907 | IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY

Abstract :

Super-Resolution (SR) algorithms aim to enhance the resolution of images. Massive deep-learning-based SR techniques have emerged in recent years. In such cases, a visually appealing output may contain additional details compared with its reference image. Accordingly, fully referenced Image Quality Assessment (IQA) cannot work well; however, reference information remains essential for evaluating the quality of SR images. This poses a challenge to SR-IQA: how to balance the referenced and no-reference scores for user perception? In this paper, we propose a Perception-driven Similarity-Clarity Tradeoff (PSCT) model for SR-IQA. Specifically, we investigate this problem from both referenced and no-reference perspectives, and design two deep-learning-based modules to obtain referenced and no-reference scores. We present a theoretical analysis of their tradeoff based on Human Visual System (HVS) properties and also calculate adaptive weights for them. Experimental results indicate that our PSCT model is superior to the state of the art on SR-IQA. In addition, the proposed PSCT model is also capable of evaluating quality scores in other image enhancement scenarios, such as deraining, dehazing, and underwater image enhancement. The source code is available at https://github.com/kekezhang112/PSCT.
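The similarity-clarity tradeoff can be illustrated by the hedged sketch below, where a referenced (similarity) score and a no-reference (clarity) score are fused with an adaptively predicted weight. Deriving the weight from the two scores with a tiny network is an illustrative assumption, not PSCT's actual weighting scheme.

```python
# Hedged sketch: adaptively weighted fusion of a referenced and a no-reference score.
import torch
import torch.nn as nn

class SimilarityClarityFusion(nn.Module):
    def __init__(self):
        super().__init__()
        # Maps the (similarity, clarity) score pair to a fusion weight in [0, 1].
        self.weight_net = nn.Sequential(nn.Linear(2, 8), nn.ReLU(), nn.Linear(8, 1), nn.Sigmoid())

    def forward(self, similarity: torch.Tensor, clarity: torch.Tensor) -> torch.Tensor:
        pair = torch.stack([similarity, clarity], dim=1)  # (batch, 2)
        w = self.weight_net(pair).squeeze(-1)             # adaptive per-image weight
        return w * similarity + (1 - w) * clarity         # fused SR quality score
```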

Keyword :

Adaptation models; Distortion; Feature extraction; Image quality assessment; image super-resolution; Measurement; perception-driven; Quality assessment; similarity-clarity tradeoff; Superresolution; Task analysis

Cite:


GB/T 7714 Zhang, Keke , Zhao, Tiesong , Chen, Weiling et al. Perception-Driven Similarity-Clarity Tradeoff for Image Super-Resolution Quality Assessment [J]. | IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY , 2024 , 34 (7) : 5897-5907 .
MLA Zhang, Keke et al. "Perception-Driven Similarity-Clarity Tradeoff for Image Super-Resolution Quality Assessment" . | IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY 34 . 7 (2024) : 5897-5907 .
APA Zhang, Keke , Zhao, Tiesong , Chen, Weiling , Niu, Yuzhen , Hu, Jinsong , Lin, Weisi . Perception-Driven Similarity-Clarity Tradeoff for Image Super-Resolution Quality Assessment . | IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY , 2024 , 34 (7) , 5897-5907 .

Version :

Perception-Driven Similarity-Clarity Tradeoff for Image Super-Resolution Quality Assessment EI
Journal Article | 2024, 34 (7), 5897-5907 | IEEE Transactions on Circuits and Systems for Video Technology
Perception-Driven Similarity-Clarity Tradeoff for Image Super-Resolution Quality Assessment Scopus
Journal Article | 2024, 34 (7), 5897-5907 | IEEE Transactions on Circuits and Systems for Video Technology
"5G+人工智能"时代的教学新挑战
Journal Article | 2024, (40), 42-46 | 教育教学论坛

Abstract :

在"中国制造2025"的国家需求及福建省海西地方经济和产业升级需求的背景下,传统的信号与信息处理专业的培养方式对未来所需的人才品质存在不适应性.通过分析信号与信息处理专业教学体系现状,以福州大学为例,研究人工智能时代的信号专业教育教学改革机制,分别从学位点建设、课程建设、培养方案、培养目标、课程体系等方面探讨了教学改革机制,从而为高等院校培养信号与信息处理方向的综合型创新人才提供参考.

Keyword :

5G; 人工智能 (artificial intelligence); 信号与信息处理专业 (Signal and Information Processing program); 教学改革 (teaching reform); 课程思政 (curriculum-based ideological and political education)

Cite:


GB/T 7714 陈炜玲 , 林丽群 , 赵铁松 . "5G+人工智能"时代的教学新挑战 [J]. | 教育教学论坛 , 2024 , (40) : 42-46 .
MLA 陈炜玲 et al. ""5G+人工智能"时代的教学新挑战" . | 教育教学论坛 40 (2024) : 42-46 .
APA 陈炜玲 , 林丽群 , 赵铁松 . "5G+人工智能"时代的教学新挑战 . | 教育教学论坛 , 2024 , (40) , 42-46 .