Publication Search

Query:

Scholar Name: 兰诚栋

Saliency-Aware Spatio-Temporal Artifact Detection for Compressed Video Quality Assessment SCIE
Journal Article | 2023, 30, 693-697 | IEEE SIGNAL PROCESSING LETTERS
WoS CC Cited Count: 3

Abstract :

Compressed videos often exhibit visually annoying artifacts, known as Perceivable Encoding Artifacts (PEAs), which dramatically degrade video visual quality. Subjective and objective measures capable of identifying and quantifying various types of PEAs are critical in improving visual quality. In this letter, we investigate the influence of four spatial PEAs (i.e., blurring, blocking, bleeding, and ringing) and two temporal PEAs (i.e., flickering and floating) on video quality. For spatial artifacts, we propose a visual saliency model with low computational cost and high consistency with human visual perception. For temporal artifacts, the self-attention-based TimeSformer is improved to detect them. Based on the six types of PEAs, a quality metric called Saliency-Aware Spatio-Temporal Artifacts Measurement (SSTAM) is proposed. Experimental results demonstrate that the proposed method outperforms state-of-the-art metrics. We believe that SSTAM will be beneficial for optimizing video coding techniques.
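
The metric itself is not published as code here, but as a rough illustration of the idea of saliency-weighted pooling of per-artifact detections, the following minimal sketch combines hypothetical spatial artifact maps and temporal artifact scores into one quality value; the equal weighting and the `sstam_score` name are assumptions, not the paper's implementation.

```python
import numpy as np

def sstam_score(spatial_peas, temporal_peas, saliency, weights=None):
    """Toy aggregation of PEA detections into a single quality score (illustrative).

    spatial_peas : (4, H, W) maps for blurring, blocking, bleeding, ringing
    temporal_peas: (2,) frame-level scores for flickering and floating
    saliency     : (H, W) visual saliency map with non-negative values
    """
    if weights is None:
        weights = np.ones(6) / 6.0                    # hypothetical equal weighting
    sal = saliency / (saliency.sum() + 1e-8)          # normalize to a probability map
    spatial = np.array([(m * sal).sum() for m in spatial_peas])   # saliency-weighted pooling
    scores = np.concatenate([spatial, np.asarray(temporal_peas)])
    distortion = float(weights @ scores)              # higher = more visible artifacts
    return 1.0 - np.clip(distortion, 0.0, 1.0)        # map to a quality score in [0, 1]

h, w = 4, 4
print(sstam_score(np.random.rand(4, h, w), np.random.rand(2), np.random.rand(h, w)))
```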

Keyword :

compression artifact; Perceivable Encoding Artifacts (PEAs); saliency detection; video quality assessment

Cite:


GB/T 7714 Lin, Liqun , Zheng, Yang , Chen, Weiling et al. Saliency-Aware Spatio-Temporal Artifact Detection for Compressed Video Quality Assessment [J]. | IEEE SIGNAL PROCESSING LETTERS , 2023 , 30 : 693-697 .
MLA Lin, Liqun et al. "Saliency-Aware Spatio-Temporal Artifact Detection for Compressed Video Quality Assessment" . | IEEE SIGNAL PROCESSING LETTERS 30 (2023) : 693-697 .
APA Lin, Liqun , Zheng, Yang , Chen, Weiling , Lan, Chengdong , Zhao, Tiesong . Saliency-Aware Spatio-Temporal Artifact Detection for Compressed Video Quality Assessment . | IEEE SIGNAL PROCESSING LETTERS , 2023 , 30 , 693-697 .

A self-attention model for viewport prediction based on distance constraint SCIE
Journal Article | 2023 | VISUAL COMPUTER

Abstract :

Panoramic video multimedia technology has made significant advancements in recent years, providing users with an immersive experience by displaying the entire 360-degree spherical scene centered around their virtual location. However, due to its larger data volume compared to traditional video formats, transmitting high-quality panoramic video requires more bandwidth. It is important to note that users do not see the whole 360-degree content simultaneously, but only the portion within their viewport. To save bandwidth, viewport-based adaptive streaming, which transmits only the viewports of interest to the user in high quality, has become a significant technology, so the accuracy of viewport prediction plays a crucial role. However, prediction performance is affected by the size of the prediction window and decreases significantly as the window size increases. To address this issue, we propose an effective self-attention viewport prediction model based on a distance constraint. First, by analyzing existing viewport trajectory datasets, we observe both randomness and continuity in viewport trajectories. Second, to handle the randomness, we design a viewport prediction model based on a self-attention mechanism that provides more trajectory information for long inputs. Third, to preserve the continuity of the predicted viewport trajectory, the loss function is modified with a distance constraint that limits abrupt changes in the prediction results. Finally, experimental results on real viewport trajectory datasets show that the proposed algorithm achieves higher prediction accuracy and stability than state-of-the-art models.
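
As an illustration of how a distance constraint can be folded into the training loss to keep predicted viewport trajectories continuous, here is a minimal PyTorch sketch; the class name `DistanceConstrainedLoss` and the weighting factor `lam` are hypothetical and not taken from the paper.

```python
import torch
import torch.nn as nn

class DistanceConstrainedLoss(nn.Module):
    """Illustrative loss: prediction error plus a distance (continuity) penalty.

    The extra term penalizes large jumps between consecutive predicted viewport
    positions, encouraging a continuous trajectory. `lam` is a hypothetical
    weighting factor, not a value from the paper.
    """
    def __init__(self, lam=0.1):
        super().__init__()
        self.lam = lam
        self.mse = nn.MSELoss()

    def forward(self, pred, target):
        # pred, target: (batch, T, 2) viewport coordinates over T future steps
        accuracy_term = self.mse(pred, target)
        step = pred[:, 1:, :] - pred[:, :-1, :]        # displacement between consecutive steps
        continuity_term = step.norm(dim=-1).mean()     # average jump length
        return accuracy_term + self.lam * continuity_term

loss = DistanceConstrainedLoss()(torch.randn(8, 10, 2), torch.randn(8, 10, 2))
print(loss.item())
```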

Keyword :

Distance constraints; Panoramic video; Self-attention; Viewport prediction

Cite:


GB/T 7714 Lan, ChengDong , Qiu, Xu , Miao, Chenqi et al. A self-attention model for viewport prediction based on distance constraint [J]. | VISUAL COMPUTER , 2023 .
MLA Lan, ChengDong et al. "A self-attention model for viewport prediction based on distance constraint" . | VISUAL COMPUTER (2023) .
APA Lan, ChengDong , Qiu, Xu , Miao, Chenqi , Zheng, MengTing . A self-attention model for viewport prediction based on distance constraint . | VISUAL COMPUTER , 2023 .

CCA-FPN: Channel and content adaptive object detection SCIE
Journal Article | 2023, 95 | JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION

Abstract :

The feature pyramid network (FPN) is a typical architecture for detecting objects at different scales. However, the lateral connections in FPN lose feature information because they reduce the number of feature channels. Moreover, top-down feature fusion weakens the feature representation during feature delivery because the fused features carry different semantic information. In this paper, we propose a feature pyramid network with channel and content adaptive feature enhancement (CCA-FPN), which uses a channel adaptive guided mechanism module (CAGM) and a multi-scale content adaptive feature enhancement module (MCAFEM) to alleviate these problems. We conduct comprehensive experiments on the MS COCO dataset. By replacing FPN with CCA-FPN in ATSS, our model achieves 1.3 percentage points higher Average Precision (AP) when using ResNet50 as the backbone. Furthermore, our CCA-FPN achieves 0.3 percentage points higher AP than AugFPN, a state-of-the-art FPN-based detector.
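
The exact CAGM design is not reproduced here, but the following sketch shows the general idea of a channel-adaptive lateral connection: a squeeze-and-excitation style gate re-weights channels before the 1x1 reduction that normally causes information loss. The module name and dimensions are illustrative assumptions, not the paper's module.

```python
import torch
import torch.nn as nn

class ChannelAdaptiveLateral(nn.Module):
    """Sketch of a channel-attention lateral connection, in the spirit of CAGM.

    A plain FPN lateral 1x1 conv reduces channels and can lose information; a
    squeeze-and-excitation style gate re-weights input channels before that
    reduction. Illustrative only, not the paper's exact module.
    """
    def __init__(self, in_channels, out_channels, reduction=16):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                             # squeeze: global context
            nn.Conv2d(in_channels, in_channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_channels // reduction, in_channels, 1),
            nn.Sigmoid(),                                        # per-channel weights
        )
        self.lateral = nn.Conv2d(in_channels, out_channels, kernel_size=1)

    def forward(self, x):
        return self.lateral(x * self.gate(x))                    # gate, then reduce channels

feat = torch.randn(1, 1024, 32, 32)                   # e.g. a ResNet50 C4 feature map
print(ChannelAdaptiveLateral(1024, 256)(feat).shape)  # torch.Size([1, 256, 32, 32])
```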

Keyword :

Channel and content adaptive; Feature enhancement module; Feature pyramid network; Object detection

Cite:


GB/T 7714 Ye, Zhiyang , Lan, Chengdong , Zou, Min et al. CCA-FPN: Channel and content adaptive object detection [J]. | JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION , 2023 , 95 .
MLA Ye, Zhiyang et al. "CCA-FPN: Channel and content adaptive object detection" . | JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION 95 (2023) .
APA Ye, Zhiyang , Lan, Chengdong , Zou, Min , Qiu, Xu , Chen, Jian . CCA-FPN: Channel and content adaptive object detection . | JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION , 2023 , 95 .

A YOLOv5s Algorithm Model for Underwater Object Detection (应用于水下目标检测的YOLOv5s算法模型)
Journal Article | 2023, 47 (02), 39-43 | 电视技术

Abstract :

In research on object detection in underwater images, the small scale and blurriness of underwater targets pose considerable challenges to detection accuracy. To address the low accuracy of general-purpose object detection models in underwater environments, an improved YOLOv5s detection model is proposed. Data augmentation with several kinds of filtering is added to the YOLOv5s detection model to expand the number of underwater data samples and improve the generalization of the data. At the same time, the classification and regression loss functions are modified accordingly to better classify and localize underwater targets. Experiments verify that the improved method is suitable for underwater object detection: with detection speed unchanged, the improved YOLOv5s detection algorithm raises average precision by 2.1%.
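
The paper's exact augmentation recipe is not given above, but a filter-based augmentation step of the kind described could look like the following Pillow sketch; the specific filters and parameter ranges are assumptions for illustration only.

```python
import random
from PIL import Image, ImageFilter, ImageEnhance

def augment_underwater(img: Image.Image) -> Image.Image:
    """Randomly applies one of several filters to an underwater training image.

    Expanding the data with filtered variants is meant to improve generalization
    to blurry, low-contrast underwater scenes. Filters and parameter ranges are
    illustrative, not the paper's recipe.
    """
    choice = random.choice(["blur", "sharpen", "median", "contrast", "none"])
    if choice == "blur":
        return img.filter(ImageFilter.GaussianBlur(radius=random.uniform(0.5, 2.0)))
    if choice == "sharpen":
        return img.filter(ImageFilter.UnsharpMask(radius=2, percent=120))
    if choice == "median":
        return img.filter(ImageFilter.MedianFilter(size=3))
    if choice == "contrast":
        return ImageEnhance.Contrast(img).enhance(random.uniform(0.8, 1.4))
    return img

print(augment_underwater(Image.new("RGB", (64, 64))).size)       # (64, 64)
```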

Keyword :

YOLOv5s; loss function; data augmentation; underwater object detection

Cite:


GB/T 7714 叶志杨 , 梁昊霖 , 兰诚栋 . 应用于水下目标检测的YOLOv5s算法模型 [J]. | 电视技术 , 2023 , 47 (02) : 39-43 .
MLA 叶志杨 et al. "应用于水下目标检测的YOLOv5s算法模型" . | 电视技术 47 . 02 (2023) : 39-43 .
APA 叶志杨 , 梁昊霖 , 兰诚栋 . 应用于水下目标检测的YOLOv5s算法模型 . | 电视技术 , 2023 , 47 (02) , 39-43 .

Compressed Video Quality Enhancement Based on Optical-Flow-Assisted Deformable Convolution (基于光流辅助可变形卷积的压缩视频质量增强)
Journal Article | 2023, 47 (02), 24-27 | 电视技术

Abstract :

Video compression introduces compression artifacts that degrade video quality, but quality can be improved at the decoder through multi-frame fusion, which lets the frame to be enhanced learn high-quality information from the other frames. On this basis, optical-flow-assisted deformable convolution is introduced so that multi-frame fusion achieves better alignment, and the aligned results from both past and future frames are fully exploited to obtain a further improvement in enhancement quality.
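
As a sketch of how an optical-flow estimate can assist deformable-convolution alignment (the flow provides coarse offsets, a small head predicts residuals), the following illustrative module uses torchvision's `DeformConv2d`; the layer sizes and offset layout are assumptions, not the paper's network.

```python
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class FlowGuidedAlign(nn.Module):
    """Illustrative flow-assisted deformable alignment of neighboring-frame features.

    An optical-flow estimate gives coarse per-pixel motion; a small conv head
    predicts residual offsets on top of it, and a deformable convolution then
    samples the neighboring frame's features toward the target frame.
    """
    def __init__(self, channels=64, kernel_size=3):
        super().__init__()
        n_offsets = 2 * kernel_size * kernel_size            # (dy, dx) per kernel tap
        self.offset_head = nn.Conv2d(channels * 2 + 2, n_offsets, 3, padding=1)
        self.deform = DeformConv2d(channels, channels, kernel_size, padding=1)

    def forward(self, neighbor_feat, target_feat, flow):
        # flow: (N, 2, H, W) estimated motion from the neighboring to the target frame
        residual = self.offset_head(torch.cat([neighbor_feat, target_feat, flow], dim=1))
        offsets = flow.repeat(1, residual.shape[1] // 2, 1, 1) + residual
        return self.deform(neighbor_feat, offsets)

feats, flow = torch.randn(1, 64, 32, 32), torch.randn(1, 2, 32, 32)
print(FlowGuidedAlign()(feats, feats, flow).shape)            # torch.Size([1, 64, 32, 32])
```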

Keyword :

optical flow; compressed video enhancement; deformable convolution

Cite:


GB/T 7714 梁昊霖 , 叶志杨 , 兰诚栋 . 基于光流辅助可变形卷积的压缩视频质量增强 [J]. | 电视技术 , 2023 , 47 (02) : 24-27 .
MLA 梁昊霖 et al. "基于光流辅助可变形卷积的压缩视频质量增强" . | 电视技术 47 . 02 (2023) : 24-27 .
APA 梁昊霖 , 叶志杨 , 兰诚栋 . 基于光流辅助可变形卷积的压缩视频质量增强 . | 电视技术 , 2023 , 47 (02) , 24-27 .

Adaptive Streaming of Stereoscopic Panoramic Video Based on Reinforcement Learning EI CSCD PKU
Journal Article | 2022, 44 (4), 1461-1468 | Journal of Electronics and Information Technology

Abstract :

Currently, an effective stream-adaptation method for stereoscopic panoramic video transmission is missing; the traditional panoramic adaptive streaming strategy, when used to transmit binocular stereoscopic panoramic video, doubles the transmitted data and requires huge bandwidth. A multi-agent reinforcement learning based asymmetric adaptive streaming method for stereoscopic panoramic video is proposed in this paper to cope with limited and fluctuating network bandwidth in real time. First, because the human eye favors the salient regions of a video, each tile in the left and right views of a stereoscopic video contributes differently to the perceptual quality, and a tile-based method for predicting the viewing probability of the left and right views is proposed. Second, a multi-agent reinforcement learning framework based on Actor-Critic is designed for joint rate control of the left and right views. Finally, a reasonable reward function is designed based on the model structure and the principle of binocular suppression. Experimental results show that the proposed method is better suited to tile-based stereoscopic panoramic video transmission than traditional adaptive streaming strategies, providing a novel approach to joint rate control of stereoscopic panoramic video and to improving user Quality of Experience (QoE) under limited bandwidth.
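
To make the reward design concrete, here is a toy QoE-style reward in which binocular suppression is modeled by weighting the better-quality view more heavily and penalizing rebuffering and quality switches; the function and all coefficients are hypothetical, not the paper's reward.

```python
def qoe_reward(q_left, q_right, rebuffer_s, prev_q, alpha=0.7, beta=4.0, gamma=1.0):
    """Toy QoE-style reward for joint left/right rate control (illustrative only).

    Binocular suppression suggests perceived quality is dominated by the
    better-quality view, so the two views are blended asymmetrically
    (alpha > 0.5). Rebuffering time and abrupt quality switches are penalized.
    All coefficients are hypothetical.
    """
    perceived = alpha * max(q_left, q_right) + (1 - alpha) * min(q_left, q_right)
    smoothness = abs(perceived - prev_q)
    return perceived - beta * rebuffer_s - gamma * smoothness

# One step in which the left view is streamed at a higher bitrate than the right view.
print(qoe_reward(q_left=0.9, q_right=0.6, rebuffer_s=0.0, prev_q=0.8))
```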

Keyword :

Bandwidth; Image communication systems; Multi agent systems; Quality control; Quality of service; Reinforcement learning; Stereo image processing; Video streaming

Cite:


GB/T 7714 Lan, Chengdong , Rao, Yingjie , Song, Caixia et al. Adaptive Streaming of Stereoscopic Panoramic Video Based on Reinforcement Learning [J]. | Journal of Electronics and Information Technology , 2022 , 44 (4) : 1461-1468 .
MLA Lan, Chengdong et al. "Adaptive Streaming of Stereoscopic Panoramic Video Based on Reinforcement Learning" . | Journal of Electronics and Information Technology 44 . 4 (2022) : 1461-1468 .
APA Lan, Chengdong , Rao, Yingjie , Song, Caixia , Chen, Jian . Adaptive Streaming of Stereoscopic Panoramic Video Based on Reinforcement Learning . | Journal of Electronics and Information Technology , 2022 , 44 (4) , 1461-1468 .

Adaptive Streaming of Stereoscopic Panoramic Video Based on Reinforcement Learning (基于强化学习的立体全景视频自适应流) CSCD PKU
Journal Article | 2022, 44 (04), 1461-1468 | 电子与信息学报

Abstract :

To address the lack of an effective stream-adaptation method for stereoscopic panoramic video transmission, and the problem that traditional panoramic adaptive streaming strategies double the transmitted data and require enormous bandwidth when delivering binocular stereoscopic panoramic video, this paper proposes an asymmetric adaptive streaming method for stereoscopic panoramic video based on multi-agent reinforcement learning to cope with network bandwidth fluctuations in real time. First, since the human eye favors the salient regions of a video and each tile in the left and right views contributes differently to the perceived quality of the stereoscopic video, a tile-based method for predicting the viewing probability of the left and right views is proposed. Second, a multi-agent reinforcement learning framework based on Actor-Critic is designed for joint rate control of the left and right views. Finally, a reasonable reward function is designed according to the model structure and the principle of binocular suppression. Experimental results show that, compared with traditional adaptive streaming strategies, the proposed method is more...

Keyword :

multi-agent reinforcement learning; stereoscopic panoramic video transmission; joint rate control; viewpoint prediction

Cite:


GB/T 7714 兰诚栋 , 饶迎节 , 宋彩霞 et al. 基于强化学习的立体全景视频自适应流 [J]. | 电子与信息学报 , 2022 , 44 (04) : 1461-1468 .
MLA 兰诚栋 et al. "基于强化学习的立体全景视频自适应流" . | 电子与信息学报 44 . 04 (2022) : 1461-1468 .
APA 兰诚栋 , 饶迎节 , 宋彩霞 , 陈建 . 基于强化学习的立体全景视频自适应流 . | 电子与信息学报 , 2022 , 44 (04) , 1461-1468 .

An Image Rendering Method Based on Sparse Multi-View Measurements (多视点稀疏测量的图像绘制方法) CSCD PKU
Journal Article | 2021, 47 (4), 882-890 | 自动化学报

Abstract :

To reduce the amount of video data that needs to be captured, state-of-the-art image-based rendering (IBR) methods map the dense-viewpoint information to the original signal of a compressed sensing framework and treat sparse-viewpoint images as random measurements. However, the low-dimensional measurement signal is a linear combination of all dense-viewpoint information, whereas the sparse-viewpoint images come from only part of the viewpoints, so the images captured at sparse viewpoints are inconsistent with the low-dimensional measurements. This paper proposes using an interval sampling matrix to eliminate the discrepancy between the measurement signal and the positions of the sparse-viewpoint images, and then constraining the sensing matrix formed by the measurement matrix and the basis functions to satisfy the restricted isometry property as far as possible, so that a unique and accurate solution of the original signal can be obtained. Simulation results show that, compared with state-of-the-art methods, the proposed method improves both subjective and objective quality when reconstructing scenes of different complexity.
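
As a small illustration of the setup, the sketch below builds a sensing matrix from an interval (every k-th viewpoint) sampling matrix and a DCT basis and evaluates its mutual coherence, a cheap proxy for the restricted isometry property that the paper constrains; the sizes and the comparison against random sampling are illustrative assumptions.

```python
import numpy as np
from scipy.fft import idct

# Build a sensing matrix from an interval (every k-th viewpoint) sampling matrix and a
# DCT sparsifying basis, then check its mutual coherence as a cheap proxy for the
# restricted isometry property. Sizes and the random-sampling comparison are illustrative.
rng = np.random.default_rng(0)
n, step = 128, 4
psi = idct(np.eye(n), axis=0, norm="ortho")                      # columns: DCT basis vectors

phi_interval = np.eye(n)[::step]                                 # keep every 4th viewpoint
phi_random = np.eye(n)[rng.choice(n, n // step, replace=False)]  # classic CS-style sampling

def mutual_coherence(a):
    a = a / np.linalg.norm(a, axis=0, keepdims=True)             # normalize columns
    gram = np.abs(a.T @ a)
    np.fill_diagonal(gram, 0.0)
    return gram.max()                                            # lower is better for recovery

print("interval sampling coherence:", mutual_coherence(phi_interval @ psi))
print("random sampling coherence:  ", mutual_coherence(phi_random @ psi))
```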

Keyword :

compressed sensing; image-based rendering; multi-view image reconstruction; epipolar plane image

Cite:


GB/T 7714 兰诚栋 , 林宇鹏 , 方大锐 et al. 多视点稀疏测量的图像绘制方法 [J]. | 自动化学报 , 2021 , 47 (4) : 882-890 .
MLA 兰诚栋 et al. "多视点稀疏测量的图像绘制方法" . | 自动化学报 47 . 4 (2021) : 882-890 .
APA 兰诚栋 , 林宇鹏 , 方大锐 , 陈建 . 多视点稀疏测量的图像绘制方法 . | 自动化学报 , 2021 , 47 (4) , 882-890 .

A Belief Propagation Algorithm for Epipolar-Line Matching in Longitudinal Panorama Roaming (全景纵向漫游中极线匹配的置信传播算法) CSCD PKU
Journal Article | 2018, 30 (03), 400-407 | 计算机辅助设计与图形学学报

Abstract :

Acquiring depth information is a key prerequisite for longitudinal panorama roaming. To improve the accuracy of matching between front and rear scene images, a belief propagation algorithm for epipolar-line matching is proposed. First, based on epipolar geometry, the epipolar lines of the front and rear scene images are constructed radiating in all directions from the image center. Second, the epipolar-line path information is used to add a vertical matching-cost component to the matching cost function, and a belief propagation algorithm is used to generate the disparity map. Finally, a disparity-to-depth computation model is built from the geometric relationship between the front and rear scene images to obtain the depth information. Experimental results show that, compared with a local optimization algorithm, the proposed algorithm improves the structural similarity of the depth maps by 0.22 on average and the peak signal-to-noise ratio by 24% on average, giving better results for acquiring depth information from front and rear scene images.
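
As a simplified illustration of the geometry, the sketch below builds the radial epipolar directions emanating from the image center and converts a radial disparity to depth under a pinhole model with pure forward translation; the formulas follow that idealized model, not necessarily the paper's exact derivation.

```python
import numpy as np

def radial_epipolar_dirs(h, w, cx=None, cy=None):
    """Unit directions of epipolar lines radiating from the image center.

    For a camera translating forward, epipolar lines pass through the focus of
    expansion (taken here as the image center), so matching can be restricted to
    these radial paths. Simplified pinhole model.
    """
    cx = (w - 1) / 2 if cx is None else cx
    cy = (h - 1) / 2 if cy is None else cy
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    vx, vy = xs - cx, ys - cy
    norm = np.hypot(vx, vy) + 1e-8
    return vx / norm, vy / norm

def depth_from_radial_disparity(r_near, disparity, baseline):
    """Depth of a point from the rear camera position, given its radial disparity.

    r_near: radial distance of the pixel in the image captured closer to the scene;
    disparity = r_near - r_far; baseline: forward camera displacement.
    Under pure forward translation, Z = baseline * r_near / disparity.
    """
    return baseline * r_near / np.maximum(disparity, 1e-8)

vx, vy = radial_epipolar_dirs(480, 640)                          # per-pixel epipolar directions
print(depth_from_radial_disparity(r_near=120.0, disparity=12.0, baseline=0.5))  # 5.0
```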

Keyword :

cost function; longitudinal panorama roaming; epipolar-line matching; depth map; belief propagation

Cite:


GB/T 7714 孙强强 , 兰诚栋 , 陈康杰 et al. 全景纵向漫游中极线匹配的置信传播算法 [J]. | 计算机辅助设计与图形学学报 , 2018 , 30 (03) : 400-407 .
MLA 孙强强 et al. "全景纵向漫游中极线匹配的置信传播算法" . | 计算机辅助设计与图形学学报 30 . 03 (2018) : 400-407 .
APA 孙强强 , 兰诚栋 , 陈康杰 , 方大锐 . 全景纵向漫游中极线匹配的置信传播算法 . | 计算机辅助设计与图形学学报 , 2018 , 30 (03) , 400-407 .
