
苏立超 (Su Lichao)

Lecturer (University)

College of Computer and Data Science / College of Software


Total Results: 15

IFE-Net: Integrated feature enhancement network for image manipulation localization SCIE
Journal article | 2024, 153 | IMAGE AND VISION COMPUTING

Abstract :

Image tampering techniques can lead to distorted or misleading information, which in turn poses a threat in many areas, including the social, legal, and commercial domains. Numerous image tampering detection algorithms lose important low-level detail information when extracting deep features, reducing the accuracy and robustness of detection. To address these problems, this paper proposes a new network called IFE-Net to detect three types of tampered images, namely copy-move, heterologous splicing, and removal. Firstly, this paper constructs the noise stream using the attention mechanism CBAM to extract and optimize the noise features. The high-level features are extracted by the backbone network of the RGB stream, and the FEASPP module is built to capture and enhance features at different scales. In addition, the initial features of the RGB stream are additionally supervised so as to limit the detection area and reduce false alarms. Finally, the final prediction results are obtained by fusing the noise features with the RGB features through the Dual Attention Mechanism (DAM) module. Extensive experimental results on multiple standard datasets show that IFE-Net can accurately locate the tampering region and effectively reduce false alarms, demonstrating superior performance.
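
For readers unfamiliar with CBAM, the attention mechanism named in the abstract above, the following is a minimal PyTorch sketch of a CBAM-style block (channel attention followed by spatial attention). It is illustrative only; layer sizes and details are assumptions, not the exact module used in IFE-Net.

```python
import torch
import torch.nn as nn

class CBAM(nn.Module):
    """CBAM-style block: channel attention followed by spatial attention.
    Illustrative sketch only; not the exact configuration used in IFE-Net."""
    def __init__(self, channels, reduction=16, spatial_kernel=7):
        super().__init__()
        # Channel attention: a shared MLP applied to avg- and max-pooled descriptors.
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False),
        )
        # Spatial attention: a conv over the channel-wise mean and max maps.
        self.spatial = nn.Conv2d(2, 1, spatial_kernel,
                                 padding=spatial_kernel // 2, bias=False)

    def forward(self, x):
        ca = torch.sigmoid(self.mlp(x.mean((2, 3), keepdim=True)) +
                           self.mlp(x.amax((2, 3), keepdim=True)))
        x = x * ca
        sa = torch.sigmoid(self.spatial(torch.cat(
            [x.mean(1, keepdim=True), x.amax(1, keepdim=True)], dim=1)))
        return x * sa
```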

Keyword :

Attention mechanism; Edge supervision; Tampered localization

Cite:


GB/T 7714 Su, Lichao , Dai, Chenwei , Yu, Hao et al. IFE-Net: Integrated feature enhancement network for image manipulation localization [J]. | IMAGE AND VISION COMPUTING , 2024 , 153 .
MLA Su, Lichao et al. "IFE-Net: Integrated feature enhancement network for image manipulation localization" . | IMAGE AND VISION COMPUTING 153 (2024) .
APA Su, Lichao , Dai, Chenwei , Yu, Hao , Chen, Yun . IFE-Net: Integrated feature enhancement network for image manipulation localization . | IMAGE AND VISION COMPUTING , 2024 , 153 .

Version :

IFE-Net: Integrated feature enhancement network for image manipulation localization Scopus
Journal article | 2025, 153 | Image and Vision Computing
IFE-Net: Integrated feature enhancement network for image manipulation localization EI
Journal article | 2025, 153 | Image and Vision Computing
MSU-Net: the multi-scale supervised U-Net for image splicing forgery localization SCIE
Journal article | 2024, 27 (3) | PATTERN ANALYSIS AND APPLICATIONS

Abstract :

Image splicing forgery, that is, copying parts of one image into another image, is one of the frequently used tampering methods in image forgery. As a research hotspot in recent years, deep learning has been used in image forgery detection. However, current deep learning methods have two drawbacks: first, their feature fusion is too simple; second, they rely only on a single cross-entropy loss as the loss function, leaving models prone to overfitting. To address these issues, an image splicing forgery localization method based on a multi-scale supervised U-shaped network, named MSU-Net, is proposed in this paper. First, a triple-stream feature extraction module is designed, which combines the noise view and edge information of the input image to extract semantic-related and semantic-agnostic features. Second, a feature hierarchical fusion mechanism is proposed that introduces a channel attention mechanism layer by layer to perceive multi-level manipulation trajectories, avoiding the loss of information in semantic-related and semantic-agnostic shallow features during the convolution process. Finally, a multi-scale supervision strategy is developed: a boundary artifact localization module is designed to compute the edge loss, and a contrastive learning module is introduced to compute the contrastive loss. Through extensive experiments on several public datasets, MSU-Net demonstrates high accuracy in localizing tampered regions and outperforms state-of-the-art methods. Additional attack experiments show that MSU-Net exhibits good robustness against Gaussian blur, Gaussian noise, and JPEG compression attacks. Besides, MSU-Net is superior in terms of model complexity and localization speed.
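
As a rough illustration of the "channel attention layer by layer" fusion idea described above, the sketch below gates two same-shape feature maps with SE-style channel weights. The gating design is a generic assumption for illustration, not MSU-Net's actual fusion block.

```python
import torch
import torch.nn as nn

class ChannelGateFusion(nn.Module):
    """Fuse two feature maps of the same shape with an SE-style channel gate.
    Generic sketch; not the exact hierarchical fusion module of MSU-Net."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(2 * channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, semantic_feat, noise_feat):
        # Per-channel weights decide how much of each stream to keep.
        w = self.gate(torch.cat([semantic_feat, noise_feat], dim=1))
        return w * semantic_feat + (1.0 - w) * noise_feat
```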

Keyword :

Feature hierarchical fusion; Image splicing forgery localization; Multi-scale supervision; U-Net

Cite:


GB/T 7714 Yu, Hao , Su, Lichao , Dai, Chenwei et al. MSU-Net: the multi-scale supervised U-Net for image splicing forgery localization [J]. | PATTERN ANALYSIS AND APPLICATIONS , 2024 , 27 (3) .
MLA Yu, Hao et al. "MSU-Net: the multi-scale supervised U-Net for image splicing forgery localization" . | PATTERN ANALYSIS AND APPLICATIONS 27 . 3 (2024) .
APA Yu, Hao , Su, Lichao , Dai, Chenwei , Wang, Jinli . MSU-Net: the multi-scale supervised U-Net for image splicing forgery localization . | PATTERN ANALYSIS AND APPLICATIONS , 2024 , 27 (3) .

Version :

MSU-Net: the multi-scale supervised U-Net for image splicing forgery localization Scopus
Journal article | 2024, 27 (3) | Pattern Analysis and Applications
DMFF-Net: Double-stream multilevel feature fusion network for image forgery localization SCIE
Journal article | 2023, 127 | ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE
WoS CC Cited Count: 5

Abstract :

With the rapid development of image processing technology, it has become increasingly easy to manipulate images, which poses a threat to the stability and security of people's lives. Recent methods have proposed the fusion of RGB and noise features to uncover tampering traces. However, these approaches overlook the characteristics of features at different levels, leading to insufficient feature fusion. To address this problem, this paper proposes a double-stream multilevel feature fusion network (DMFF-Net). Unlike the traditional feature fusion approach, DMFF-Net adopts a graded feature fusion strategy. It classifies features into primary, intermediate, and advanced levels and introduces the Primary Feature Fusion Module (PFFM) and the Advanced Feature Fusion Module (AFFM) to achieve superior fusion results. Additionally, a multisupervision strategy is employed to decode the fused features into level-specific masks, including boundary, regular, and refined masks. The DMFF-Net is validated on publicly available datasets, including CASIA, Columbia, COVERAGE, and NIST16, as well as a real-life manipulated image dataset, IMD20, and achieves AUCs of 84.7%, 99.6%, 86.6%, 87.4% and 82.8%, respectively. Extensive experiments show that our DMFF-Net outperforms state-of-the-art methods in terms of image manipulation localization accuracy and exhibits improved robustness.
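
The multisupervision strategy described above decodes the fused features into boundary, regular, and refined masks, each with its own supervision. A generic way to combine such per-mask losses is a weighted sum, sketched below in PyTorch; the BCE terms and weights are illustrative assumptions, not DMFF-Net's exact loss.

```python
import torch.nn.functional as F

def multisupervision_loss(pred_boundary, pred_region, pred_refined,
                          gt_boundary, gt_region,
                          w_boundary=1.0, w_region=1.0, w_refined=1.0):
    """Weighted sum of per-mask BCE losses; a generic stand-in for the
    multisupervision strategy, with illustrative weights."""
    l_boundary = F.binary_cross_entropy_with_logits(pred_boundary, gt_boundary)
    l_region = F.binary_cross_entropy_with_logits(pred_region, gt_region)
    l_refined = F.binary_cross_entropy_with_logits(pred_refined, gt_region)
    return w_boundary * l_boundary + w_region * l_region + w_refined * l_refined
```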

Keyword :

Boundary supervision; Graded feature fusion; Image manipulation localization; Multisupervision; Refinement strategy

Cite:


GB/T 7714 Xia, Xiang , Su, Li Chao , Wang, Shi Ping et al. DMFF-Net: Double-stream multilevel feature fusion network for image forgery localization [J]. | ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE , 2023 , 127 .
MLA Xia, Xiang et al. "DMFF-Net: Double-stream multilevel feature fusion network for image forgery localization" . | ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE 127 (2023) .
APA Xia, Xiang , Su, Li Chao , Wang, Shi Ping , Li, Xiao Yan . DMFF-Net: Double-stream multilevel feature fusion network for image forgery localization . | ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE , 2023 , 127 .

Version :

DMFF-Net: Double-stream multilevel feature fusion network for image forgery localization Scopus
Journal article | 2024, 127 | Engineering Applications of Artificial Intelligence
DMFF-Net: Double-stream multilevel feature fusion network for image forgery localization EI
Journal article | 2024, 127 | Engineering Applications of Artificial Intelligence
An Adaptive Segmentation Based Multi-mode Inter-frame Coding Method for Video Point Cloud EI CSCD PKU
Journal article | 2023, 49 (8), 1707-1722 | Acta Automatica Sinica

Abstract :

Video-based point cloud compression (V-PCC) provides an efficient solution for compressing dynamic point clouds, but its projection from 3D to 2D destroys the correlation of 3D inter-frame motion and reduces the performance of inter-frame coding. To solve this problem, this paper proposes an adaptive-segmentation-based multi-mode inter-frame coding method for video point clouds to improve V-PCC and designs a new dynamic point cloud inter-frame encoding framework. First, to achieve more accurate block prediction, a block matching method based on adaptive regional segmentation is proposed to find the best matching block. Second, to further improve the performance of inter-frame coding, a multi-mode inter-frame coding method based on joint attribute rate-distortion optimization (RDO) is proposed to increase prediction accuracy and reduce bit rate consumption. Experimental results show that the improved algorithm achieves a -22.57% Bjontegaard delta bit rate (BD-BR) gain compared with V-PCC. The algorithm is especially suitable for dynamic point cloud scenes with little change between frames, such as video surveillance and video conferencing.
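
The mode selection described above relies on rate-distortion optimization (RDO): each candidate mode is scored with the Lagrangian cost J = D + λR and the cheapest mode wins. The sketch below illustrates only that decision rule; the mode names and numbers are placeholders, not V-PCC internals.

```python
def choose_mode(candidates, lmbda):
    """Pick the coding mode with the smallest Lagrangian cost J = D + lambda * R.
    `candidates` maps a mode name to a (distortion, bits) pair; all values below
    are hypothetical placeholders, not V-PCC internals."""
    costs = {mode: d + lmbda * r for mode, (d, r) in candidates.items()}
    return min(costs, key=costs.get)

# Hypothetical per-block measurements (distortion as SSE, rate in bits).
modes = {"intra": (1500.0, 320), "inter_matched_block": (900.0, 410), "skip": (2600.0, 12)}
print(choose_mode(modes, lmbda=2.0))  # -> inter_matched_block for this lambda
```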

Keyword :

Electric distortion; Image coding; Image compression; Security systems; Signal distortion; Video signal processing

Cite:


GB/T 7714 Chen, Jian , Liao, Yan-Jun , Wang, Kuo et al. An Adaptive Segmentation Based Multi-mode Inter-frame Coding Method for Video Point Cloud [J]. | Acta Automatica Sinica , 2023 , 49 (8) : 1707-1722 .
MLA Chen, Jian et al. "An Adaptive Segmentation Based Multi-mode Inter-frame Coding Method for Video Point Cloud" . | Acta Automatica Sinica 49 . 8 (2023) : 1707-1722 .
APA Chen, Jian , Liao, Yan-Jun , Wang, Kuo , Zheng, Ming-Kui , Su, Li-Chao . An Adaptive Segmentation Based Multi-mode Inter-frame Coding Method for Video Point Cloud . | Acta Automatica Sinica , 2023 , 49 (8) , 1707-1722 .


Combining Edge-Guided Attention and Sparse-Connected U-Net for Detection of Image Splicing CPCI-S
Journal article | 2023, 14255, 167-179 | ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING, ICANN 2023, PT II

Abstract :

The use of image-splicing technologies has detrimental effects on the security of multimedia information. Hence, it is necessary to develop effective methods for detecting and locating such tampering. Previous studies have mainly focused on the supervisory role of the mask on the model; the mask edges contain rich complementary signals that help to fully understand the image but are usually ignored. In this paper, we propose a new network named EAU-Net to detect and locate spliced regions in an image. The proposed network consists of two parts: an Edge-guided SegFormer and a Sparse-connected U-Net (SCU). Firstly, the feature extraction module captures local detailed cues and global environment information, which are used by SegFormer to deduce the initial location of the affected regions. Secondly, a Sobel-based edge-guided module (EGM) is proposed to guide the network to explore the complementary relationship between splicing regions and their boundaries. Thirdly, to achieve more precise localization results, the SCU is used as postprocessing to remove false-alarm pixels outside the focus regions. In addition, we propose an adaptive loss weight adjustment algorithm to supervise network training, through which the weights of the mask and the mask edge are adjusted automatically. Extensive experimental results show that the proposed method outperforms state-of-the-art splicing detection and localization methods in terms of detection accuracy and robustness.
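
The Sobel-based edge-guided module mentioned above extracts boundary cues to guide the network. A minimal fixed-kernel Sobel operator over feature maps is sketched below in PyTorch; it is a generic building block under assumed settings, not the paper's exact EGM.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SobelEdge(nn.Module):
    """Apply fixed Sobel kernels per channel and return a gradient-magnitude map.
    Generic sketch of an edge-cue extractor, not EAU-Net's exact EGM."""
    def __init__(self):
        super().__init__()
        gx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
        gy = gx.t().contiguous()
        # Register as buffers so they move with .to(device) but are not trained.
        self.register_buffer("gx", gx.view(1, 1, 3, 3))
        self.register_buffer("gy", gy.view(1, 1, 3, 3))

    def forward(self, x):
        c = x.shape[1]
        ex = F.conv2d(x, self.gx.repeat(c, 1, 1, 1), padding=1, groups=c)
        ey = F.conv2d(x, self.gy.repeat(c, 1, 1, 1), padding=1, groups=c)
        return torch.sqrt(ex ** 2 + ey ** 2 + 1e-6)
```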

Keyword :

Image manipulation localization; Image splicing forgery detection; Splicing detection

Cite:


GB/T 7714 Wan, Lin , Su, Lichao , Luo, Huan et al. Combining Edge-Guided Attention and Sparse-Connected U-Net for Detection of Image Splicing [J]. | ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING, ICANN 2023, PT II , 2023 , 14255 : 167-179 .
MLA Wan, Lin et al. "Combining Edge-Guided Attention and Sparse-Connected U-Net for Detection of Image Splicing" . | ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING, ICANN 2023, PT II 14255 (2023) : 167-179 .
APA Wan, Lin , Su, Lichao , Luo, Huan , Li, Xiaoyan . Combining Edge-Guided Attention and Sparse-Connected U-Net for Detection of Image Splicing . | ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING, ICANN 2023, PT II , 2023 , 14255 , 167-179 .

Version :

Combining Edge-Guided Attention and Sparse-Connected U-Net for Detection of Image Splicing EI
Conference paper | 2023, 14255 LNCS, 167-179
Combining Edge-Guided Attention and Sparse-Connected U-Net for Detection of Image Splicing Scopus
Other | 2023, 14255 LNCS, 167-179 | Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
DS-Net: Dual supervision neural network for image manipulation localization SCIE
Journal article | 2023, 17 (12), 3551-3563 | IET IMAGE PROCESSING

Abstract :

With the rapid development of image editing technology, tampering with images has become easier. Maliciously tampered images lead to serious security problems (e.g., when used as evidence). The current mainstream types of image tampering are copy-move, splicing, and removal, and many image tampering detection methods can only detect one of them. Additionally, some methods learn features by suppressing image content, which can result in false positives when identifying tampered areas. In this paper, the authors propose a novel framework named the dual supervision neural network (DS-Net) to localize the regions of images tampered by the three methods mentioned above. First, to extract richer multiscale information, the authors add skip connections to the atrous spatial pyramid pooling (ASPP) module. Second, a channel attention mechanism is introduced to dynamically weight the results generated by ASPP. Finally, the authors build additional supervised branches for high-level features to further enhance their extraction before fusing them with low-level features. The authors conduct experiments on various standard datasets. Through extensive experiments, the results show that the AUC scores reach 86.4%, 95.3%, and 99.6% for the CASIA, COVERAGE, and NIST16 datasets, respectively, and the F1 scores are 56.0%, 73.4%, and 82.7%, respectively. The results demonstrate that the authors' method can accurately locate tampered regions and achieves better performance on various datasets than other methods of the same type.
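
The first modification described above adds skip connections to the ASPP module. A compact ASPP-style block with parallel dilated convolutions and a residual skip is sketched below; the dilation rates and channel widths are illustrative assumptions, not DS-Net's exact configuration.

```python
import torch
import torch.nn as nn

class ASPPWithSkip(nn.Module):
    """ASPP-style block with parallel dilated convolutions plus a residual skip.
    Dilation rates and widths are illustrative, not DS-Net's exact settings."""
    def __init__(self, in_ch, out_ch, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r, bias=False),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )
            for r in rates
        ])
        self.project = nn.Conv2d(len(rates) * out_ch, out_ch, 1)
        self.skip = nn.Conv2d(in_ch, out_ch, 1)  # skip connection around the pyramid

    def forward(self, x):
        pyramid = self.project(torch.cat([b(x) for b in self.branches], dim=1))
        return pyramid + self.skip(x)
```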

Keyword :

image forensics; image processing

Cite:


GB/T 7714 Dai, Chenwei , Su, Lichao , Wu, Bin et al. DS-Net: Dual supervision neural network for image manipulation localization [J]. | IET IMAGE PROCESSING , 2023 , 17 (12) : 3551-3563 .
MLA Dai, Chenwei et al. "DS-Net: Dual supervision neural network for image manipulation localization" . | IET IMAGE PROCESSING 17 . 12 (2023) : 3551-3563 .
APA Dai, Chenwei , Su, Lichao , Wu, Bin , Chen, Jian . DS-Net: Dual supervision neural network for image manipulation localization . | IET IMAGE PROCESSING , 2023 , 17 (12) , 3551-3563 .

Version :

DS-Net: Dual supervision neural network for image manipulation localization
Journal article | 2023, 17 (12), 3551-3563 | IET Image Processing
DS-Net: Dual supervision neural network for image manipulation localization EI
Journal article | 2023, 17 (12), 3551-3563 | IET Image Processing
DS-Net: Dual supervision neural network for image manipulation localization Scopus
Journal article | 2023, 17 (12), 3551-3563 | IET Image Processing
Quantitative detection of microRNA-21 in vivo using in situ assembled photoacoustic and SERS nanoprobes SCIE
Journal article | 2023, 14 (47), 13860-13869 | CHEMICAL SCIENCE
WoS CC Cited Count: 5

Abstract :

Accurately quantifying microRNA levels in vivo is of great importance for cancer staging and prognosis. However, the low abundance of microRNAs and interference from the complex tumor microenvironment usually limit the real-time quantification of microRNAs in vivo. Herein, for the first time, we develop an ultrasensitive microRNA (miR)-21-activated ratiometric nanoprobe for quantifying the miR-21 concentration in vivo without signal amplification, as well as for dynamically tracking its distribution. The core-satellite nanoprobe, formed by miR-21-triggered in situ self-assembly, was built from nanogapped gold nanoparticles (AuNNP probe) and gold nanoparticles (AuNP probe). The AuNP probe generated a photoacoustic (PA) signal and a ratiometric SERS signal that varied with miR-21, whereas the AuNNP probe served as an internal standard, enabling ratiometric SERS imaging of miR-21. The absolute concentration of miR-21 in MCF-7 tumor-bearing mice was quantified to be 83.8 +/- 24.6 pM via PA and ratiometric SERS imaging. Our strategy provides a powerful approach for the quantitative detection of microRNAs in vivo, offering a reference for the clinical treatment of cancer.

Cite:


GB/T 7714 Zheng, Liting , Li, Qingqing , Wu, Ying et al. Quantitative detection of microRNA-21 in vivo using in situ assembled photoacoustic and SERS nanoprobes [J]. | CHEMICAL SCIENCE , 2023 , 14 (47) : 13860-13869 .
MLA Zheng, Liting et al. "Quantitative detection of microRNA-21 in vivo using in situ assembled photoacoustic and SERS nanoprobes" . | CHEMICAL SCIENCE 14 . 47 (2023) : 13860-13869 .
APA Zheng, Liting , Li, Qingqing , Wu, Ying , Su, Lichao , Du, Wei , Song, Jibin et al. Quantitative detection of microRNA-21 in vivo using in situ assembled photoacoustic and SERS nanoprobes . | CHEMICAL SCIENCE , 2023 , 14 (47) , 13860-13869 .

Version :

Quantitative detection of microRNA-21 in vivo using in situ assembled photoacoustic and SERS nanoprobes Scopus
Journal article | 2023, 14 (47), 13860-13869 | Chemical Science
Quantitative detection of microRNA-21 in vivo using in situ assembled photoacoustic and SERS nanoprobes EI
Journal article | 2023, 14 (47), 13860-13869 | Chemical Science
Learning matrix factorization with scalable distance metric and regularizer SCIE
Journal article | 2023, 161, 254-266 | NEURAL NETWORKS
WoS CC Cited Count: 3

Abstract :

Matrix factorization has always been an encouraging field, which attempts to extract discriminative features from high-dimensional data. However, it suffers from poor generalization ability and high computational complexity when handling large-scale data. In this paper, we propose a learnable deep matrix factorization via the projected gradient descent method, which learns multi-layer low-rank factors from scalable metric distances and flexible regularizers. Accordingly, solving a constrained matrix factorization problem is equivalently transformed into training a neural network with an appropriate activation function induced from the projection onto a feasible set. Distinct from other neural networks, the proposed method activates the connected weights, not just the hidden layers. As a result, it is proved that the proposed method can learn several existing well-known matrix factorizations, including singular value decomposition, convex, nonnegative, and semi-nonnegative matrix factorizations. Finally, comprehensive experiments demonstrate the superiority of the proposed method over other state-of-the-art methods.
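
The abstract above casts constrained matrix factorization as projected gradient descent, where each gradient step is followed by a projection onto the feasible set (for the nonnegative case, the projection max(·, 0) acts like a ReLU). A minimal NumPy sketch for nonnegative factorization X ≈ WH is given below; the step size and iteration count are illustrative, and this is not the paper's learned network.

```python
import numpy as np

def pgd_nmf(X, rank, steps=500, lr=1e-3, seed=0):
    """Nonnegative factorization X ~= W @ H by projected gradient descent.
    The projection max(., 0) onto the nonnegative orthant plays the role of a
    ReLU-like activation; a generic sketch, not the paper's learned network."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, rank))
    H = rng.random((rank, n))
    for _ in range(steps):
        R = W @ H - X                       # residual of the current factorization
        gW, gH = R @ H.T, W.T @ R           # gradients of 0.5 * ||W @ H - X||_F^2
        W = np.maximum(W - lr * gW, 0.0)    # gradient step followed by projection
        H = np.maximum(H - lr * gH, 0.0)
    return W, H

X = np.abs(np.random.default_rng(1).random((20, 15)))
W, H = pgd_nmf(X, rank=5)
print(np.linalg.norm(X - W @ H) / np.linalg.norm(X))  # relative reconstruction error
```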

Keyword :

Deep learning; Feature representation; Learnable auto-encoder; Machine learning; Matrix factorization; Projected gradient

Cite:


GB/T 7714 Wang, Shiping , Zhang, Yunhe , Lin, Xincan et al. Learning matrix factorization with scalable distance metric and regularizer [J]. | NEURAL NETWORKS , 2023 , 161 : 254-266 .
MLA Wang, Shiping et al. "Learning matrix factorization with scalable distance metric and regularizer" . | NEURAL NETWORKS 161 (2023) : 254-266 .
APA Wang, Shiping , Zhang, Yunhe , Lin, Xincan , Su, Lichao , Xiao, Guobao , Zhu, William et al. Learning matrix factorization with scalable distance metric and regularizer . | NEURAL NETWORKS , 2023 , 161 , 254-266 .

Version :

Learning matrix factorization with scalable distance metric and regularizer EI
Journal article | 2023, 161, 254-266 | Neural Networks
Learning matrix factorization with scalable distance metric and regularizer Scopus
Journal article | 2023, 161, 254-266 | Neural Networks
Tracking Cell Viability for Adipose-Derived Mesenchymal Stem Cell-Based Therapy by Quantitative Fluorescence Imaging in the Second Near-Infrared Window SCIE
Journal article | 2022, 16 (2), 2889-2900 | ACS NANO
WoS CC Cited Count: 38

Abstract :

Cell survival rate determines engraftment efficiency in adipose-derived mesenchymal stem cell (ADSC)-based regenerative medicine. In vivo monitoring of ADSC viability to achieve effective tissue regeneration is a major challenge for ADSC therapy. Here, we developed an activated near-infrared II (NIR-II) fluorescent nanoparticle consisting of lanthanide-based down-conversion nanoparticles (DCNPs) and IR786s (DCNP@IR786s) for cell labeling and real-time tracking of ADSC viability in vivo. In dying ADSCs, due to excessive ROS generation, the absorption-competition-induced emission of IR786s was destroyed, which could turn on the NIR-II fluorescence of DCNPs at 1550 nm under 808 nm laser excitation. In contrast, the NIR-II fluorescence of DCNPs at 1550 nm under 980 nm laser excitation remained stable. This ratiometric fluorescent signal was precise and sensitive for tracking ADSC viability in vivo. Significantly, the nanoparticle could be applied to quantitatively evaluate stem cell viability in real time in vivo. Using this method, we identified two small molecules, glutathione and dexamethasone, that could improve stem cell engraftment efficiency and enhance ADSC therapy in a mouse model of liver fibrosis. Therefore, we provide a potential strategy for real-time in vivo quantitative tracking of stem cell viability in ADSC therapy.

Keyword :

cell viability; fluorescence imaging; liver fibrosis; mesenchymal stem cell; NIR-II

Cite:


GB/T 7714 Liao, Naishun , Su, Lichao , Cao, Yanbing et al. Tracking Cell Viability for Adipose-Derived Mesenchymal Stem Cell-Based Therapy by Quantitative Fluorescence Imaging in the Second Near-Infrared Window [J]. | ACS NANO , 2022 , 16 (2) : 2889-2900 .
MLA Liao, Naishun et al. "Tracking Cell Viability for Adipose-Derived Mesenchymal Stem Cell-Based Therapy by Quantitative Fluorescence Imaging in the Second Near-Infrared Window" . | ACS NANO 16 . 2 (2022) : 2889-2900 .
APA Liao, Naishun , Su, Lichao , Cao, Yanbing , Qiu, Liman , Xie, Rong , Peng, Fang et al. Tracking Cell Viability for Adipose-Derived Mesenchymal Stem Cell-Based Therapy by Quantitative Fluorescence Imaging in the Second Near-Infrared Window . | ACS NANO , 2022 , 16 (2) , 2889-2900 .

Version :

Tracking Cell Viability for Adipose-Derived Mesenchymal Stem Cell-Based Therapy by Quantitative Fluorescence Imaging in the Second Near-Infrared Window EI
Journal article | 2022, 16 (2), 2889-2900 | ACS Nano
Dynamic convolutional capsule network for In-loop filtering in HEVC video codec SCIE
Journal article | 2022, 17 (2), 439-449 | IET IMAGE PROCESSING

Abstract :

Recently, several in-loop filtering algorithms based on convolutional neural networks (CNNs) have been proposed to improve the efficiency of HEVC (High Efficiency Video Coding). Conventional CNN-based filters apply only a single model to the whole image, which cannot adapt well to all local features of the image. To solve this problem, an in-loop filtering algorithm based on a dynamic convolutional capsule network (DCC-net) is proposed, which embeds localized dynamic routing and dynamic segmentation algorithms into a capsule network and integrates it into the HEVC hybrid video coding framework as a new in-loop filter. The proposed method brings average BD-BR reductions of 7.9% and 5.9% under the all-intra (AI) and random-access (RA) configurations, respectively, as well as BD-PSNR gains of 0.4 dB and 0.2 dB. In addition, the proposed algorithm has outstanding performance in terms of time efficiency.
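
The BD-BR and BD-PSNR numbers quoted above come from the standard Bjontegaard metric, which fits cubic polynomials to each codec's (log bitrate, PSNR) points and averages the gap between the fitted curves over the common rate range. A small NumPy sketch of BD-PSNR follows; the rate-distortion points are hypothetical.

```python
import numpy as np

def bd_psnr(rates_ref, psnr_ref, rates_test, psnr_test):
    """Bjontegaard delta PSNR: average vertical gap between two rate-distortion
    curves, each fitted with a cubic polynomial in log10(bitrate)."""
    lr_ref, lr_test = np.log10(rates_ref), np.log10(rates_test)
    p_ref = np.polyfit(lr_ref, psnr_ref, 3)
    p_test = np.polyfit(lr_test, psnr_test, 3)
    lo, hi = max(lr_ref.min(), lr_test.min()), min(lr_ref.max(), lr_test.max())
    # Integrate both fits over the common log-rate interval and compare averages.
    int_ref = np.polyval(np.polyint(p_ref), hi) - np.polyval(np.polyint(p_ref), lo)
    int_test = np.polyval(np.polyint(p_test), hi) - np.polyval(np.polyint(p_test), lo)
    return (int_test - int_ref) / (hi - lo)

# Hypothetical rate-PSNR points (kbps, dB) for an anchor codec and a filtered one.
anchor = (np.array([1000., 2000., 4000., 8000.]), np.array([34.0, 36.5, 38.8, 40.6]))
filtered = (np.array([1000., 2000., 4000., 8000.]), np.array([34.4, 36.9, 39.1, 40.9]))
print(round(bd_psnr(*anchor, *filtered), 3))  # positive value = PSNR gain at equal rate
```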

Cite:


GB/T 7714 Su, LiChao , Cao, Mengqing , Yu, Yue et al. Dynamic convolutional capsule network for In-loop filtering in HEVC video codec [J]. | IET IMAGE PROCESSING , 2022 , 17 (2) : 439-449 .
MLA Su, LiChao et al. "Dynamic convolutional capsule network for In-loop filtering in HEVC video codec" . | IET IMAGE PROCESSING 17 . 2 (2022) : 439-449 .
APA Su, LiChao , Cao, Mengqing , Yu, Yue , Chen, Jian , Yang, XiuZhi , Wu, Dapeng . Dynamic convolutional capsule network for In-loop filtering in HEVC video codec . | IET IMAGE PROCESSING , 2022 , 17 (2) , 439-449 .

Version :

Dynamic convolutional capsule network for In-loop filtering in HEVC video codec Scopus
Journal article | 2023, 17 (2), 439-449 | IET Image Processing
Dynamic convolutional capsule network for In-loop filtering in HEVC video codec EI
Journal article | 2023, 17 (2), 439-449 | IET Image Processing