Publication Search

Query:

Scholar name: Su Lichao (苏立超)


(2 result pages in total)
A Progressive Multiscale Fusion Network for Image Manipulation Localization EI
Conference paper | 2025, 352-357 | 5th International Conference on Neural Networks, Information and Communication Engineering, NNICE 2025

Abstract:

With the increasing number of fake images on the Internet, the detection and localization of such images have become a topic worthy of attention. However, existing methods generally have the following problems: single-type detection struggles to address the complexities of diverse real-world scenarios; over-reliance on specific situations limits the practical effectiveness of statistical methods in image manipulation localization; and the backbone feature extraction network often misidentifies high-contrast regions as manipulated areas during training. In response to these problems, this paper introduces a novel approach named Progressive Multiscale Fusion Network (PMF-Net) for image manipulation localization. To begin with, an Edge Trace Block is designed to extract multiscale edge features and perform edge supervision so that PMF-Net can obtain global context information on edge parts, including trusted tampering edge clues. Subsequently, we propose an innovative Attention Fusion Block that fuses features from two different sources using an attention map and then further extracts tampering-related information with lightweight attention. Extensive experiments show that our method outperforms state-of-the-art works in both localization performance and robustness on several benchmark datasets. © 2025 IEEE.
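The attention-map fusion of two feature sources can be sketched roughly as follows. This NumPy illustration substitutes a hand-crafted sigmoid attention map for the learned module; the function name, feature shapes, and attention logits are hypothetical, not taken from the paper:

```python
import numpy as np

def attention_fuse(feat_a, feat_b):
    """Fuse two feature maps with a per-position attention map in (0, 1).

    Illustrative stand-in for a trained attention module: here the logits
    are simply the mean-activation difference between the two sources,
    whereas a real block would compute them with learned convolutions.
    """
    logits = feat_a.mean(axis=0) - feat_b.mean(axis=0)
    attn = 1.0 / (1.0 + np.exp(-logits))        # sigmoid attention map (H, W)
    # Convex combination per element: attn weights feat_a, (1 - attn) feat_b.
    return attn * feat_a + (1.0 - attn) * feat_b

rgb_feat = np.random.rand(8, 16, 16)    # hypothetical RGB-stream features (C, H, W)
edge_feat = np.random.rand(8, 16, 16)   # hypothetical edge-stream features
fused = attention_fuse(rgb_feat, edge_feat)
```

Because the attention map lies in (0, 1), each fused value stays between the two source values at that position.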

Keyword:

Benchmarking; Digital forensics; Feature extraction; Statistical methods

Cite:

GB/T 7714: Xie, Cenyan, Su, Lichao, Guo, Chen. A Progressive Multiscale Fusion Network for Image Manipulation Localization [C]. 2025: 352-357.
MLA: Xie, Cenyan, et al. "A Progressive Multiscale Fusion Network for Image Manipulation Localization." (2025): 352-357.
APA: Xie, Cenyan, Su, Lichao, Guo, Chen. A Progressive Multiscale Fusion Network for Image Manipulation Localization. (2025): 352-357.

DMU-Net: a dual stream multi-scale U-Net for image splicing forgery localization SCIE
Journal article | 2025, 36 (4) | MACHINE VISION AND APPLICATIONS

Abstract:

With advancements in image processing and the proliferation of editing software, image splicing forgery has become increasingly easy to execute yet harder to detect, thereby impacting societal security. Effective detection and localization methods are urgently needed. Existing methods, while somewhat effective, often over-rely on semantic features, overlook shallow features, and struggle to adapt to varying tampered-region sizes. To address these issues, we propose a two-stream image splicing forgery localization network named DMU-Net. The network first introduces a noise stream as a supplementary feature alongside the RGB stream to provide a richer feature representation. Subsequently, we improve the Atrous Spatial Pyramid Pooling module by incorporating an attention mechanism that enables the model to obtain feature maps of different scales, effectively use context information, and better capture tampered regions of various sizes. Finally, we employ a dual attention mechanism to fuse features from both the encoder and decoder stages; this effectively leverages shallow features for fusing coarse-grained and fine-grained features, enhancing the model's ability to capture features across different dimensions. Extensive experimental results show that the proposed method outperforms state-of-the-art image splicing localization methods in terms of detection accuracy and robustness.
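A noise stream of this kind usually begins with a high-frequency residual of the input. Below is a minimal sketch, assuming a Laplacian-style high-pass kernel as a stand-in for the SRM-type filters commonly used in forgery detection (DMU-Net's actual noise extractor may differ):

```python
import numpy as np

def noise_residual(img):
    """Extract a high-frequency noise residual from a grayscale image.

    A 3x3 high-pass kernel suppresses image content and keeps local noise
    statistics, which splicing tends to disturb. Valid convolution, no
    padding, so the output shrinks by 2 in each dimension.
    """
    kernel = np.array([[-1, -1, -1],
                       [-1,  8, -1],
                       [-1, -1, -1]], dtype=float) / 8.0   # coefficients sum to 0
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = (img[i:i + 3, j:j + 3] * kernel).sum()
    return out

flat = np.full((16, 16), 0.5)      # constant region: residual should vanish
res = noise_residual(flat)
```

Because the kernel coefficients sum to zero, any locally constant region yields a zero residual; only texture and noise survive.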

Keyword:

Attention mechanism; Feature fusion; Image splicing forgery localization; Multi-scale

Cite:

GB/T 7714: Yu, Niankang, Su, Lichao, Wang, Jinli, et al. DMU-Net: a dual stream multi-scale U-Net for image splicing forgery localization [J]. MACHINE VISION AND APPLICATIONS, 2025, 36 (4).
MLA: Yu, Niankang, et al. "DMU-Net: a dual stream multi-scale U-Net for image splicing forgery localization." MACHINE VISION AND APPLICATIONS 36.4 (2025).
APA: Yu, Niankang, Su, Lichao, Wang, Jinli, Huang, Liming. DMU-Net: a dual stream multi-scale U-Net for image splicing forgery localization. MACHINE VISION AND APPLICATIONS, 2025, 36 (4).

Molecular Engineering of Direct Activated NIR-II Chemiluminescence Platform for In Vivo Chemiluminescence-fluorescence Duplex Imaging SCIE
Journal article | 2025, 16 (1) | NATURE COMMUNICATIONS
WoS CC Cited Count: 17

Abstract:

Chemiluminescence (CL) is a self-illuminating phenomenon fueled by chemical energy rather than external excitation light, which gives it superior sensitivity, signal-to-background ratios, and imaging depth. Strategies to synthesize a CL-emitting unimolecular skeleton in the second near-infrared window (NIR-II), and a unimolecular probe with direct duplex NIR-II CL/fluorescence (FL) emission, are lacking. Here, we employ modular synthesis routes to construct a series of directly activated NIR-II CL-emitting unimolecular probes with a maximum emission wavelength of up to 1060 nm, and use them for real-time, continuous detection of the superoxide anion generated in acetaminophen-induced liver injury in a female mouse model under both NIR-II CL and NIR-II FL imaging channels. This study thus establishes a directly activatable NIR-II CL-emitting unimolecular skeleton, validating the scalability of this duplex NIR-II CL/FL imaging platform for bioactive-molecule detection and disease diagnosis.

Cite:

GB/T 7714: Chen, Zhongxiang, Li, Qian, Wu, Ying, et al. Molecular Engineering of Direct Activated NIR-II Chemiluminescence Platform for In Vivo Chemiluminescence-fluorescence Duplex Imaging [J]. NATURE COMMUNICATIONS, 2025, 16 (1).
MLA: Chen, Zhongxiang, et al. "Molecular Engineering of Direct Activated NIR-II Chemiluminescence Platform for In Vivo Chemiluminescence-fluorescence Duplex Imaging." NATURE COMMUNICATIONS 16.1 (2025).
APA: Chen, Zhongxiang, Li, Qian, Wu, Ying, Liu, Jianyong, Liu, Luntao, Su, Lichao, et al. Molecular Engineering of Direct Activated NIR-II Chemiluminescence Platform for In Vivo Chemiluminescence-fluorescence Duplex Imaging. NATURE COMMUNICATIONS, 2025, 16 (1).

MSU-Net: the multi-scale supervised U-Net for image splicing forgery localization SCIE
Journal article | 2024, 27 (3) | PATTERN ANALYSIS AND APPLICATIONS
WoS CC Cited Count: 2

Abstract:

Image splicing forgery, that is, copying parts of one image into another, is one of the most frequently used tampering methods in image forgery. As a research hotspot in recent years, deep learning has been applied to image forgery detection. However, current deep learning methods have two drawbacks: first, their feature fusion is too simple; second, they rely on a single cross-entropy loss as the loss function, making models prone to overfitting. To address these issues, this paper proposes an image splicing forgery localization method based on a multi-scale supervised U-shaped network, named MSU-Net. First, a triple-stream feature extraction module is designed, which combines the noise view and edge information of the input image to extract semantic-related and semantic-agnostic features. Second, a feature hierarchical fusion mechanism is proposed that introduces a channel attention mechanism layer by layer to perceive multi-level manipulation trajectories, avoiding the loss of information in semantic-related and semantic-agnostic shallow features during convolution. Finally, a multi-scale supervision strategy is developed: a boundary artifact localization module computes the edge loss, and a contrastive learning module computes the contrastive loss. Through extensive experiments on several public datasets, MSU-Net demonstrates high accuracy in localizing tampered regions and outperforms state-of-the-art methods. Additional attack experiments show that MSU-Net is robust against Gaussian blur, Gaussian noise, and JPEG compression attacks. MSU-Net is also superior in terms of model complexity and localization speed.
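Training against several supervision signals at once amounts to a weighted sum of loss terms. The NumPy sketch below illustrates the shape of such an objective; the weights `w_edge` and `w_con` and the scalar `contrastive` term are hypothetical placeholders, not the paper's actual losses or weighting:

```python
import numpy as np

def bce(pred, target, eps=1e-7):
    """Binary cross-entropy averaged over pixels, with clipping for stability."""
    pred = np.clip(pred, eps, 1 - eps)
    return float(-(target * np.log(pred) + (1 - target) * np.log(1 - pred)).mean())

def total_loss(pred_mask, gt_mask, pred_edge, gt_edge, contrastive,
               w_edge=0.5, w_con=0.1):
    """Region loss + weighted edge loss + weighted contrastive term.

    Illustrative only: the weights and the precomputed contrastive scalar
    stand in for the paper's boundary-artifact and contrastive modules.
    """
    return (bce(pred_mask, gt_mask)
            + w_edge * bce(pred_edge, gt_edge)
            + w_con * contrastive)

gt = np.zeros((8, 8))
gt[2:6, 2:6] = 1.0                       # ground-truth tampered region
perfect = total_loss(gt, gt, gt, gt, contrastive=0.0)   # near-zero loss
```

A perfect prediction drives every term to (numerically) zero, while mispredicting the mask inflates the first term.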

Keyword:

Feature hierarchical fusion; Image splicing forgery localization; Multi-scale supervision; U-Net

Cite:

GB/T 7714: Yu, Hao, Su, Lichao, Dai, Chenwei, et al. MSU-Net: the multi-scale supervised U-Net for image splicing forgery localization [J]. PATTERN ANALYSIS AND APPLICATIONS, 2024, 27 (3).
MLA: Yu, Hao, et al. "MSU-Net: the multi-scale supervised U-Net for image splicing forgery localization." PATTERN ANALYSIS AND APPLICATIONS 27.3 (2024).
APA: Yu, Hao, Su, Lichao, Dai, Chenwei, Wang, Jinli. MSU-Net: the multi-scale supervised U-Net for image splicing forgery localization. PATTERN ANALYSIS AND APPLICATIONS, 2024, 27 (3).

IFE-Net: Integrated feature enhancement network for image manipulation localization SCIE
Journal article | 2024, 153 | IMAGE AND VISION COMPUTING

Abstract:

Image tampering techniques can produce distorted or misleading information, which poses a threat in many areas, including the social, legal, and commercial spheres. Many image tampering detection algorithms lose important low-level detail when extracting deep features, reducing detection accuracy and robustness. To address these problems, this paper proposes a new network called IFE-Net to detect three types of tampered images: copy-move, heterologous splicing, and removal. First, the noise stream is constructed using the attention mechanism CBAM to extract and optimize noise features. High-level features are extracted by the backbone network of the RGB stream, and the FEASPP module is built to capture and enhance features at different scales. In addition, the initial features of the RGB stream receive extra supervision so as to limit the detection area and reduce false alarms. Finally, the prediction is obtained by fusing the noise features with the RGB features through a Dual Attention Mechanism (DAM) module. Extensive experimental results on multiple standard datasets show that IFE-Net can accurately locate the tampered region and effectively reduce false alarms, demonstrating superior performance.

Keyword:

Attention mechanism; Edge supervision; Tampered localization

Cite:

GB/T 7714: Su, Lichao, Dai, Chenwei, Yu, Hao, et al. IFE-Net: Integrated feature enhancement network for image manipulation localization [J]. IMAGE AND VISION COMPUTING, 2024, 153.
MLA: Su, Lichao, et al. "IFE-Net: Integrated feature enhancement network for image manipulation localization." IMAGE AND VISION COMPUTING 153 (2024).
APA: Su, Lichao, Dai, Chenwei, Yu, Hao, Chen, Yun. IFE-Net: Integrated feature enhancement network for image manipulation localization. IMAGE AND VISION COMPUTING, 2024, 153.

Quantitative detection of microRNA-21 in vivo using in situ assembled photoacoustic and SERS nanoprobes SCIE
Journal article | 2023, 14 (47), 13860-13869 | CHEMICAL SCIENCE
WoS CC Cited Count: 5

Abstract:

Accurately quantifying microRNA levels in vivo is of great importance for cancer staging and prognosis. However, the low abundance of microRNAs and interference from the complex tumor microenvironment usually prevent real-time quantification of microRNAs in vivo. Here, for the first time, we develop an ultrasensitive microRNA-21 (miR-21)-activated ratiometric nanoprobe for quantifying the miR-21 concentration in vivo without signal amplification, as well as for dynamically tracking its distribution. The core-satellite nanoprobe, formed by miR-21-triggered in situ self-assembly, was built from nanogapped gold nanoparticles (AuNNP probe) and gold nanoparticles (AuNP probe). The AuNP probe generated a photoacoustic (PA) signal and a ratiometric SERS signal that varied with miR-21, whereas the AuNNP probe served as an internal standard, enabling ratiometric SERS imaging of miR-21. The absolute concentration of miR-21 in MCF-7 tumor-bearing mice was quantified as 83.8 +/- 24.6 pM via PA and ratiometric SERS imaging. Our strategy provides a powerful approach for the quantitative detection of microRNAs in vivo and a reference for the clinical treatment of cancer.

Cite:

GB/T 7714: Zheng, Liting, Li, Qingqing, Wu, Ying, et al. Quantitative detection of microRNA-21 in vivo using in situ assembled photoacoustic and SERS nanoprobes [J]. CHEMICAL SCIENCE, 2023, 14 (47): 13860-13869.
MLA: Zheng, Liting, et al. "Quantitative detection of microRNA-21 in vivo using in situ assembled photoacoustic and SERS nanoprobes." CHEMICAL SCIENCE 14.47 (2023): 13860-13869.
APA: Zheng, Liting, Li, Qingqing, Wu, Ying, Su, Lichao, Du, Wei, Song, Jibin, et al. Quantitative detection of microRNA-21 in vivo using in situ assembled photoacoustic and SERS nanoprobes. CHEMICAL SCIENCE, 2023, 14 (47), 13860-13869.

Learning matrix factorization with scalable distance metric and regularizer SCIE
Journal article | 2023, 161, 254-266 | NEURAL NETWORKS
WoS CC Cited Count: 3

Abstract:

Matrix factorization has always been an encouraging field, which attempts to extract discriminative features from high-dimensional data. However, it suffers from poor generalization ability and high computational complexity when handling large-scale data. In this paper, we propose a learnable deep matrix factorization via the projected gradient descent method, which learns multi-layer low-rank factors from scalable metric distances and flexible regularizers. Accordingly, solving a constrained matrix factorization problem is equivalently transformed into training a neural network with an appropriate activation function induced by the projection onto a feasible set. Distinct from other neural networks, the proposed method activates the connection weights, not just the hidden layers. As a result, it is proved that the proposed method can learn several well-known matrix factorizations, including singular value decomposition and convex, nonnegative, and semi-nonnegative matrix factorizations. Finally, comprehensive experiments demonstrate the superiority of the proposed method over other state-of-the-art approaches. © 2023 Elsevier Ltd. All rights reserved.
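The projection-as-activation idea is easiest to see in the nonnegative case: projected gradient descent alternates plain gradient steps with projection onto the nonnegative orthant, which coincides with a ReLU. A minimal single-layer NumPy sketch under that reading (the paper's model is multi-layer and learnable; `nmf_pgd` and its hyperparameters are illustrative):

```python
import numpy as np

def nmf_pgd(X, rank, steps=3000, lr=0.05, seed=0):
    """Nonnegative matrix factorization X ~ W @ H via projected gradient descent.

    Each gradient step on 0.5 * ||X - W @ H||_F^2 is followed by projection
    onto the feasible set (the nonnegative orthant), i.e. an elementwise
    max(., 0), which is exactly a ReLU activation.
    """
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, rank))
    H = rng.random((rank, n))
    for _ in range(steps):
        R = W @ H - X               # residual of the current factorization
        gW = R @ H.T                # gradient w.r.t. W
        gH = W.T @ R                # gradient w.r.t. H (uses W before its update)
        W = np.maximum(W - lr * gW, 0.0)   # gradient step, then projection
        H = np.maximum(H - lr * gH, 0.0)
    return W, H

# Recover an exactly rank-2 nonnegative matrix.
rng = np.random.default_rng(1)
X = rng.random((10, 2)) @ rng.random((2, 8))
W, H = nmf_pgd(X, rank=2)
err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
```

On exactly low-rank nonnegative data this simple scheme typically drives the relative reconstruction error close to zero while both factors stay nonnegative by construction.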

Keyword:

Deep learning; Feature representation; Learnable auto-encoder; Machine learning; Matrix factorization; Projected gradient

Cite:

GB/T 7714: Wang, Shiping, Zhang, Yunhe, Lin, Xincan, et al. Learning matrix factorization with scalable distance metric and regularizer [J]. NEURAL NETWORKS, 2023, 161: 254-266.
MLA: Wang, Shiping, et al. "Learning matrix factorization with scalable distance metric and regularizer." NEURAL NETWORKS 161 (2023): 254-266.
APA: Wang, Shiping, Zhang, Yunhe, Lin, Xincan, Su, Lichao, Xiao, Guobao, Zhu, William, et al. Learning matrix factorization with scalable distance metric and regularizer. NEURAL NETWORKS, 2023, 161, 254-266.

An Adaptive Segmentation Based Multi-mode Inter-frame Coding Method for Video Point Cloud EI CSCD PKU
Journal article | 2023, 49 (8), 1707-1722 | Acta Automatica Sinica

Abstract:

Video-based point cloud compression (V-PCC) provides an efficient solution for compressing dynamic point clouds, but its projection from 3D to 2D destroys the correlation of 3D inter-frame motion and reduces inter-frame coding performance. To solve this problem, we propose an adaptive-segmentation-based multi-mode inter-frame coding method for video point clouds to improve V-PCC, and design a new dynamic point cloud inter-frame encoding framework. First, to achieve more accurate block prediction, a block matching method based on adaptive regional segmentation is proposed to find the best matching block. Second, to further improve inter-coding performance, a multi-mode inter-frame coding method based on joint attribute rate-distortion optimization (RDO) is proposed to increase prediction accuracy and reduce bit-rate consumption. Experimental results show that the proposed algorithm achieves a -22.57% Bjontegaard delta bit rate (BD-BR) gain compared with V-PCC. The algorithm is especially suitable for dynamic point cloud scenes with little change between frames, such as video surveillance and video conferencing. © 2023 Science Press. All rights reserved.
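The BD-BR figure quoted above compares two rate-distortion curves by fitting log-bitrate as a cubic polynomial of PSNR for each codec and averaging the gap over the shared quality range. A sketch of the standard Bjontegaard computation (the rate/PSNR numbers below are synthetic, not from the paper):

```python
import numpy as np

def bd_rate(rate_anchor, psnr_anchor, rate_test, psnr_test):
    """Bjontegaard delta bitrate (BD-BR) between two rate-distortion curves.

    Fits log-rate as a cubic polynomial of PSNR for each codec, integrates
    the difference over the overlapping PSNR range, and converts the mean
    log-rate gap to a percentage (negative = bitrate saving).
    """
    fit_a = np.polyfit(psnr_anchor, np.log(rate_anchor), 3)
    fit_t = np.polyfit(psnr_test, np.log(rate_test), 3)
    lo = max(min(psnr_anchor), min(psnr_test))
    hi = min(max(psnr_anchor), max(psnr_test))
    int_a = np.polyval(np.polyint(fit_a), hi) - np.polyval(np.polyint(fit_a), lo)
    int_t = np.polyval(np.polyint(fit_t), hi) - np.polyval(np.polyint(fit_t), lo)
    avg_log_diff = (int_t - int_a) / (hi - lo)
    return (np.exp(avg_log_diff) - 1.0) * 100.0

anchor_rate = [1000.0, 2000.0, 4000.0, 8000.0]   # synthetic bitrates (kbps)
anchor_psnr = [30.0, 33.0, 36.0, 39.0]
test_rate = [r * 0.8 for r in anchor_rate]       # 20% cheaper at equal quality
saving = bd_rate(anchor_rate, anchor_psnr, test_rate, anchor_psnr)
```

For this synthetic case, where the test codec uses a uniform 20% less bitrate at every quality point, the computation returns approximately -20%.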

Keyword:

Electric distortion; Image coding; Image compression; Security systems; Signal distortion; Video signal processing

Cite:

GB/T 7714: Chen, Jian, Liao, Yan-Jun, Wang, Kuo, et al. An Adaptive Segmentation Based Multi-mode Inter-frame Coding Method for Video Point Cloud [J]. Acta Automatica Sinica, 2023, 49 (8): 1707-1722.
MLA: Chen, Jian, et al. "An Adaptive Segmentation Based Multi-mode Inter-frame Coding Method for Video Point Cloud." Acta Automatica Sinica 49.8 (2023): 1707-1722.
APA: Chen, Jian, Liao, Yan-Jun, Wang, Kuo, Zheng, Ming-Kui, Su, Li-Chao. An Adaptive Segmentation Based Multi-mode Inter-frame Coding Method for Video Point Cloud. Acta Automatica Sinica, 2023, 49 (8), 1707-1722.

DMFF-Net: Double-stream multilevel feature fusion network for image forgery localization SCIE
Journal article | 2023, 127 | ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE
WoS CC Cited Count: 5

Abstract:

With the rapid development of image processing technology, it has become increasingly easy to manipulate images, which poses a threat to the stability and security of people's lives. Recent methods have proposed fusing RGB and noise features to uncover tampering traces. However, these approaches overlook the characteristics of features at different levels, leading to insufficient feature fusion. To address this problem, this paper proposes a double-stream multilevel feature fusion network (DMFF-Net). Unlike the traditional feature fusion approach, DMFF-Net adopts a graded feature fusion strategy: it classifies features into primary, intermediate, and advanced levels and introduces the Primary Feature Fusion Module (PFFM) and the Advanced Feature Fusion Module (AFFM) to achieve superior fusion results. Additionally, a multisupervision strategy is employed to decode the fused features into level-specific masks, including boundary, regular, and refined masks. DMFF-Net is validated on publicly available datasets, including CASIA, Columbia, COVERAGE, and NIST16, as well as a real-life manipulated image dataset, IMD20, achieving AUCs of 84.7%, 99.6%, 86.6%, 87.4%, and 82.8%, respectively. Extensive experiments show that DMFF-Net outperforms state-of-the-art methods in image manipulation localization accuracy and exhibits improved robustness.

Keyword:

Boundary supervision; Graded feature fusion; Image manipulation localization; Multisupervision; Refinement strategy

Cite:

GB/T 7714: Xia, Xiang, Su, Li Chao, Wang, Shi Ping, et al. DMFF-Net: Double-stream multilevel feature fusion network for image forgery localization [J]. ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2023, 127.
MLA: Xia, Xiang, et al. "DMFF-Net: Double-stream multilevel feature fusion network for image forgery localization." ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE 127 (2023).
APA: Xia, Xiang, Su, Li Chao, Wang, Shi Ping, Li, Xiao Yan. DMFF-Net: Double-stream multilevel feature fusion network for image forgery localization. ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2023, 127.

Combining Edge-Guided Attention and Sparse-Connected U-Net for Detection of Image Splicing CPCI-S
Journal article | 2023, 14255, 167-179 | ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING, ICANN 2023, PT II

Abstract:

The use of image-splicing technologies has detrimental effects on the security of multimedia information, so it is necessary to develop effective methods for detecting and locating such tampering. Previous studies have mainly focused on the supervisory role of the mask on the model; the mask edges contain rich complementary signals that help to fully understand the image but are usually ignored. In this paper, we propose a new network named EAU-Net to detect and locate the splicing regions in an image. The proposed network consists of two parts: an edge-guided SegFormer and a Sparse-connected U-Net (SCU). First, the feature extraction module captures local detail cues and global environment information, which SegFormer uses to deduce the initial location of the affected regions. Second, a Sobel-based edge-guided module (EGM) is proposed to guide the network to explore the complementary relationship between splicing regions and their boundaries. Third, to achieve more precise positioning, SCU is used as post-processing to remove false-alarm pixels outside the focus regions. In addition, we propose an adaptive loss-weight adjustment algorithm to supervise network training, through which the weights of the mask and the mask edge are automatically adjusted. Extensive experimental results show that the proposed method outperforms state-of-the-art splicing detection and localization methods in terms of detection accuracy and robustness.
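The Sobel operator underlying the edge-guided module computes a gradient magnitude; applied to a region mask, it yields a mask-edge target of the kind used for edge supervision. A small NumPy sketch (the real EGM operates on learned feature maps, so this shows only the underlying filter):

```python
import numpy as np

def sobel_edges(mask):
    """Gradient magnitude of a 2D array via the two 3x3 Sobel kernels.

    On a binary region mask, the result is nonzero only along the region
    boundary, which makes it a natural mask-edge supervision target.
    """
    kx = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]], dtype=float)   # horizontal-gradient kernel
    ky = kx.T                                  # vertical-gradient kernel
    h, w = mask.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):                     # valid convolution, no padding
        for j in range(w - 2):
            win = mask[i:i + 3, j:j + 3]
            gx[i, j] = (win * kx).sum()
            gy[i, j] = (win * ky).sum()
    return np.hypot(gx, gy)

mask = np.zeros((12, 12))
mask[4:8, 4:8] = 1.0            # square "splicing region"
edges = sobel_edges(mask)       # nonzero only around the square's border
```

The interior and background of the mask give zero response; only boundary pixels survive, which is exactly the complementary signal the abstract describes.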

Keyword:

Image manipulation localization; Image splicing forgery detection; Splicing detection

Cite:

GB/T 7714: Wan, Lin, Su, Lichao, Luo, Huan, et al. Combining Edge-Guided Attention and Sparse-Connected U-Net for Detection of Image Splicing [J]. ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING, ICANN 2023, PT II, 2023, 14255: 167-179.
MLA: Wan, Lin, et al. "Combining Edge-Guided Attention and Sparse-Connected U-Net for Detection of Image Splicing." ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING, ICANN 2023, PT II 14255 (2023): 167-179.
APA: Wan, Lin, Su, Lichao, Luo, Huan, Li, Xiaoyan. Combining Edge-Guided Attention and Sparse-Connected U-Net for Detection of Image Splicing. ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING, ICANN 2023, PT II, 2023, 14255, 167-179.


Address: FZU Library (No. 2 Xuyuan Road, Fuzhou, Fujian, PRC; post code 350116). Contact: 0591-22865326.
Copyright: FZU Library. Technical support: Beijing Aegean Software Co., Ltd. 闽ICP备05005463号-1