Publication Search

Query: Scholar name: Cheng Hang (程航)

EAN: Edge-Aware Network for Image Manipulation Localization SCIE
Journal Article | 2025, 35(2), 1591-1601 | IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY

Abstract:

Image manipulation has sparked widespread concern due to its potential security threats on the Internet. The boundary between the authentic and manipulated region exhibits artifacts in image manipulation localization (IML). These artifacts are more pronounced in heterogeneous image splicing and homogeneous image copy-move manipulation, while they are more subtle in removal and inpainting manipulated images. However, existing methods for image manipulation detection tend to capture boundary artifacts via explicit edge features and have limitations in effectively addressing subtle artifacts. Besides, feature redundancy caused by the powerful feature extraction capability of large models may prevent accurate identification of manipulated artifacts, exhibiting a high false-positive rate. To solve these problems, we propose a novel edge-aware network (EAN) to capture boundary artifacts effectively. This network treats the image manipulation localization problem as a segmentation problem inside and outside the boundary. In EAN, we develop an edge-aware mechanism to refine implicit and explicit edge features by the interaction of adjacent features. This approach directs the encoder to prioritize the desired edge information. Also, we design a multi-feature fusion strategy combined with an improved attention mechanism to enhance key feature representation significantly for mitigating the effects of feature redundancy. We perform thorough experiments on diverse datasets, and the outcomes confirm the efficacy of the suggested approach, surpassing leading manipulation localization techniques in the majority of scenarios.
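
The listing carries no code, but the edge-aware mechanism sketched in the abstract (refining edge features through the interaction of adjacent encoder features) can be illustrated with a minimal PyTorch module. Everything here is hypothetical: the module name, channel sizes, and fusion layout are illustrative assumptions, not the authors' EAN implementation.

```python
# Hypothetical sketch of an edge-aware refinement block, loosely inspired by
# the abstract's description; NOT the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class EdgeAwareBlock(nn.Module):
    """Refines features using two adjacent encoder stages plus a predicted edge map."""
    def __init__(self, ch_low: int, ch_high: int, ch_out: int):
        super().__init__()
        self.reduce_low = nn.Conv2d(ch_low, ch_out, kernel_size=1)
        self.reduce_high = nn.Conv2d(ch_high, ch_out, kernel_size=1)
        self.fuse = nn.Sequential(
            nn.Conv2d(2 * ch_out, ch_out, kernel_size=3, padding=1),
            nn.BatchNorm2d(ch_out),
            nn.ReLU(inplace=True),
        )
        self.edge_head = nn.Conv2d(ch_out, 1, kernel_size=1)  # implicit edge logits

    def forward(self, f_low, f_high):
        # Upsample the deeper (lower-resolution) feature to match the shallow one,
        # so adjacent encoder stages can interact at the same spatial size.
        f_high = F.interpolate(f_high, size=f_low.shape[-2:], mode="bilinear",
                               align_corners=False)
        f = self.fuse(torch.cat([self.reduce_low(f_low),
                                 self.reduce_high(f_high)], dim=1))
        edge = torch.sigmoid(self.edge_head(f))
        # Re-weight features with the predicted edge map so the network
        # prioritizes boundary information (the "edge-aware" interaction).
        return f * (1.0 + edge), edge

block = EdgeAwareBlock(ch_low=64, ch_high=128, ch_out=64)
f_low = torch.randn(1, 64, 64, 64)    # shallow encoder stage
f_high = torch.randn(1, 128, 32, 32)  # adjacent, deeper stage
feat, edge = block(f_low, f_high)
print(feat.shape, edge.shape)         # (1, 64, 64, 64), (1, 1, 64, 64)
```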

Keywords:

attention mechanism; Attention mechanisms; convolutional neural network; Discrete wavelet transforms; Feature extraction; feature fusion; Image edge detection; Image manipulation localization; Location awareness; Neural networks; Noise; Semantics; Splicing; Transformers

Cite:

GB/T 7714 Chen, Yun , Cheng, Hang , Wang, Haichou et al. EAN: Edge-Aware Network for Image Manipulation Localization [J]. | IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY , 2025 , 35 (2) : 1591-1601 .
MLA Chen, Yun et al. "EAN: Edge-Aware Network for Image Manipulation Localization" . | IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY 35 . 2 (2025) : 1591-1601 .
APA Chen, Yun , Cheng, Hang , Wang, Haichou , Liu, Ximeng , Chen, Fei , Li, Fengyong et al. EAN: Edge-Aware Network for Image Manipulation Localization . | IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY , 2025 , 35 (2) , 1591-1601 .

Version:

EAN: Edge-Aware Network for Image Manipulation Localization EI
Journal Article | 2025, 35(2), 1591-1601 | IEEE Transactions on Circuits and Systems for Video Technology
EAN: Edge-Aware Network for Image Manipulation Localization Scopus
Journal Article | 2024, 35(2), 1591-1601 | IEEE Transactions on Circuits and Systems for Video Technology
Image manipulation localization via semantic-guided feature enhancement and deep multi-scale edge supervision SCIE
Journal Article | 2025, 639 | NEUROCOMPUTING

Abstract:

With the widespread application of image editing software, image manipulation localization has become a focal point of promising research. Existing neural networks for image manipulation localization primarily rely on RGB and noise features to accurately identify tampered areas within images. However, in practical image manipulation localization tasks, noise features extracted from RGB images alone are often insufficient to effectively address tampering. Furthermore, existing encoder-decoder models for image manipulation localization often overlook direct interactions between different layers during decoding, which hinders the transfer of deep semantic information to shallow features and thereby impairs the ability to accurately identify manipulated areas. To address these challenges, this paper presents a dynamically adaptive noise extraction module and achieves inter-layer information exchange in the decoder by fusing output features from different layers to extract edge information. We adaptively map RGB images to an appropriate color space using linear transformations and then extract noise features, leveraging the differences between color blocks to effectively uncover traces of tampering. In addition, we integrate features across multiple decoder layers, employ deep multi-scale edge supervision to impose constraints, and introduce a dynamic ringed residual module to further enhance feature representation. Extensive experiments demonstrate that our approach achieves competitive results on diverse large-scale image datasets, exhibiting superior precision and robustness compared with most state-of-the-art methods.
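
The "adaptive noise extraction" idea above (a learnable linear color-space mapping followed by noise-residual extraction) lends itself to a short sketch. The following is a guess at the general shape only: the class name, the 1x1-convolution formulation, and the Laplacian-style high-pass kernel are assumptions (real systems often use SRM filter banks instead).

```python
# Hypothetical sketch: learnable linear color transform + fixed high-pass
# filtering to expose noise residuals. Illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveNoiseExtractor(nn.Module):
    def __init__(self):
        super().__init__()
        # Learnable linear map from RGB to a 3-channel latent color space.
        self.color_map = nn.Conv2d(3, 3, kernel_size=1, bias=True)
        # Simple fixed high-pass (Laplacian-like) kernel, applied per channel.
        hp = torch.tensor([[0., -1., 0.],
                           [-1., 4., -1.],
                           [0., -1., 0.]]).view(1, 1, 3, 3)
        self.register_buffer("hp", hp.repeat(3, 1, 1, 1))

    def forward(self, rgb):
        mapped = self.color_map(rgb)                        # adaptive color space
        noise = F.conv2d(mapped, self.hp, padding=1, groups=3)
        return noise                                        # noise-residual features

x = torch.rand(1, 3, 256, 256)
print(AdaptiveNoiseExtractor()(x).shape)  # torch.Size([1, 3, 256, 256])
```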

Keywords:

Digital images; End-to-end neural networks; Image manipulation localization; Multi-scale feature fusion

Cite:

GB/T 7714 Wang, Haichou , Cheng, Hang , Chen, Yun et al. Image manipulation localization via semantic-guided feature enhancement and deep multi-scale edge supervision [J]. | NEUROCOMPUTING , 2025 , 639 .
MLA Wang, Haichou et al. "Image manipulation localization via semantic-guided feature enhancement and deep multi-scale edge supervision" . | NEUROCOMPUTING 639 (2025) .
APA Wang, Haichou , Cheng, Hang , Chen, Yun , Xu, Yongliang , Wang, Meiqing . Image manipulation localization via semantic-guided feature enhancement and deep multi-scale edge supervision . | NEUROCOMPUTING , 2025 , 639 .


Fine-grained Image Classification by Integrating Object Localization and Heterogeneous Local Interactive Learning EI
Journal Article | 2024, 50(11), 2219-2230 | Acta Automatica Sinica

Abstract:

Because fine-grained images exhibit small inter-class differences and large intra-class variance, existing classification algorithms focus only on extracting and learning representations of the salient local features of a single image, ignoring the heterogeneous local semantic discrimination information between multiple images. They therefore struggle to attend to the subtle details that distinguish categories, and the learned features lack sufficient discriminative power. This paper proposes a progressive network that learns information at different granularity levels of the image in a weakly supervised manner. First, an attention accumulation object localization module (AAOLM) is constructed to perform semantic target integration localization on a single image using attention information from different training epochs and feature extraction stages. Second, a multi-image heterogeneous local interactive graph module (HLIGM) is designed: after the salient local region features of each image are extracted, it constructs a graph network and aggregates information between the local region features of multiple images under the guidance of the category labels, enhancing the discriminative power of the representations. Finally, knowledge distillation feeds the optimization information generated by HLIGM back to the backbone so that it can directly extract strongly discriminative features, avoiding the computational overhead of building the graph in the test phase. Experiments on multiple datasets demonstrate the effectiveness of the proposed method, which improves fine-grained classification accuracy. © 2024 Science Press. All rights reserved.
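
The label-guided interaction between local regions of multiple images can be pictured with a toy aggregation step. This is a loose illustration of the general idea, not HLIGM itself: the function name, the cosine-similarity adjacency, and the intra-class boost factor are all assumptions.

```python
# Hypothetical sketch: regions from several images become graph nodes; edges
# weight feature similarity, with same-class edges emphasized. Illustrative only.
import torch
import torch.nn.functional as F

def local_interactive_aggregate(regions, labels, same_class_boost=2.0):
    """regions: (N, D) local-region features pooled from several images;
    labels: (N,) class label of the image each region came from."""
    sim = F.cosine_similarity(regions.unsqueeze(1), regions.unsqueeze(0), dim=-1)
    same = (labels.unsqueeze(0) == labels.unsqueeze(1)).float()
    adj = sim * (1.0 + (same_class_boost - 1.0) * same)   # boost intra-class edges
    adj = F.softmax(adj, dim=-1)                          # row-normalize
    return adj @ regions                                  # one message-passing step

regions = torch.randn(8, 128)               # 8 salient regions from a mini-batch
labels = torch.tensor([0, 0, 1, 1, 2, 2, 0, 1])
print(local_interactive_aggregate(regions, labels).shape)  # torch.Size([8, 128])
```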

Keywords:

Deep neural networks; Graph neural networks; Image enhancement; Image representation; Knowledge graph; Self-supervised learning; Semantic segmentation; Supervised learning

Cite:

GB/T 7714 Chen, Quan , Chen, Fei , Wang, Yan-Gen et al. Fine-grained Image Classification by Integrating Object Localization and Heterogeneous Local Interactive Learning [J]. | Acta Automatica Sinica , 2024 , 50 (11) : 2219-2230 .
MLA Chen, Quan et al. "Fine-grained Image Classification by Integrating Object Localization and Heterogeneous Local Interactive Learning" . | Acta Automatica Sinica 50 . 11 (2024) : 2219-2230 .
APA Chen, Quan , Chen, Fei , Wang, Yan-Gen , Cheng, Hang , Wang, Mei-Qing . Fine-grained Image Classification by Integrating Object Localization and Heterogeneous Local Interactive Learning . | Acta Automatica Sinica , 2024 , 50 (11) , 2219-2230 .

Version:

Fine-grained Image Classification by Integrating Object Localization and Heterogeneous Local Interactive Learning; [融合目标定位与异构局部交互学习的细粒度图像分类] Scopus
Journal Article | 2024, 50(11), 2219-2230 | Acta Automatica Sinica
A Similarity-Oriented Iterative Aggregation Algorithm for Federated Learning (联邦学习相似度导向的迭代聚合算法)
Journal Article | 2024, 52(6), 650-658 | 福州大学学报(自然科学版) (Journal of Fuzhou University, Natural Science Edition)

Abstract:

To address the problems of model aggregation performance and Byzantine robustness in federated learning, a similarity-oriented iterative aggregation algorithm is proposed. After collecting the gradients uploaded by the clients, the server initializes a "similar gradient" and computes similarity distances between each client gradient and the similar gradient, and between each client model and the previous round's global model. Based on these distances, different weights are assigned to the clients to aggregate a new similar gradient. The computation and aggregation steps are iterated until the global gradient most similar to all clients is found, which is taken as the aggregation result for the current round. Experiments on multiple datasets and multiple neural network architectures show that the aggregation algorithm achieves excellent model performance. In addition, adding a minimum-spanning-tree-based filter before aggregation effectively strengthens the algorithm's robustness against Byzantine attacks.
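
The iterative, similarity-weighted loop described above can be sketched in a few lines of NumPy. This is a rough rendering under stated assumptions: the weighting function (cosine similarity shifted to [0, 1]), iteration count, and tolerance are guesses, and the model-to-previous-global-model term and the minimum-spanning-tree filter are omitted.

```python
# Rough NumPy sketch of similarity-oriented iterative aggregation;
# weighting details are assumptions, not the paper's exact formulas.
import numpy as np

def cosine(a, b, eps=1e-12):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + eps))

def similarity_aggregate(client_grads, iters=20, tol=1e-6):
    """client_grads: list of flattened gradient vectors, one per client."""
    g = np.mean(client_grads, axis=0)          # initialize the "similar gradient"
    for _ in range(iters):
        # Weight each client by its similarity to the current aggregate
        # (shifted to [0, 1] so dissimilar/Byzantine updates get low weight).
        w = np.array([(cosine(c, g) + 1.0) / 2.0 for c in client_grads])
        w = w / w.sum()
        g_new = sum(wi * c for wi, c in zip(w, client_grads))
        if np.linalg.norm(g_new - g) < tol:    # converged to the most similar gradient
            break
        g = g_new
    return g

rng = np.random.default_rng(0)
honest = [rng.normal(1.0, 0.1, 100) for _ in range(9)]
byzantine = [rng.normal(-5.0, 0.1, 100)]       # one poisoned update
g = similarity_aggregate(honest + byzantine)
print(round(float(g.mean()), 2))               # stays close to the honest mean
```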

Keywords:

machine learning; federated learning; aggregation update algorithm; robust aggregation

Cite:

GB/T 7714 管林 , 王平 , 程航 et al. 联邦学习相似度导向的迭代聚合算法 [J]. | 福州大学学报(自然科学版) , 2024 , 52 (6) : 650-658 .
MLA 管林 et al. "联邦学习相似度导向的迭代聚合算法" . | 福州大学学报(自然科学版) 52 . 6 (2024) : 650-658 .
APA 管林 , 王平 , 程航 , 王美清 , 刘培豪 , 吴远翔 . 联邦学习相似度导向的迭代聚合算法 . | 福州大学学报(自然科学版) , 2024 , 52 (6) , 650-658 .


Vision-language pre-training via modal interaction SCIE
Journal Article | 2024, 156 | PATTERN RECOGNITION

Abstract:

Existing vision-language pre-training models typically extract region features and conduct fine-grained local alignment based on masked image/text completion or object detection methods. However, these models often design independent subtasks for different modalities, which may not adequately leverage interactions between modalities, requiring large datasets to achieve optimal performance. To address these limitations, this paper introduces a novel pre-training approach that facilitates fine-grained vision-language interaction. We propose two new subtasks - image filling and text filling - that utilize data from one modality to complete missing parts in another, enhancing the model's ability to integrate multi-modal information. A selector mechanism is also developed to minimize semantic overlap between modalities, thereby improving the efficiency and effectiveness of the pre-trained model. Our comprehensive experimental results demonstrate that our approach not only fosters better semantic associations among different modalities but also achieves state-of-the-art performance on downstream vision-language tasks with significantly smaller datasets.
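
One of the two subtasks above, image filling, can be pictured as: mask some image patch embeddings and reconstruct them from the text sequence via cross-attention, so one modality completes the other. The sketch below is an illustration only; the head name, dimensions, and masking scheme are assumptions, not the authors' architecture.

```python
# Minimal, hypothetical sketch of an "image filling" subtask.
import torch
import torch.nn as nn

torch.manual_seed(0)

class ImageFillingHead(nn.Module):
    def __init__(self, dim: int = 256, heads: int = 4):
        super().__init__()
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.proj = nn.Linear(dim, dim)

    def forward(self, img_tokens, txt_tokens, mask):
        # mask: (B, N) boolean, True where an image token was masked out.
        filled = torch.where(mask.unsqueeze(-1),
                             self.mask_token.expand_as(img_tokens), img_tokens)
        # Queries are the (masked) image tokens; keys/values come from text,
        # forcing cross-modal interaction to recover the missing patches.
        out, _ = self.cross_attn(filled, txt_tokens, txt_tokens)
        return self.proj(out)

B, N, M, D = 2, 16, 12, 256
img, txt = torch.randn(B, N, D), torch.randn(B, M, D)
mask = torch.rand(B, N) < 0.3
pred = ImageFillingHead(D)(img, txt, mask)
loss = ((pred - img) ** 2)[mask].mean()  # reconstruct only the masked positions
print(loss.item())
```

The symmetric "text filling" subtask would swap the roles of the two modalities.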

Keywords:

Cross-modal; Image captioning; Partial auxiliary; Pre-training

Cite:

GB/T 7714 Cheng, Hang , Ye, Hehui , Zhou, Xiaofei et al. Vision-language pre-training via modal interaction [J]. | PATTERN RECOGNITION , 2024 , 156 .
MLA Cheng, Hang et al. "Vision-language pre-training via modal interaction" . | PATTERN RECOGNITION 156 (2024) .
APA Cheng, Hang , Ye, Hehui , Zhou, Xiaofei , Liu, Ximeng , Chen, Fei , Wang, Meiqing . Vision-language pre-training via modal interaction . | PATTERN RECOGNITION , 2024 , 156 .

Version:

Vision-language pre-training via modal interaction EI
Journal Article | 2024, 156 | Pattern Recognition
Vision-language pre-training via modal interaction Scopus
Journal Article | 2024, 156 | Pattern Recognition
Lightweight Privacy-Preserving Feature Extraction for EEG Signals Under Edge Computing SCIE
Journal Article | 2024, 11(2), 2520-2533 | IEEE INTERNET OF THINGS JOURNAL
WoS CC Cited Count: 2

Abstract:

The health-related Internet of Things (IoT) plays an irreplaceable role in the collection, analysis, and transmission of medical data. As a health-related IoT device, the electroencephalogram (EEG) has long been a powerful tool for physiological and clinical brain research, and it contains a wealth of personal information. With its rich computational and storage resources, cloud computing is a promising way to extract sophisticated features from massive EEG signals in the age of big data, but it must contend with both response latency and privacy leakage. To reduce latency between users and servers while ensuring data privacy, we propose a privacy-preserving feature extraction scheme, called LightPyFE, for EEG signals in the edge computing environment. In this scheme, we design an outsourced computing toolkit that allows users to perform a series of secure integer and floating-point computing operations. During the implementation, LightPyFE ensures that users only perform encryption and decryption operations, while all computing tasks are outsourced to edge servers for specific processing. Theoretical analysis and experimental results demonstrate that our scheme achieves privacy-preserving feature extraction for EEG signals and is practical yet effective.
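
The keywords name additive secret sharing, the primitive that typically underlies such outsourced-computing toolkits. The sketch below shows generic two-party additive sharing over a ring; it is not LightPyFE's actual protocol, and the ring size and helper names are assumptions.

```python
# Generic two-party additive secret sharing over Z_{2^32}; illustrative only.
import numpy as np

RING = 1 << 32

def share(x, rng):
    """Split an integer array x into two additive shares: x = s0 + s1 mod 2^32."""
    s0 = rng.integers(0, RING, size=x.shape, dtype=np.uint64)
    s1 = (x.astype(np.uint64) - s0) % RING
    return s0, s1

def reconstruct(s0, s1):
    return (s0 + s1) % RING

rng = np.random.default_rng(42)
a = np.array([10, 200, 3000])
b = np.array([5, 50, 500])
a0, a1 = share(a, rng)         # user uploads one share to each edge server
b0, b1 = share(b, rng)
# Secure addition is purely local: each server adds its own shares.
c0, c1 = (a0 + b0) % RING, (a1 + b1) % RING
print(reconstruct(c0, c1))     # [  15  250 3500] -- neither server ever saw a or b
# Secure multiplication additionally needs precomputed Beaver triples, which is
# part of what makes floating-point operations in such toolkits non-trivial.
```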

Keywords:

Additive secret sharing; edge computing; electroencephalogram (EEG) signal; Internet of Things (IoT); privacy-preserving

Cite:

GB/T 7714 Yan, Nazhao , Cheng, Hang , Liu, Ximeng et al. Lightweight Privacy-Preserving Feature Extraction for EEG Signals Under Edge Computing [J]. | IEEE INTERNET OF THINGS JOURNAL , 2024 , 11 (2) : 2520-2533 .
MLA Yan, Nazhao et al. "Lightweight Privacy-Preserving Feature Extraction for EEG Signals Under Edge Computing" . | IEEE INTERNET OF THINGS JOURNAL 11 . 2 (2024) : 2520-2533 .
APA Yan, Nazhao , Cheng, Hang , Liu, Ximeng , Chen, Fei , Wang, Meiqing . Lightweight Privacy-Preserving Feature Extraction for EEG Signals Under Edge Computing . | IEEE INTERNET OF THINGS JOURNAL , 2024 , 11 (2) , 2520-2533 .

Version:

Lightweight Privacy-Preserving Feature Extraction for EEG Signals under Edge Computing EI
Journal Article | 2024, 11(2), 2520-2533 | IEEE Internet of Things Journal
Lightweight Privacy-Preserving Feature Extraction for EEG Signals under Edge Computing Scopus
Journal Article | 2023, 11(2), 1-1 | IEEE Internet of Things Journal
DeepDIST: A Black-Box Anti-Collusion Framework for Secure Distribution of Deep Models SCIE
Journal Article | 2024, 34(1), 97-109 | IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY

Abstract:

Given the enormous computing and storage overhead of well-trained Deep Neural Network (DNN) models, protecting the intellectual property of model owners is a pressing need. As the commercialization of deep models becomes increasingly popular, the pre-trained models delivered to users may be illegally copied, redistributed, or abused. In this paper, we propose DeepDIST, the first end-to-end secure DNN distribution framework for the black-box scenario. Specifically, our framework adopts a dual-level fingerprint (FP) mechanism to provide reliable ownership verification, and proposes two equivalent transformations that can resist collusion attacks, plus a newly designed similarity loss term to improve the security of the transformations. Unlike existing passive defense schemes that detect colluding participants, we introduce an active defense strategy, namely damaging the performance of the model after malicious collusion. Extensive experimental results show that DeepDIST maintains the accuracy of the host DNN after fingerprint embedding, supports reliable traitor tracing, and is robust against several popular model modifications. Furthermore, the anti-collusion effect is evaluated on two typical classification tasks (10-class and 100-class), where DeepDIST drops the prediction accuracy of the collusion model to 10% and 1% (random guess), respectively.
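
The paper's two specific transformations are not reproduced in this listing. As a generic example of what a "functionally equivalent transformation" of a DNN looks like, the sketch below uses the classic ReLU positive-scaling equivalence: multiply one layer's weights by alpha > 0 and divide the next layer's by alpha, leaving the network's function unchanged while every distributed copy gets distinct weights.

```python
# Generic equivalent-transformation example (ReLU positive-scaling),
# NOT DeepDIST's actual scheme.
import torch
import torch.nn as nn

torch.manual_seed(0)
net = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))
x = torch.randn(5, 8)
y_ref = net(x)

alpha = 3.7  # a per-copy scale could serve as part of a user-specific fingerprint
with torch.no_grad():
    net[0].weight *= alpha
    net[0].bias *= alpha    # bias scales too, since ReLU(alpha*z) = alpha*ReLU(z)
    net[2].weight /= alpha  # the next linear layer undoes the scaling

print(torch.allclose(y_ref, net(x), atol=1e-5))  # True: same function, new weights
```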

Keywords:

anti-collusion; Deep neural networks; digital fingerprinting; digital watermarking

Cite:

GB/T 7714 Cheng, Hang , Li, Xibin , Wang, Huaxiong et al. DeepDIST: A Black-Box Anti-Collusion Framework for Secure Distribution of Deep Models [J]. | IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY , 2024 , 34 (1) : 97-109 .
MLA Cheng, Hang et al. "DeepDIST: A Black-Box Anti-Collusion Framework for Secure Distribution of Deep Models" . | IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY 34 . 1 (2024) : 97-109 .
APA Cheng, Hang , Li, Xibin , Wang, Huaxiong , Zhang, Xinpeng , Liu, Ximeng , Wang, Meiqing et al. DeepDIST: A Black-Box Anti-Collusion Framework for Secure Distribution of Deep Models . | IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY , 2024 , 34 (1) , 97-109 .

Version:

DeepDIST: A Black-box Anti-collusion Framework for Secure Distribution of Deep Models Scopus
Journal Article | 2023, 34(1), 1-1 | IEEE Transactions on Circuits and Systems for Video Technology
DeepDIST: A Black-Box Anti-Collusion Framework for Secure Distribution of Deep Models EI
Journal Article | 2024, 34(1), 97-109 | IEEE Transactions on Circuits and Systems for Video Technology
Lossless image steganography: Regard steganography as super-resolution SCIE SSCI
Journal Article | 2024, 61(4) | INFORMATION PROCESSING & MANAGEMENT

Abstract:

Image steganography attempts to imperceptibly hide a secret image within a cover image. Most existing deep learning-based steganography approaches excel in payload capacity, visual quality, and steganographic security, yet they struggle to losslessly reconstruct secret images from stego images at relatively large payload capacities. Recently, although some studies have introduced invertible neural networks (INNs) to achieve large-capacity image steganography, these methods still cannot reconstruct the secret image losslessly due to the lost information on the output side of the concealing network. In this paper we present an INN-based framework for lossless image steganography. Specifically, we regard image steganography as an image super-resolution task that converts low-resolution cover images to high-resolution stego images while hiding secret images. The feature dimension of the generated stego image matches the total dimension of the input secret and cover images, thereby eliminating the lost information. Besides, a bijective secret projection module is designed to transform various secret images into a latent variable that follows a simple distribution, improving the imperceptibility of the secret image. Comprehensive experiments indicate that the proposed framework achieves secure hiding and lossless extraction of the secret image.
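
The reason INN-based hiding can be lossless is that coupling layers are exactly invertible, so whatever they "hide" can be recovered bit-for-bit (up to float precision). The sketch below shows only this primitive under assumed names and dimensions; it is not the paper's full super-resolution framework.

```python
# Illustrative additive coupling layer: exact forward hide / inverse reveal.
import torch
import torch.nn as nn

class AdditiveCoupling(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.t = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, cover, secret):
        # The secret branch is shifted by a function of the cover branch.
        return cover, secret + self.t(cover)

    def inverse(self, cover, stego):
        return cover, stego - self.t(cover)   # exact recovery of the secret

layer = AdditiveCoupling(64)
cover, secret = torch.randn(1, 64), torch.randn(1, 64)
c, stego = layer(cover, secret)
_, recovered = layer.inverse(c, stego)
print(torch.allclose(secret, recovered))  # True: no information is lost
```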

Keywords:

Covert communication; Information security; Invertible neural networks; Lossless steganography

Cite:

GB/T 7714 Wang, Tingqiang , Cheng, Hang , Liu, Ximeng et al. Lossless image steganography: Regard steganography as super-resolution [J]. | INFORMATION PROCESSING & MANAGEMENT , 2024 , 61 (4) .
MLA Wang, Tingqiang et al. "Lossless image steganography: Regard steganography as super-resolution" . | INFORMATION PROCESSING & MANAGEMENT 61 . 4 (2024) .
APA Wang, Tingqiang , Cheng, Hang , Liu, Ximeng , Xu, Yongliang , Chen, Fei , Wang, Meiqing et al. Lossless image steganography: Regard steganography as super-resolution . | INFORMATION PROCESSING & MANAGEMENT , 2024 , 61 (4) .

Version:

Lossless image steganography: Regard steganography as super-resolution Scopus
Journal Article | 2024, 61(4) | Information Processing and Management
Lossless image steganography: Regard steganography as super-resolution EI
Journal Article | 2024, 61(4) | Information Processing and Management
Edge-based secure image denoising scheme supporting flexible user authorization
Journal Article | 2024, 18 | JOURNAL OF ALGORITHMS & COMPUTATIONAL TECHNOLOGY

Abstract:

Image denoising is a fundamental tool in image processing and computer vision. With the rapid development of multimedia and cloud computing, it has become popular for resource-constrained users to outsource the storage and denoising of massive images, but doing so may cause privacy concerns and response delays. In this scenario, we propose an efFicient privAcy-preseRving Image deNoising schEme (FARINE) for outsourcing digital images. By introducing a key conversion mechanism, FARINE removes noise from a given noisy image using a non-local means approach without leaking any information about the plaintext content. Owing to its low computational latency and communication cost, edge computing is adopted to improve the user experience. To handle a dynamic user set efficiently, we design a fine-grained access control mechanism that supports user authorization and revocation in multi-user scenarios. Extensive experiments over several benchmark datasets show that FARINE obtains comparable performance to plaintext image denoising.
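
For reference, the plaintext non-local means algorithm at the core of such a scheme is sketched below (FARINE evaluates this kind of computation under encryption; the key conversion and access control machinery are omitted). Patch size, window size, and the filtering parameter h are illustrative choices.

```python
# Plaintext non-local means: each pixel is replaced by a weighted average of
# window pixels, weighted by patch similarity. Illustrative parameters only.
import numpy as np

def nl_means(img, patch=3, window=7, h=0.1):
    pad, p = window // 2, patch // 2
    padded = np.pad(img, pad + p, mode="reflect")
    out = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            ci, cj = i + pad + p, j + pad + p
            ref = padded[ci - p:ci + p + 1, cj - p:cj + p + 1]
            weights, acc = 0.0, 0.0
            for di in range(-pad, pad + 1):
                for dj in range(-pad, pad + 1):
                    ni, nj = ci + di, cj + dj
                    cand = padded[ni - p:ni + p + 1, nj - p:nj + p + 1]
                    w = np.exp(-np.sum((ref - cand) ** 2) / (h * h))
                    weights += w
                    acc += w * padded[ni, nj]
            out[i, j] = acc / weights
    return out

rng = np.random.default_rng(1)
clean = np.tile(np.linspace(0, 1, 16), (16, 1))
noisy = clean + rng.normal(0, 0.05, clean.shape)
den = nl_means(noisy)
print(float(np.mean((noisy - clean) ** 2)), float(np.mean((den - clean) ** 2)))
```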

Keywords:

access control; edge computing; homomorphic encryption; image denoising; Privacy-preserving

Cite:

GB/T 7714 Huang, Yibing , Xu, Yongliang , Cheng, Hang et al. Edge-based secure image denoising scheme supporting flexible user authorization [J]. | JOURNAL OF ALGORITHMS & COMPUTATIONAL TECHNOLOGY , 2024 , 18 .
MLA Huang, Yibing et al. "Edge-based secure image denoising scheme supporting flexible user authorization" . | JOURNAL OF ALGORITHMS & COMPUTATIONAL TECHNOLOGY 18 (2024) .
APA Huang, Yibing , Xu, Yongliang , Cheng, Hang , Chen, Fei , Wang, Meiqing . Edge-based secure image denoising scheme supporting flexible user authorization . | JOURNAL OF ALGORITHMS & COMPUTATIONAL TECHNOLOGY , 2024 , 18 .

Version:

Edge-based secure image denoising scheme supporting flexible user authorization EI
Journal Article | 2024, 18 | Journal of Algorithms and Computational Technology
Edge-based secure image denoising scheme supporting flexible user authorization Scopus
Journal Article | 2024, 18 | Journal of Algorithms and Computational Technology