Author:

Chen, Jialu [1] | Zhou, Jun [2] | Cao, Zhenfu [3] | Vasilakos, Athanasios [4] | Dong, Xiaolei [5] | Choo, Kim-Kwang Raymond [6]

Indexed by:

EI; Scopus; SCIE

Abstract:

Machine learning, particularly the neural network (NN), is extensively exploited in a dizzying array of applications. To reduce the computational burden on resource-constrained clients, large volumes of historical private data must be outsourced to a semi-trusted or malicious cloud for model training and evaluation. To achieve privacy preservation, most existing work exploits either public-key fully homomorphic encryption (FHE), which incurs considerable computational cost and ciphertext expansion, or secure multiparty computation (SMC), which requires multiple rounds of interaction between the user and the cloud. To address these issues, this article proposes LPTE, a lightweight privacy-preserving model training and evaluation scheme for discretized NNs (DiNNs). First, we put forward an efficient single-key fully homomorphic data encapsulation mechanism (SFH-DEM) that does not rely on public-key FHE. Based on SFH-DEM, a series of atomic calculations over the encrypted domain, including multivariate polynomial, nonlinear activation function, gradient function, and maximum operations, are devised as building blocks. On top of these building blocks, LPTE is constructed for DiNNs and can also be extended to convolutional NNs. Finally, we give formal security proofs for dataset privacy, model training privacy, and model evaluation privacy in the semi-honest setting, and we implement experiments on the real-world MNIST handwritten-digit dataset to demonstrate the high efficiency and accuracy of the proposed LPTE.
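
The abstract describes training and evaluating discretized NNs (DiNNs) on MNIST. As a rough illustration only, the sketch below shows what a plaintext DiNN forward pass with integer-discretized weights and a hard-sign activation might look like; all function names, the ternary weight set, and the layer sizes are assumptions for illustration, not the paper's LPTE/SFH-DEM construction, which performs such operations over encrypted data.

    import numpy as np

    def discretize(w, levels=(-1, 0, 1)):
        # Map each real-valued weight to the nearest value in a small discrete set.
        # The ternary set {-1, 0, 1} is an illustrative assumption, not the paper's choice.
        levels = np.asarray(levels)
        idx = np.abs(w[..., None] - levels).argmin(axis=-1)
        return levels[idx]

    def sign_activation(x):
        # Hard-sign nonlinearity, a common choice for discretized networks (assumption).
        return np.where(x >= 0, 1, -1)

    def dinn_forward(x, weights):
        # Plaintext forward pass of a toy DiNN; the paper's scheme would carry out
        # the corresponding steps over encrypted inputs and weights.
        h = x
        for w in weights[:-1]:
            h = sign_activation(h @ discretize(w))
        return h @ discretize(weights[-1])  # raw scores for the 10 MNIST digit classes

    # Toy usage: one flattened 28x28 binary "image" with hypothetical layer sizes.
    rng = np.random.default_rng(0)
    x = rng.integers(0, 2, size=(1, 784))
    weights = [rng.normal(size=(784, 128)), rng.normal(size=(128, 10))]
    scores = dinn_forward(x, weights)
    print("predicted digit:", int(scores.argmax()))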

Keyword:

Computational modeling; Data privacy; Discretized neural networks (NNs); efficiency; Neural networks; privacy-preserving; Public key; secure outsourced computation; Training

Community:

  • [1] [Chen, Jialu] East China Normal Univ, Shanghai Key Lab Trustworthy Comp, Shanghai 200062, Peoples R China
  • [2] [Zhou, Jun] East China Normal Univ, Shanghai Key Lab Trustworthy Comp, Shanghai 200062, Peoples R China
  • [3] [Cao, Zhenfu] East China Normal Univ, Shanghai Key Lab Trustworthy Comp, Shanghai 200062, Peoples R China
  • [4] [Dong, Xiaolei] East China Normal Univ, Shanghai Key Lab Trustworthy Comp, Shanghai 200062, Peoples R China
  • [5] [Vasilakos, Athanasios] Lulea Univ Technol, Dept Comp Sci Elect & Space Engn, S-93187 Skelleftea, Sweden
  • [6] [Vasilakos, Athanasios] Fuzhou Univ, Dept Comp Sci & Technol, Fuzhou 350108, Peoples R China
  • [7] [Choo, Kim-Kwang Raymond] Univ Texas San Antonio, Dept Informat Syst & Cyber Secur, San Antonio, TX 78249 USA

Reprint's Address:

  • [Zhou, Jun] East China Normal Univ, Shanghai Key Lab Trustworthy Comp, Shanghai 200062, Peoples R China

Source:

IEEE INTERNET OF THINGS JOURNAL

ISSN: 2327-4662

Year: 2020

Issue: 4

Volume: 7

Page: 2663-2678

Impact Factor: 9.471 (JCR@2020)

Impact Factor: 8.200 (JCR@2023)

ESI Discipline: COMPUTER SCIENCE

ESI HC Threshold: 149

JCR Journal Grade: 1

CAS Journal Grade: 1

Cited Count:

WoS CC Cited Count: 14

SCOPUS Cited Count: 15

ESI Highly Cited Papers on the List: 0
