Abstract:
Federated learning, a distributed machine learning paradigm, has attracted considerable attention for its inherent privacy protection and its support for collaboration across heterogeneous participants. However, recent studies have revealed a privacy risk known as "gradient leakage": shared gradients can be used to determine whether a data record with a specific property is included in another participant's batch, thereby exposing that participant's training data. Existing privacy-enhanced federated learning methods suffer from drawbacks such as reduced accuracy, computational overhead, or newly introduced security risks. To address this issue, a differential privacy-enhanced generative adversarial network model was proposed, which introduces an identifier network into the vanilla GAN so that the generated data approximate the input data while satisfying differential privacy constraints. This model was then applied to the federated learning framework to improve privacy protection without compromising model accuracy. The proposed method was verified through simulations under the client/server (C/S) federated learning architecture and shown to balance data privacy and utility more effectively than the DP-SGD method. In addition, the usability of the proposed model under a peer-to-peer (P2P) architecture was analyzed theoretically, and directions for future research were discussed.
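For reference, the abstract names DP-SGD as the comparison baseline. The sketch below illustrates the standard DP-SGD update (per-example gradient clipping plus Gaussian noise calibrated to the clipping bound) on a toy logistic-regression objective; it is not the paper's GAN-based method, and the function name, hyperparameters, and data are illustrative assumptions.

```python
# Minimal illustrative DP-SGD step: per-example gradients are clipped to an
# L2 norm bound C, Gaussian noise with standard deviation sigma * C is added
# to the summed gradient, and the noisy average drives the update.
import numpy as np

def dp_sgd_step(w, X, y, lr=0.1, C=1.0, sigma=1.0, rng=None):
    """One DP-SGD update for logistic regression on a batch (X, y)."""
    rng = rng or np.random.default_rng()
    # Per-example gradient of the logistic loss: (sigmoid(x.w) - y) * x
    preds = 1.0 / (1.0 + np.exp(-X @ w))
    per_example_grads = (preds - y)[:, None] * X          # shape (B, d)

    # Clip each example's gradient to L2 norm at most C.
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads / np.maximum(1.0, norms / C)

    # Sum, add noise scaled to the clipping bound, then average and step.
    noisy_sum = clipped.sum(axis=0) + rng.normal(0.0, sigma * C, size=w.shape)
    return w - lr * noisy_sum / len(X)

# Toy usage: a few noisy updates on random binary-labeled data.
rng = np.random.default_rng(42)
X = rng.normal(size=(32, 5))
y = (X[:, 0] > 0).astype(float)
w = np.zeros(5)
for _ in range(100):
    w = dp_sgd_step(w, X, y, rng=rng)
```

The clipping bound C limits each example's influence on the update, which is what lets the added noise translate into a formal differential-privacy guarantee; the abstract's point is that the GAN-based alternative achieves a better privacy/utility trade-off than this baseline.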
Source:
Chinese Journal of Network and Information Security
ISSN: 2096-109X
CN: 10-1366/TP
Year: 2023
Issue: 3
Volume: 9
Page: 113-122
Cited Count:
WoS CC Cited Count: 0
SCOPUS Cited Count: 1
ESI Highly Cited Papers on the List: 0