Abstract:
Federated learning emerged to address the privacy leakage problems of traditional centralized machine learning. Although federated learning updates the global model by exchanging gradients rather than raw data, an attacker may still infer private information from the shared model updates through backward inference, so privacy leakage remains a risk. To enhance the security of federated learning, we address this challenge with a multi-key Cheon-Kim-Kim-Song (CKKS) scheme for privacy protection in federated learning. Our approach enables each participant to train on its local dataset while maintaining data security and model accuracy. We also introduce FedCMK, a more efficient and secure federated learning framework. FedCMK uses an improved client selection strategy to speed up training, redesigns the key aggregation process around this strategy, and proposes a scheme, vMK-CKKS, that guarantees the security of the framework within a given threshold. In particular, the vMK-CKKS scheme adds a secret verification mechanism that prevents participants from mounting malicious attacks by submitting false information. Experiments show that the proposed vMK-CKKS scheme significantly improves security and efficiency over previous encryption schemes. FedCMK reduces training time by 21% on average while preserving model accuracy, and it provides robustness by allowing participants to join or leave during training. © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024.
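The sketch below illustrates the general idea described in the abstract: clients encrypt their model updates under CKKS and the server aggregates ciphertexts without ever seeing plaintext gradients. It is a minimal illustration only, assuming the TenSEAL library and single-key CKKS as a stand-in; the paper's multi-key vMK-CKKS scheme, its client selection strategy, key aggregation, and secret verification mechanism are not reproduced here, and the helper name `encrypt_update` is hypothetical.

```python
# Illustrative sketch: single-key CKKS (TenSEAL) stands in for the paper's
# multi-key vMK-CKKS; FedCMK's client selection, key aggregation, and
# secret-verification steps are omitted.
import numpy as np
import tenseal as ts

# CKKS context assumed to be shared by the participants in this toy setup.
context = ts.context(
    ts.SCHEME_TYPE.CKKS,
    poly_modulus_degree=8192,
    coeff_mod_bit_sizes=[60, 40, 40, 60],
)
context.global_scale = 2 ** 40
context.generate_galois_keys()

def encrypt_update(update: np.ndarray) -> ts.CKKSVector:
    """Encrypt one client's flattened model update (hypothetical helper)."""
    return ts.ckks_vector(context, update.tolist())

# Each client encrypts its local update before sending it to the server.
client_updates = [np.random.randn(4) for _ in range(3)]
encrypted = [encrypt_update(u) for u in client_updates]

# The server aggregates ciphertexts homomorphically; plaintexts stay local.
aggregate = encrypted[0]
for c in encrypted[1:]:
    aggregate = aggregate + c
aggregate = aggregate * (1.0 / len(encrypted))  # federated averaging

# In the real multi-key scheme, decryption would require the key holders
# to cooperate; here the single secret key in the context decrypts directly.
recovered = np.array(aggregate.decrypt())
print("Plain average:", np.mean(client_updates, axis=0))
print("CKKS average: ", recovered)
```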
ISSN: 0302-9743
Year: 2024
Volume: 14509 LNCS
Page: 253-271
Language: English
Impact Factor: 0.402 (JCR@2005)