Abstract:
Centralized learning now faces data mapping and security constraints that make it difficult to carry out. Federated learning, with its distributed learning architecture, has changed this situation: by restricting the training process to participants' local devices, it meets the model-training needs of multiple data sources while better protecting data privacy. However, in real-world scenarios, federated learning must also achieve fairness in addition to privacy protection. In practice, participants with specific motives may join the training process only briefly, obtaining the current global model while contributing little to the federation as a whole, which is unfair to participants who joined the training earlier. We propose the FedACC framework, which uses a server-initiated global model accuracy control method, to address this issue. Besides measuring the accumulated contributions of newly joined participants and providing each participant with a model whose accuracy matches its contribution, FedACC also guarantees the validity of the gradients that participants compute on the accuracy-decayed model. Under the FedACC framework, users do not have access to the full version of the current global model early in their participation; they must first accumulate a certain amount of contribution before seeing the full-accuracy model. We further introduce a differential privacy mechanism to protect clients' privacy. Experiments demonstrate that FedACC obtains about a 10% to 20% accuracy gain over state-of-the-art methods while balancing the fairness, performance, and security of federated learning.
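The abstract does not specify how the server realizes its accuracy control, so the following is only a minimal illustrative sketch of the general idea, not the paper's method: a contribution score derived from a client's accumulated update magnitudes, server-side weight perturbation proportional to the contribution shortfall (one plausible way to "decay" model accuracy), and Gaussian-mechanism gradient sanitization for the differential privacy step. All names and parameters here (contribution_score, decayed_model, dp_sanitize, threshold, max_noise_std, sigma) are assumptions introduced for illustration.

```python
import numpy as np

def contribution_score(update_norms, threshold=10.0):
    """Hypothetical contribution measure: map a client's accumulated
    update magnitudes to a score in [0, 1], saturating at a threshold."""
    return min(sum(update_norms) / threshold, 1.0)

def decayed_model(global_weights, score, max_noise_std=0.1, rng=None):
    """Assumed accuracy-decay mechanism: return a degraded copy of the
    global model, full-accuracy only when score == 1.0 and increasingly
    noisy as the contribution score drops."""
    rng = rng or np.random.default_rng()
    noise_std = max_noise_std * (1.0 - score)
    return [w + rng.normal(0.0, noise_std, size=w.shape) for w in global_weights]

def dp_sanitize(gradient, clip_norm=1.0, sigma=0.5, rng=None):
    """Standard Gaussian-mechanism differential privacy on a client
    gradient: clip to a bounded L2 norm, then add calibrated noise."""
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(gradient)
    clipped = gradient * min(1.0, clip_norm / max(norm, 1e-12))
    return clipped + rng.normal(0.0, sigma * clip_norm, size=gradient.shape)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    weights = [rng.normal(size=(4, 4)), rng.normal(size=(4,))]
    score = contribution_score([2.0, 3.0])        # 0.5: half the assumed threshold
    degraded = decayed_model(weights, score, rng=rng)
    noisy_grad = dp_sanitize(rng.normal(size=16), rng=rng)
```

Under this reading, a newly joined client with score 0.5 would receive a noticeably perturbed model, and only after enough accepted updates push its score to 1.0 would it see the full-accuracy global model.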
Source:
IEEE Internet of Things Journal
ISSN: 2327-4662
Year: 2023
Issue: 12
Volume: 10
Page: 1-1
Impact Factor: 8.200 (JCR@2023)
ESI HC Threshold: 32
JCR Journal Grade: 1
CAS Journal Grade: 1
Cited Count:
SCOPUS Cited Count: 8
ESI Highly Cited Papers on the List: 0
30-Day Page Views: 6