Abstract:
Federated Learning (FL) is a distributed paradigm that enables clients to train a global model collaboratively while protecting client privacy. During FL training, statistical heterogeneity across clients can compromise the overall performance of the global model and its generalization ability on each client, making it difficult for training to converge. This paper proposes an efficient clustered FL (cFL) method called FedCC, which clusters clients based on their inference results on a public dataset. Because inference results may leak a client's data distribution, we use Locality Sensitive Hashing (LSH) to transform them into Inference Hash Codes (IHC), which are irreversible but still support similarity calculations. The server compares the similarity of IHCs between clients and performs dynamic clustering with the DBSCAN algorithm. FedCC also provides an elegant method for clients to quickly select the appropriate cluster model without downloading all cluster models. We evaluated FedCC on four commonly used datasets and compared it against seven baselines. Experimental results show that FedCC converges faster than the other baselines and achieves accuracy 1.66% higher than the state-of-the-art baseline. Finally, we further validated the robustness of FedCC against Byzantine attacks, in which malicious clients upload negative gradients to reduce model accuracy and prevent convergence.
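The sketch below is a rough illustration of the pipeline the abstract outlines, not the authors' implementation: clients hash their public-set inference outputs with random-hyperplane (SimHash-style) LSH, and the server clusters the resulting codes with DBSCAN over pairwise Hamming distances. The choice of SimHash, the code length, and the eps/min_samples values are all illustrative assumptions rather than details taken from the paper.

```python
# Minimal sketch, assuming SimHash-style LSH and scikit-learn's DBSCAN.
import numpy as np
from sklearn.cluster import DBSCAN

def inference_hash_code(outputs, planes):
    # Project the flattened inference outputs onto random hyperplanes and
    # keep only the signs: the binary code supports similarity comparison
    # but cannot be inverted back to the underlying predictions.
    return (planes @ outputs.flatten() > 0).astype(np.uint8)

rng = np.random.default_rng(0)
num_clients, num_public, num_classes, code_bits = 8, 100, 10, 256
planes = rng.standard_normal((code_bits, num_public * num_classes))

# Stand-in for each client's softmax outputs on the shared public dataset.
client_outputs = [rng.dirichlet(np.ones(num_classes), size=num_public)
                  for _ in range(num_clients)]
codes = np.stack([inference_hash_code(o, planes) for o in client_outputs])

# Normalized Hamming distance between every pair of IHCs, then DBSCAN on
# the precomputed distance matrix to form client clusters dynamically.
dist = (codes[:, None, :] != codes[None, :, :]).mean(axis=2)
labels = DBSCAN(eps=0.45, min_samples=2, metric="precomputed").fit_predict(dist)
print(labels)  # one cluster id per client; -1 marks outliers
```

Hamming distance on SimHash codes approximates the angular distance between the original output vectors, which is why clustering the codes can stand in for clustering the (privacy-sensitive) inference results themselves.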
Source: INFORMATION SECURITY AND CRYPTOLOGY, INSCRYPT 2023, PT II
ISSN: 0302-9743
Year: 2024
Volume: 14527
Page: 73-90
Impact Factor: 0.402 (JCR@2005)