Abstract:
Federated Learning (FL) eliminates the data silos that hinder digital transformation by collaboratively training a shared global model. However, training a global model in FL is highly susceptible to heterogeneity and privacy concerns: discrepancies in data distribution degrade convergence, and uploaded model updates may leak private data. Despite intensive research on these issues, existing approaches fail to balance robustness and privacy in FL. Furthermore, limiting model updates or iterative clustering tends to fall into local optima in heterogeneous (Non-IID) scenarios. In this work, to address these deficiencies, we propose lightweight privacy-preserving cross-cluster federated learning (PrivCrFL) on Non-IID data, to trade off robustness and privacy in Non-IID settings. PrivCrFL exploits secure one-shot hierarchical clustering with cross-cluster shifting to optimize sub-group convergence. Furthermore, we introduce intra-cluster and inter-cluster learning with separate aggregation for mutual learning between groups. We perform extensive experimental evaluations on three benchmark datasets and compare our results with state-of-the-art studies. The findings indicate that PrivCrFL offers a notable performance enhancement, with improvements ranging from 0.26% to 1.35% across different Non-IID settings. PrivCrFL also achieves a superior communication compression ratio in secure aggregation, outperforming current state-of-the-art works by 10.59%.
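The pipeline sketched in the abstract — clustering clients by the similarity of their model updates in a single pass, then aggregating within each cluster before combining clusters into a global model — can be illustrated with a toy example. Everything below is an illustrative assumption for exposition (a greedy cosine-similarity grouping standing in for the paper's secure one-shot hierarchical clustering; no cross-cluster shifting or secure aggregation is modeled):

```python
import numpy as np

def one_shot_cluster(updates, threshold=0.5):
    """Greedy single-pass clustering of client update vectors by cosine
    similarity. Illustrative stand-in for the paper's secure one-shot
    hierarchical clustering; the threshold is a made-up parameter."""
    clusters, centroids = [], []
    for i, u in enumerate(updates):
        placed = False
        for c, centroid in enumerate(centroids):
            sim = u @ centroid / (np.linalg.norm(u) * np.linalg.norm(centroid) + 1e-12)
            if sim >= threshold:
                clusters[c].append(i)
                # Refresh the centroid with the new member's update.
                centroids[c] = np.mean([updates[j] for j in clusters[c]], axis=0)
                placed = True
                break
        if not placed:
            clusters.append([i])
            centroids.append(u.copy())
    return clusters

def aggregate(updates, clusters):
    """Intra-cluster averaging, then inter-cluster averaging of the
    resulting cluster models into one global update (separate aggregation)."""
    cluster_models = [np.mean([updates[i] for i in c], axis=0) for c in clusters]
    return np.mean(cluster_models, axis=0)

# Two synthetic Non-IID "distributions": clients 0-1 vs. clients 2-3.
rng = np.random.default_rng(0)
updates = [np.array([1.0, 0.0]) + 0.01 * rng.standard_normal(2) for _ in range(2)] \
        + [np.array([0.0, 1.0]) + 0.01 * rng.standard_normal(2) for _ in range(2)]
clusters = one_shot_cluster(updates)
global_update = aggregate(updates, clusters)
print(len(clusters))  # the two distributions end up in two clusters
```

With near-orthogonal update directions, the single pass recovers the two client groups, and the two-level averaging keeps each group's model from being swamped by the other — the intuition behind separate intra-/inter-cluster aggregation.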
Source:
IEEE Transactions on Information Forensics and Security
ISSN: 1556-6013
Year: 2024
Volume: 19
Page: 7404-7419
Impact Factor: 6.300 (JCR@2023)
Cited Count:
SCOPUS Cited Count: 2
ESI Highly Cited Papers on the List: 0