Abstract:
Federated Learning (FL) enables multiple clients to collaborate in training neural network models while retaining their private data locally. Despite its advantages, FL is vulnerable to backdoor attacks due to its distributed nature: attackers introduce triggers into the global model, causing it to make attacker-specified predictions on inputs containing these triggers. Existing detection and clustering defenses based on distance and similarity have significant limitations, and methods based on clipping and adding noise only slightly mitigate the impact of backdoor attacks. To achieve a better defense, we introduce FedCL, a backdoor defense framework that accurately detects backdoor models by assessing the uncertainty of model predictions. In addition, it employs dynamic clipping to limit the impact of model updates, successfully mitigating backdoor attacks without compromising the global model's accuracy. Experiments indicate that FedCL improves on state-of-the-art (SOTA) defense methods by 0.01% to 85.78%, especially in the CIFAR-10 task trained with more complex networks. © 2024 IEEE.
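The abstract describes two defense steps: uncertainty-based detection of backdoored client models and dynamic clipping of the surviving updates. Below is a minimal Python sketch of how such a pipeline could look; the entropy-based uncertainty score, the fixed threshold, and the median-norm clipping bound are illustrative assumptions, not the paper's actual FedCL algorithm.

import numpy as np

def prediction_entropy(probs):
    # Mean Shannon entropy of a model's softmax outputs on a held-out
    # clean batch; unusually low entropy (overconfidence) can flag a
    # potentially backdoored update. probs has shape (n_samples, n_classes).
    eps = 1e-12
    return float(np.mean(-np.sum(probs * np.log(probs + eps), axis=1)))

def aggregate_with_dynamic_clip(updates, uncertainties, threshold):
    # Discard updates whose uncertainty falls below the threshold, then
    # clip the survivors to the median L2 norm (assumed clipping rule)
    # before averaging them into the global update.
    kept = [u for u, s in zip(updates, uncertainties) if s >= threshold]
    if not kept:
        return None
    bound = float(np.median([np.linalg.norm(u) for u in kept]))
    clipped = [u * min(1.0, bound / (np.linalg.norm(u) + 1e-12)) for u in kept]
    return np.mean(clipped, axis=0)

# Example: three flattened client updates; the third is both oversized and
# suspiciously confident, so it is filtered out before clipped averaging.
rng = np.random.default_rng(0)
updates = [rng.normal(size=100) * s for s in (1.0, 1.1, 5.0)]
scores = [0.92, 0.85, 0.04]  # e.g., prediction_entropy on a clean batch
global_update = aggregate_with_dynamic_clip(updates, scores, threshold=0.3)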
ISSN: 1945-7871
Year: 2024
Language: English