Abstract:
Federated learning (FL) is an emerging paradigm for privacy-preserving machine learning, in which multiple clients collaborate to produce a global model by training individual models on local data. However, FL is vulnerable to model poisoning attacks (MPAs), because malicious clients can corrupt the global model by manipulating their local models. Although numerous model poisoning defenses have been studied extensively, they remain vulnerable to newly proposed optimized MPAs and are constrained by the need to presuppose a certain proportion of malicious clients. To this end, in this paper, we propose MODEL, a model poisoning defense framework for FL based on truth discovery (TD). A distinctive aspect of MODEL is its ability to effectively prevent both optimized and Byzantine MPAs. Furthermore, it requires no presupposed threshold for different settings of malicious clients (e.g., less than 33% or no more than 50%). Specifically, a TD-based metric and a clustering-based filtering mechanism are proposed to evaluate local models without presupposing such a threshold. MODEL is also effective on non-independent and identically distributed (non-IID) training data. In addition, inspired by game theory, we incorporate a truthful and fair incentive mechanism in MODEL to encourage active client participation while discouraging attacks by malicious clients. Extensive comparative experiments demonstrate that MODEL effectively safeguards against optimized MPAs and outperforms the state-of-the-art.
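The abstract outlines the core mechanism: a TD-based metric scores each local model by its agreement with an iteratively estimated "truth", and a clustering step then separates reliable from unreliable clients, so no malicious-fraction threshold has to be assumed. The record contains no code, so the following is only a minimal illustrative sketch of that general idea in Python; the function names (td_aggregate, filter_and_aggregate), the specific log-ratio TD weighting, and the 2-means split on reliability weights are assumptions for illustration, not the authors' exact algorithm.

import numpy as np
from sklearn.cluster import KMeans

def td_aggregate(updates, n_iters=10, eps=1e-8):
    """Estimate an aggregated 'truth' update and per-client reliability weights.

    updates: (n_clients, dim) array of flattened local model updates.
    Returns (truth, weights). Hypothetical sketch of a generic TD loop.
    """
    n = len(updates)
    weights = np.full(n, 1.0 / n)  # start from uniform reliability
    for _ in range(n_iters):
        # Truth update: reliability-weighted average of client updates.
        truth = np.average(updates, axis=0, weights=weights)
        # Client deviation from the current truth estimate.
        dists = np.linalg.norm(updates - truth, axis=1) + eps
        # Reliability update: a common TD weighting, w_i proportional to
        # log(sum_j d_j / d_i); clients far from the truth get low weight.
        weights = np.clip(np.log(dists.sum() / dists), eps, None)
        weights /= weights.sum()
    return truth, weights

def filter_and_aggregate(updates):
    """Cluster clients by TD reliability and keep the more reliable cluster,
    avoiding any fixed assumption on the fraction of malicious clients."""
    updates = np.asarray(updates)
    _, weights = td_aggregate(updates)
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(weights.reshape(-1, 1))
    # Keep whichever cluster has the higher mean reliability.
    good = max((0, 1), key=lambda c: weights[labels == c].mean())
    return updates[labels == good].mean(axis=0)

# Usage: 8 benign clients near a common direction, 2 poisoned outliers.
rng = np.random.default_rng(0)
benign = rng.normal(1.0, 0.1, size=(8, 5))
poisoned = rng.normal(-5.0, 0.1, size=(2, 5))
print(filter_and_aggregate(np.vstack([benign, poisoned])))  # near benign mean

Note the design point the abstract emphasizes: filtering is done by clustering the reliability weights rather than by thresholding them, which is what removes the need to presuppose a bound such as "fewer than 33% malicious".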
Source: IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY
ISSN: 1556-6013
Year: 2024
Volume: 19
Pages: 8747-8759
Impact Factor: 6.300 (JCR@2023)