Abstract:
Federated learning (FL) has achieved state-of-the-art performance in distributed learning tasks with privacy requirements. However, FL has been shown to be vulnerable to adversarial attacks. Typical gradient inversion attacks attempt to recover a client's private input in a white-box manner, where the adversary is assumed to be either a client or the server. But if both the clients and the server are honest and fully trusted, is FL secure? In this paper, we propose the External Gradient Inversion Attack (EGIA), a novel method for the grey-box setting. Specifically, we exploit the widely ignored fact that the gradients shared in FL are always transmitted through intermediary network nodes. On this basis, we demonstrate that an external adversary can reconstruct private inputs from intercepted gradients even when both the clients and the server are honest and fully trusted. We also provide a comprehensive theoretical analysis of the black-box scenario in which the adversary has access only to the gradients. Extensive experiments on multiple real-world datasets validate that EGIA is highly effective.
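For context, the attack family the abstract describes recovers training data by matching gradients. The sketch below illustrates the general gradient-matching idea (in the style of "Deep Leakage from Gradients"), not the paper's actual EGIA procedure; the model, input shapes, and optimizer settings are illustrative assumptions. An eavesdropper who intercepts a client's gradient in transit optimizes dummy inputs until their gradient matches the intercepted one.

```python
# Minimal gradient-matching inversion sketch (hypothetical; not the
# authors' EGIA implementation). Requires a recent PyTorch that
# supports probabilistic targets in CrossEntropyLoss (>= 1.10).
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # assumed victim model
criterion = nn.CrossEntropyLoss()

# Victim side: a gradient computed on private data, intercepted in transit.
x_true = torch.rand(1, 1, 28, 28)
y_true = torch.tensor([3])
true_grads = torch.autograd.grad(criterion(model(x_true), y_true),
                                 model.parameters())
true_grads = [g.detach() for g in true_grads]

# Attacker side: reconstruct the input from the intercepted gradient alone.
x_dummy = torch.rand(1, 1, 28, 28, requires_grad=True)
y_dummy = torch.rand(1, 10, requires_grad=True)  # soft label, also optimized
opt = torch.optim.LBFGS([x_dummy, y_dummy])

for _ in range(30):
    def closure():
        opt.zero_grad()
        loss = criterion(model(x_dummy), y_dummy.softmax(dim=-1))
        dummy_grads = torch.autograd.grad(loss, model.parameters(),
                                          create_graph=True)
        # Gradient-matching objective: L2 distance between the dummy
        # gradient and the intercepted gradient.
        grad_diff = sum(((dg - tg) ** 2).sum()
                        for dg, tg in zip(dummy_grads, true_grads))
        grad_diff.backward()
        return grad_diff
    opt.step(closure)
# After optimization, x_dummy approximates the victim's private input.
```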
Source: IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY
ISSN: 1556-6013
Year: 2023
Volume: 18
Page: 4984-4995
Impact Factor: 6.300 (JCR@2023)
ESI Discipline: COMPUTER SCIENCE
ESI HC Threshold: 32
JCR Journal Grade:1
CAS Journal Grade:1
Cited Count:
WoS CC Cited Count: 7
SCOPUS Cited Count: 9
ESI Highly Cited Papers on the List: 0