Abstract:
Federated learning has become prevalent in medical diagnosis because it can train a shared model across multiple health institutions (i.e., Data Islands (DIs)). However, increasingly widespread DI-level poisoning attacks, in which poisoned data are injected into certain DIs to corrupt the availability of the federated model, have exposed a vulnerability in federated learning. Previous work on federated learning has been inadequate in ensuring both the privacy of DIs and the availability of the final federated model. In this article, we design SFAP, a secure federated learning mechanism with multiple keys that prevents DI-level poisoning attacks in medical diagnosis. Concretely, SFAP provides privacy-preserving random-forest-based federated learning using multi-key secure computation, which guarantees the confidentiality of DI-related information. Meanwhile, a secure defense strategy over encrypted locally-submitted models is proposed to resist DI-level poisoning attacks. Finally, our formal security analysis and empirical tests on a public cloud platform demonstrate the security and efficiency of SFAP, as well as its ability to resist DI-level poisoning attacks.
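The abstract does not detail SFAP's defense strategy, which in the paper runs over encrypted locally-submitted models. As a rough illustration of the general idea behind DI-level poisoning defenses, the plaintext sketch below filters locally-submitted models that deviate far from the coordinate-wise median before aggregating; the function name, the `tol` parameter, and the threshold rule are all hypothetical and not SFAP's actual protocol.

```python
# Illustrative sketch only (NOT SFAP's actual mechanism): a distance-based
# filter over locally-submitted models, shown in plaintext for clarity.
import numpy as np

def filter_and_aggregate(local_models, tol=2.0):
    """Drop models far from the coordinate-wise median, then average the rest."""
    M = np.asarray(local_models, dtype=float)   # shape: (n_DIs, n_params)
    median = np.median(M, axis=0)               # robust central estimate
    dists = np.linalg.norm(M - median, axis=1)  # per-DI deviation from median
    cutoff = tol * np.median(dists)             # hypothetical threshold rule
    keep = dists <= cutoff                      # DIs retained for aggregation
    return M[keep].mean(axis=0), keep

# Five DIs submit 3-parameter models; the last one is poisoned.
models = [[1.0, 1.1, 0.9],
          [0.9, 1.0, 1.0],
          [1.1, 0.9, 1.1],
          [1.0, 1.0, 0.9],
          [9.0, -8.0, 7.5]]  # poisoned update
agg, kept = filter_and_aggregate(models)
```

With this toy input the poisoned DI is excluded and the aggregate stays close to the benign models' mean; in SFAP the analogous check is performed without ever decrypting the submitted models.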
Source: IEEE TRANSACTIONS ON SERVICES COMPUTING
ISSN: 1939-1374
Year: 2022
Issue: 6
Volume: 15
Page: 3429-3442
Impact Factor: 8.1 (JCR@2022); 5.500 (JCR@2023)
ESI Discipline: COMPUTER SCIENCE;
ESI HC Threshold:61
JCR Journal Grade:1
CAS Journal Grade:1
Cited Count:
WoS CC Cited Count: 16
SCOPUS Cited Count: 20
ESI Highly Cited Papers on the List: 0