Abstract:
Nowadays, machine learning models, especially neural networks, have become prevalent in many real-world applications. These models are trained on a one-way trip from user data: once users contribute their data, there is no way to withdraw it. To this end, machine unlearning has become a popular research topic, allowing the model trainer to unlearn unexpected data from a trained machine learning model. In this article, we propose the first uniform metric, called the forgetting rate, to measure the effectiveness of a machine unlearning method. It is based on the concept of membership inference and describes the transformation rate of the eliminated data from 'memorized' to 'unknown' after conducting unlearning. We also propose a novel unlearning method called Forsaken. It is superior to previous work in either utility or efficiency (when achieving the same forgetting rate). We benchmark Forsaken on eight standard datasets to evaluate its performance. The experimental results show that it achieves more than a 90% forgetting rate on average while causing less than 5% accuracy loss.
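As described in the abstract, the forgetting rate measures the fraction of erased samples that a membership-inference oracle judged 'memorized' (member) before unlearning but 'unknown' (non-member) afterwards. A minimal illustrative sketch, assuming boolean membership-inference verdicts before and after unlearning (the labels below are hypothetical stand-ins, not the paper's actual attack outputs):

```python
# Hypothetical sketch of the forgetting-rate metric: among erased samples
# inferred as members ("memorized") before unlearning, count the fraction
# inferred as non-members ("unknown") afterwards.

def forgetting_rate(before, after):
    """before/after: lists of booleans, True = inferred as a member."""
    memorized = [i for i, m in enumerate(before) if m]
    if not memorized:
        return 0.0
    forgotten = sum(1 for i in memorized if not after[i])
    return forgotten / len(memorized)

# Illustrative membership-inference verdicts for five erased samples.
before = [True, True, True, False, True]    # pre-unlearning inference
after  = [False, False, True, False, False] # post-unlearning inference
print(forgetting_rate(before, after))       # 3 of 4 memorized samples forgotten
```

A rate of 1.0 would mean every sample the attack could detect before unlearning is undetectable afterwards; the paper reports averages above 90% for Forsaken.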
Source:
IEEE Transactions on Dependable and Secure Computing
ISSN: 1545-5971
Year: 2023
Issue: 4
Volume: 20
Page: 3194-3207
Impact Factor: 7.000 (JCR@2023)
ESI HC Threshold: 32
JCR Journal Grade: 1
CAS Journal Grade: 1
ESI Highly Cited Papers on the List: 0