Abstract:
The rapid explosion of user data, especially in neural-network applications, involves analyzing data collected from individuals, which brings convenience to daily life. At the same time, privacy leakage in these applications is a potential threat that needs to be addressed urgently. However, once a user's sensitive data enters a machine learning model, particularly a neural network, removing that private information from the model is difficult. Most previous amnesic methods based on retraining require full access to the training set of the target model and offer limited savings in computation and time. In this paper, we propose Scrubber, which removes sensitive data from the original model via influence estimation, producing an unlearned model that is approximately indistinguishable from the retrained model. Scrubber builds on the essential concept of the influence function and reformulates influence estimation as a closed-form forgetting update. For learned models with strictly convex loss functions, our approach theoretically guarantees the effectiveness of forgetting while empirically demonstrating strong forgetting performance. For models with non-convex losses, we relax the strict convexity assumption by applying a damping term, which allows approximate estimates with negligible error relative to the original assumption. Furthermore, experiments show that Scrubber causes less than a 1% and 3% accuracy drop, with more than an 80% forgetting rate on average, for logistic regression models and convolutional neural networks, respectively. The accuracy drop is 2%-3% smaller than that of most state-of-the-art methods.
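The closed-form forgetting update the abstract describes follows the classic influence-function recipe: starting from the trained weights, take one Newton-style step that cancels the removed sample's contribution to the objective. As an illustration only (the function names, hyperparameters, and training loop below are assumptions, not the paper's actual implementation), a minimal sketch for L2-regularized logistic regression, whose loss is strictly convex, might look like:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logreg(X, y, l2=0.1, lr=0.5, steps=2000):
    """Gradient descent on the L2-regularized logistic loss (strictly convex)."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(steps):
        grad = X.T @ (sigmoid(X @ w) - y) / n + l2 * w
        w -= lr * grad
    return w

def unlearn(w, X, y, idx, l2=0.1, damping=0.0):
    """One Newton-style influence update that approximately removes sample
    `idx` from the trained weights instead of retraining from scratch.
    The `damping` term mirrors the relaxation used for non-convex losses;
    it can stay 0 for a strictly convex objective."""
    n, d = X.shape
    x, t = X[idx], y[idx]
    p = sigmoid(X @ w)
    # Hessian of the full training objective, plus optional damping.
    H = (X.T * (p * (1 - p))) @ X / n + (l2 + damping) * np.eye(d)
    # Gradient contribution freed by dropping the sample, including its
    # share of the regularizer under the 1/(n-1) reweighting.
    g = x * (sigmoid(x @ w) - t) + l2 * w
    return w + np.linalg.solve(H, g) / (n - 1)
```

For a strongly convex objective, the unlearned weights land close to those obtained by retraining on the reduced set; the approximation error is second-order in the removed sample's influence, which is what makes the unlearned model approximately indistinguishable from the retrained one.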
Source:
INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS
ISSN: 0884-8173
Year: 2022
Issue: 11
Volume: 37
Page: 9080-9107
Impact Factor: 7.0 (JCR@2022); 5.000 (JCR@2023)
ESI Discipline: ENGINEERING;
ESI HC Threshold:66
JCR Journal Grade:1
CAS Journal Grade:2
ESI Highly Cited Papers on the List: 0