Abstract:
Gradient sharing is widely used to protect the privacy of training data in distributed learning. However, a growing body of research shows that the gradients or model parameters transmitted in distributed systems can still leak users' private information. Most of these studies intercept the data transmitted by an individual user to recover that user's private information; a smaller number reconstruct the private data of many users from the average gradients distributed by the server. Existing work has analyzed such leakage in the image and text domains, but has not examined the leakage inherent in recommendation systems operating in distributed environments, nor has the impact of batch size and model parameters on the degree of leakage been sufficiently analyzed. This paper shows that, in recommendation systems, the private user data embedded in the average gradients or model parameters distributed by the server can be inferred by a recipient. It further evaluates how various system parameters affect the extent of leakage and presents corresponding defensive measures, assessing their effectiveness.
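The general leakage channel the abstract describes can be illustrated with a minimal sketch (the model, sizes, and data below are hypothetical examples, not taken from the paper): in a matrix-factorization recommender, the gradient of a squared loss with respect to the item-embedding table is nonzero only in the rows of items the user actually rated, so an observer of a single user's gradient directly recovers the interaction set.

```python
import numpy as np

# Hypothetical tiny matrix-factorization recommender:
# score(u, i) = user_vec @ item_emb[i], squared loss on rated items.
rng = np.random.default_rng(0)
n_items, dim = 10, 4
item_emb = rng.normal(size=(n_items, dim))
user_vec = rng.normal(size=dim)

# Secret interactions: this user rated items 2 and 7.
ratings = {2: 1.0, 7: 0.0}

# Gradient of the squared loss w.r.t. the item-embedding table.
grad_item_emb = np.zeros_like(item_emb)
for i, r in ratings.items():
    err = item_emb[i] @ user_vec - r
    grad_item_emb[i] = 2 * err * user_vec

# An observer of the transmitted gradient recovers the interaction
# set: only the rated items' rows are nonzero.
leaked = np.flatnonzero(np.linalg.norm(grad_item_emb, axis=1) > 0).tolist()
print(leaked)
```

Averaging gradients over a batch of users blurs, but does not eliminate, this signal, which is why the paper studies how batch size affects the degree of leakage.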
Source:
IEEE ACCESS
ISSN: 2169-3536
Year: 2024
Volume: 12
Page: 173037-173046
Impact Factor: 3.400 (JCR@2023)
ESI Highly Cited Papers on the List: 0