Abstract:
Federated learning (FL) is a machine learning framework in which models are trained jointly by multiple parties. We investigate the privacy preservation of XGBoost, a gradient boosting decision tree (GBDT) model, in the FL setting. Recent work relies on cryptographic schemes to protect model gradients, but these methods are computationally expensive. In this paper, we propose an adaptive gradient privacy-preserving algorithm based on differential privacy (DP) that is more computationally efficient. Our algorithm perturbs each sample's data by computing an adaptive per-sample gradient mean and adding calibrated noise during XGBoost training, while keeping the perturbed gradients usable for model updates. The training accuracy and communication efficiency of the model are guaranteed while satisfying the definition of DP. We show that the proposed algorithm outperforms other DP methods in prediction accuracy and approaches the lossless federated XGBoost model while being more efficient. © 2023 ACM.
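The abstract does not give the paper's exact adaptive-mean construction, but the core idea it describes, bounding each sample's gradient and adding calibrated noise before the trees are fit, can be sketched with XGBoost's custom-objective hook. The Python sketch below is a minimal illustration assuming Laplace noise, a binary logistic objective, and hypothetical helper names (dp_perturb_gradients, make_dp_logistic_obj); it is not the authors' algorithm.

import numpy as np
import xgboost as xgb

def dp_perturb_gradients(grads, epsilon, clip=1.0, rng=None):
    # Hypothetical helper: clip per-sample gradients, then add Laplace noise.
    # Each gradient is bounded to [-clip, clip], so one sample changes the
    # released vector by at most 2*clip in its own coordinate; Laplace noise
    # with scale 2*clip/epsilon then gives epsilon-DP for that sample.
    rng = np.random.default_rng() if rng is None else rng
    clipped = np.clip(grads, -clip, clip)
    noise = rng.laplace(0.0, 2.0 * clip / epsilon, size=grads.shape)
    return clipped + noise

def make_dp_logistic_obj(epsilon):
    # Binary-logistic custom objective that releases DP-noised gradients.
    def obj(preds, dtrain):
        y = dtrain.get_label()
        p = 1.0 / (1.0 + np.exp(-preds))             # predicted probabilities
        grad = dp_perturb_gradients(p - y, epsilon)  # noised first-order term
        hess = np.maximum(p * (1.0 - p), 1e-6)       # unnoised Hessian, floored
        return grad, hess
    return obj

# Hypothetical usage on synthetic data.
X = np.random.randn(200, 5)
y = (X[:, 0] > 0).astype(float)
dtrain = xgb.DMatrix(X, label=y)
booster = xgb.train({"max_depth": 3, "eta": 0.3}, dtrain,
                    num_boost_round=10, obj=make_dp_logistic_obj(epsilon=1.0))

In this sketch only the first-order gradients are perturbed; whether (and how) the paper also protects the second-order statistics is not stated in the abstract.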
Year: 2023
Page: 277-282
Language: English