Abstract:
Deep neural networks are especially vulnerable to adversarial examples, which can mislead classifiers through imperceptible perturbations. While previous research can effectively generate adversarial examples in the white-box setting, producing threatening adversarial examples in the black-box setting, where attackers can only obtain models' predictions on inputs, remains a challenge. A feasible solution is to harness the transferability of adversarial examples, the property that allows an adversarial example to successfully attack multiple models simultaneously. This paper therefore explores how to enhance the transferability of adversarial examples and proposes a Nadam-based iterative algorithm (NAI-FGM). NAI-FGM achieves better convergence and effectively corrects the update deviation, thereby boosting the transferability of adversarial examples. To validate the effectiveness and transferability of the adversarial examples generated by NAI-FGM, this study conducts attacks on various single models and ensemble models on the open CIFAR-10 and CIFAR-100 datasets. Experimental results show that NAI-FGM achieves higher transferability on average against black-box models than state-of-the-art methods. © COPYRIGHT SPIE.
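The abstract does not give the paper's exact update rule. As context only, the sketch below shows one plausible way a Nadam-style momentum update could drive an iterative sign-gradient attack; the function name `nai_fgm`, the hyperparameter defaults, the simplified Nesterov bias correction, and the [0, 1] pixel range are all assumptions, not the published algorithm.

```python
# Minimal sketch of a Nadam-based iterative fast gradient attack.
# Assumption: this follows the standard Nadam optimizer applied to an
# iterative sign-gradient attack; it is NOT the paper's verified NAI-FGM.
import torch
import torch.nn.functional as F

def nai_fgm(model, x, y, eps=8/255, steps=10, beta1=0.9, beta2=0.999, tiny=1e-8):
    """Craft adversarial examples with a Nadam-style momentum update."""
    alpha = eps / steps                  # per-step perturbation budget
    x_adv = x.clone().detach()
    m = torch.zeros_like(x)              # first-moment (momentum) estimate
    v = torch.zeros_like(x)              # second-moment estimate
    for t in range(1, steps + 1):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        # Nadam moment updates with a simplified Nesterov look-ahead
        m = beta1 * m + (1 - beta1) * grad
        v = beta2 * v + (1 - beta2) * grad * grad
        m_hat = (beta1 * m + (1 - beta1) * grad) / (1 - beta1 ** t)
        v_hat = v / (1 - beta2 ** t)
        step = m_hat / (v_hat.sqrt() + tiny)
        # Take a signed step, then project back into the eps-ball and [0, 1]
        x_adv = x_adv.detach() + alpha * step.sign()
        x_adv = torch.clamp(torch.min(torch.max(x_adv, x - eps), x + eps), 0, 1)
    return x_adv
```

In this reading, the second-moment normalization rescales each pixel's update before the sign is taken, which is one way a Nadam-style rule could "correct the deviation" of plain momentum attacks that the abstract alludes to.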
ISSN: 0277-786X
Year: 2022
Volume: 12162
Language: English