Abstract:
Adversarial training is believed to be the most robust and effective defense method against adversarial attacks. Gradient-based adversarial attack methods are generally adopted to evaluate the effectiveness of adversarial training. However, in this paper, by examining the existing adversarial attack literature, we find that adversarial examples generated by these attack methods tend to be more perceptible, which may lead to an inaccurate estimate of the effectiveness of adversarial training. Existing adversarial attacks mostly adopt gradient-based optimization, which has difficulty finding the most effective adversarial examples (i.e., the global extreme points). In contrast, in this work we propose a novel Non-Gradient Attack (NGA) to overcome the above problem. Extensive experiments show that NGA significantly outperforms state-of-the-art adversarial attacks in Attack Success Rate (ASR) by 2% ∼ 7%. © 2022 IEEE
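To make the contrast drawn in the abstract concrete, the sketch below shows a standard gradient-based attack (PGD-style) next to a simple gradient-free random search over the L-infinity ball. This is only an illustration of the two attack families; the random-search routine is a hypothetical stand-in and is not the paper's NGA, whose details are not given in the abstract. The names model, x, y, eps, alpha, steps, and queries are assumptions introduced for this example.

```python
import torch
import torch.nn.functional as F


def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Gradient-based attack (PGD-style): steps along the sign of the loss
    gradient, so it can stall at local extreme points of the loss surface."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        # Project back into the L-infinity ball of radius eps and the valid pixel range.
        x_adv = torch.clamp(torch.min(torch.max(x_adv, x - eps), x + eps), 0, 1)
    return x_adv


def random_search_attack(model, x, y, eps=8 / 255, queries=100):
    """Gradient-free illustration (NOT the paper's NGA): keeps the random
    L-infinity perturbation that maximizes the loss, using only forward passes."""
    with torch.no_grad():
        best = x.clone()
        best_loss = F.cross_entropy(model(best), y)
        for _ in range(queries):
            delta = (torch.rand_like(x) * 2 - 1) * eps
            cand = torch.clamp(x + delta, 0, 1)
            loss = F.cross_entropy(model(cand), y)
            if loss > best_loss:
                best, best_loss = cand, loss
    return best
```

The gradient-based routine follows the loss gradient and can therefore get trapped near local extrema, while the gradient-free routine explores the perturbation ball using only model queries, which is the kind of search the abstract motivates.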
ISSN: 1520-6149
Year: 2022
Volume: 2022-May
Page: 3293-3297
Language: English
Cited Count:
WoS CC Cited Count: 0
SCOPUS Cited Count: 4
ESI Highly Cited Papers on the List: 0