Abstract:
Applications based on deep learning are prevalent in many real-world scenarios. However, due to the limited interpretability of deep learning and the emergence of adversarial example attacks, their reliability has been called into question. In Natural Language Processing (NLP), adversarial samples that are semantically similar to the original inputs can be generated to fool NLP classifiers without attracting the attention of human observers. In this paper, we give an overview of existing methods for implementing adversarial example attacks. Firstly, we introduce how these methods are implemented and the damage they inflict on the security, integrity, and robustness of NLP systems. Secondly, we discuss open problems such as the limited attention paid to the logic of defense, the curse of dimensionality, and the difficulty of implementing white-box adversaries. Finally, to present our view on the structure of ML security, we also discuss the standardization of both attacks and defenses. © 2021 IEEE.
Year: 2021
Page: 707-711
Language: English
Cited Count:
SCOPUS Cited Count: 6
ESI Highly Cited Papers on the List: 0