
[Journal Article]

Boosting the Transferability of Adversarial Attacks with Frequency-aware Perturbation


Author:

Wang, Y. [1] | Wu, Y. [2] | Wu, S. [3] | Liu, X. [4] | Zhou, W. [5] | Zhu, L. [6] | Zhang, C. [7]

Indexed by:

Scopus

Abstract:

Deep neural networks (DNNs) are vulnerable to adversarial examples, and transfer attacks in black-box scenarios pose a severe real-world threat. Adversarial perturbations are typically global image disturbances crafted in the spatial domain, which leads to perceptible noise due to overfitting to the source model. Both the human visual system (HVS) and DNNs (which endeavor to mimic HVS behavior) exhibit unequal sensitivity to different frequency components of an image. In this paper, we exploit this characteristic to create frequency-aware perturbations, concentrating adversarial perturbations on the image components that contribute most to model inference in order to enhance the performance of transfer attacks. We devise a systematic approach to select and constrain adversarial optimization within a subset of frequency components that are more critical to model prediction. Specifically, we measure the contribution of each individual frequency component and concentrate adversarial optimization on the important ones, thereby creating frequency-aware perturbations. Confining perturbations to model-agnostic critical frequency components significantly reduces overfitting to the source model, and our approach can be seamlessly integrated with existing state-of-the-art attacks. Experiments demonstrate that although concentrating perturbation within selected frequency components yields a smaller overall perturbation magnitude, our approach does not sacrifice adversarial effectiveness. On the contrary, frequency-aware perturbation achieves superior performance, boosting imperceptibility, transferability, and evasion against various defenses.
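The abstract sketches a mechanism: rank frequency components by their contribution to model prediction, then confine the adversarial perturbation to the top-ranked components. Below is a minimal Python sketch of that idea, assuming a 2-D DCT decomposition and a hypothetical scores array standing in for the paper's contribution measurement; the names frequency_mask and project_perturbation are illustrative, not the authors' implementation.

import numpy as np
from scipy.fftpack import dct, idct

def dct2(x):
    # Orthonormal 2-D type-II DCT of an H x W array.
    return dct(dct(x, axis=0, norm="ortho"), axis=1, norm="ortho")

def idct2(X):
    # Inverse of dct2.
    return idct(idct(X, axis=0, norm="ortho"), axis=1, norm="ortho")

def frequency_mask(scores, keep_ratio=0.25):
    # Binary mask selecting the top-`keep_ratio` fraction of frequency
    # components, ranked by `scores` (a hypothetical importance measure).
    k = max(1, int(keep_ratio * scores.size))
    thresh = np.partition(scores.ravel(), -k)[-k]
    return (scores >= thresh).astype(np.float64)

def project_perturbation(delta, scores, keep_ratio=0.25):
    # Confine a spatial perturbation `delta` to the selected frequency
    # components: transform, mask, transform back.
    return idct2(dct2(delta) * frequency_mask(scores, keep_ratio))

# Toy usage with random data; in the paper, `scores` would come from
# measuring each frequency component's contribution to model prediction.
rng = np.random.default_rng(0)
delta = rng.normal(scale=0.03, size=(224, 224))   # candidate perturbation
scores = rng.random((224, 224))                   # stand-in importance scores
delta_fa = project_perturbation(delta, scores, keep_ratio=0.25)
print(delta_fa.shape, float(np.abs(delta_fa).mean()))

In an actual attack, such a projection would presumably be applied to the perturbation (or its gradient) at each iteration of an iterative attack such as I-FGSM, so optimization never leaves the selected frequency subspace.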

Keyword:

Adversarial Attack; Adversarial Example; Deep Neural Networks; Discrete Cosine Transforms; Frequency-domain Analysis; Image Reconstruction; Optimization; Perturbation Methods; Predictive Models; Sensitivity; Transferability

Affiliations:

  • [1] [Wang Y.] School of Cyberspace Science and Technology, Beijing Institute of Technology, Beijing, China
  • [2] [Wu Y.] China Academy of Information and Communications Technology and the Key Laboratory of Mobile Application Innovation and Governance Technology, Ministry of Industry and Information Technology, Beijing, China
  • [3] [Wu S.] School of Cyberspace Science and Technology, Beijing Institute of Technology, Beijing, China
  • [4] [Liu X.] College of Computer and Data Science, Fuzhou University, Fuzhou, China
  • [5] [Zhou W.] Faculty of Data Science, City University of Macau, Macau, China
  • [6] [Zhu L.] School of Cyberspace Science and Technology, Beijing Institute of Technology, Beijing, China
  • [7] [Zhang C.] School of Cyberspace Science and Technology, Beijing Institute of Technology, Beijing, China


Source :

IEEE Transactions on Information Forensics and Security

ISSN: 1556-6013

Year: 2024

Volume: 19

Page: 1-1

Impact Factor: 6.300 (JCR@2023)
