Author:

Wang, Yajie [1] | Wu, Yi [2] | Wu, Shangbo [3] | Liu, Ximeng [4] | Zhou, Wanlei [5] | Zhu, Liehuang [6] | Zhang, Chuan [7]

Indexed by:

EI

Abstract:

Deep neural networks (DNNs) are vulnerable to adversarial examples, with transfer attacks in black-box scenarios posing a severe real-world threat. Adversarial perturbations are often global image disturbances crafted in the spatial domain, leading to perceptible noise due to overfitting to the source model. Both the human visual system (HVS) and DNNs (which endeavor to mimic HVS behavior) exhibit unequal sensitivity to different frequency components of an image. In this paper, we exploit this characteristic to create frequency-aware perturbations, concentrating adversarial perturbations on the image components that contribute most to model inference in order to enhance the performance of transfer attacks. We devise a systematic approach to select and constrain adversarial optimization within a subset of frequency components that are more critical to model prediction. Specifically, we measure the contribution of each individual frequency component and devise a scheme to concentrate adversarial optimization on these important components, thereby creating frequency-aware perturbations. Our approach confines perturbations within model-agnostic critical frequency components, significantly reducing overfitting to the source model, and can be seamlessly integrated with existing state-of-the-art attacks. Experiments demonstrate that although concentrating the perturbation within selected frequency components yields a smaller overall perturbation magnitude, our approach does not sacrifice adversarial effectiveness. On the contrary, the frequency-aware perturbation achieves superior performance, boosting imperceptibility, transferability, and evasion against various defenses. © 2005-2012 IEEE.
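
A minimal illustrative sketch of the idea described in the abstract, assuming a DCT-based frequency decomposition and a precomputed per-component importance score; the function and parameter names here (dct2, frequency_mask, project_perturbation, keep_ratio) are hypothetical and this is not the authors' implementation:

```python
# Hypothetical sketch: restrict a spatial-domain perturbation to the most
# "important" DCT frequency components, as the abstract describes at a high
# level. Importance scores are assumed to be given; this is not the paper's
# released code.
import numpy as np
from scipy.fftpack import dct, idct


def dct2(channel):
    """2-D type-II DCT of a single image channel (H, W)."""
    return dct(dct(channel, axis=0, norm="ortho"), axis=1, norm="ortho")


def idct2(coeffs):
    """Inverse 2-D DCT, back to the spatial domain."""
    return idct(idct(coeffs, axis=0, norm="ortho"), axis=1, norm="ortho")


def frequency_mask(importance, keep_ratio=0.2):
    """Binary mask keeping the top `keep_ratio` fraction of frequency
    components ranked by a per-component importance score (H, W)."""
    threshold = np.quantile(importance, 1.0 - keep_ratio)
    return (importance >= threshold).astype(np.float64)


def project_perturbation(delta, importance, keep_ratio=0.2):
    """Constrain a perturbation `delta` (H, W, C) to the selected
    frequency components: transform, mask, transform back."""
    mask = frequency_mask(importance, keep_ratio)
    out = np.empty_like(delta)
    for c in range(delta.shape[-1]):
        out[..., c] = idct2(dct2(delta[..., c]) * mask)
    return out
```

In a transfer-attack loop, such a projection would be applied to the perturbation after each optimization step, so the adversarial update stays confined to the model-agnostic critical frequency components.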

Keyword:

Computer vision; Cosine transforms; Deep neural networks; Discrete cosine transforms; Discrete Fourier transforms; Frequency domain analysis; Image coding; Image enhancement; Image reconstruction; Perturbation techniques

Community:

  • [ 1 ] [Wang, Yajie] Beijing Institute of Technology, School of Cyberspace Science and Technology, Beijing 100081, China
  • [ 2 ] [Wu, Yi] Academy of Information and Communications Technology, Beijing 100191, China
  • [ 3 ] [Wu, Yi] Ministry of Industry and Information Technology, Key Laboratory of Mobile Application Innovation and Governance Technology, Beijing 100191, China
  • [ 4 ] [Wu, Shangbo] Beijing Institute of Technology, School of Cyberspace Science and Technology, Beijing 100081, China
  • [ 5 ] [Liu, Ximeng] Fuzhou University, College of Computer and Data Science, Fuzhou 350116, China
  • [ 6 ] [Zhou, Wanlei] City University of Macau, Faculty of Data Science, China
  • [ 7 ] [Zhu, Liehuang] Beijing Institute of Technology, School of Cyberspace Science and Technology, Beijing 100081, China
  • [ 8 ] [Zhang, Chuan] Beijing Institute of Technology, School of Cyberspace Science and Technology, Beijing 100081, China

Reprint's Address:

Email:

Related Keywords:

Related Article:

Source:

IEEE Transactions on Information Forensics and Security

ISSN: 1556-6013

Year: 2024

Volume: 19

Page: 6293-6304

6.300 (JCR@2023)

Cited Count:

WoS CC Cited Count:

SCOPUS Cited Count:

ESI Highly Cited Papers on the List: 0

WanFang Cited Count:

Chinese Cited Count:

Affiliated Colleges:
