Indexed by:
Abstract:
Deep learning inference on edge devices is susceptible to security threats, particularly Fault Injection Attacks (FIAs), which are easy to execute and pose a significant risk to the inference. These attacks can alter the memory of the edge device or cause errors in instruction execution. In particular, time-intensive convolution computation is considerably vulnerable during deep learning inference at the edge. To detect and defend against attacks on deep learning inference on heterogeneous edge devices, we propose an efficient hardware-based solution for verifiable model inference named DarkneTV. It leverages an asynchronous mechanism to perform hash checking of convolution weights and verification of convolution computations within the Trusted Execution Environment (TEE) of the Central Processing Unit (CPU) while the integrated Graphics Processing Unit (GPU) runs model inference. It protects the integrity of convolution weights and the correctness of inference results, and effectively detects abnormal weight modifications and incorrect inference results for neural operators. Extensive experimental results show that DarkneTV identifies tiny FIAs against convolution weights and computation with over 99.03% accuracy while incurring little extra time overhead. The asynchronous mechanism significantly improves the performance of verifiable inference: the GPU-accelerated verifiable inference on the Hikey 960 achieves speedups of 8.50x-11.31x compared with the CPU-only mode.
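The integrity-checking idea described in the abstract (hashing convolution weights asynchronously while inference proceeds elsewhere) can be illustrated with a minimal sketch. The layer names, weight values, and threading setup below are hypothetical stand-ins; the actual DarkneTV design runs the checks inside a TEE alongside GPU inference, which this stdlib-only example does not reproduce.

```python
import hashlib
import struct
from concurrent.futures import ThreadPoolExecutor

# Hypothetical convolution weights (stand-ins for real model kernels).
weights = {
    "conv1": [0.1, -0.2, 0.3],
    "conv2": [0.5, 0.05, -0.4],
}

def digest(values):
    """SHA-256 over a canonical little-endian float64 encoding of the weights."""
    return hashlib.sha256(struct.pack(f"<{len(values)}d", *values)).hexdigest()

# Reference digests recorded at deployment time, before any attack.
reference = {name: digest(vals) for name, vals in weights.items()}

def verify(name):
    """Recompute a layer's digest and compare it to the trusted reference."""
    return name, digest(weights[name]) == reference[name]

# Simulate a tiny fault injection flipping one weight of conv2.
weights["conv2"][0] = 0.50001

# Verification runs on worker threads, mimicking the asynchronous check
# that overlaps with inference in the paper's design.
with ThreadPoolExecutor(max_workers=2) as pool:
    results = dict(pool.map(verify, weights))

print(results)  # conv1 intact, conv2 flagged as tampered
```

Even a single-bit or sub-percent change to one weight produces a completely different digest, which is why hash checking detects very small FIAs reliably.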
Keyword:
Reprint Address:
Email:
Source:
IEEE Sensors Journal
ISSN: 1530-437X
Year: 2024
Issue: 17
Volume: 24
Page: 1-1
Impact Factor: 4.300 (JCR@2023)
Cited Count:
SCOPUS Cited Count:
ESI Highly Cited Papers on the List: 0
WanFang Cited Count:
Chinese Cited Count:
30-Day Page Views: 2
Affiliated Colleges: