Abstract:
With the pursuit of higher accuracy, convolutional neural networks have become deeper, and the hardware overhead they occupy has increased accordingly. The Batch Normalization (BN) operation is an indispensable part of the network, and the hardware resources it occupies cannot be ignored. In this paper, aiming to reduce the hardware overhead caused by the BN operation, we combine the BN operation and the ReLU activation function and propose the BNReLU operation, which requires no retraining of the network. The experimental results show that, compared with the traditional BN+ReLU layer, the hardware overhead of the BNReLU operation is reduced by about 50%. With 32-bit floating-point input for each layer, the hardware overhead of BRAM, DSP, FF, and LUT for a convolutional layer on the FPGA is reduced by 17.16%, 23.08%, 17.64%, and 16.05%, respectively. © 2019 IEEE.
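The abstract does not spell out how BN and ReLU are merged, but at inference time BN reduces to a per-channel affine transform, so BN followed by ReLU can in principle be folded into a single multiply-add followed by a clamp. The sketch below illustrates this folding under that assumption; the function names and parameter values are illustrative, not the paper's implementation.

```python
import numpy as np

def fold_bn_relu(gamma, beta, mean, var, eps=1e-5):
    """Fold inference-time BatchNorm parameters into one scale and shift.

    BN at inference: y = gamma * (x - mean) / sqrt(var + eps) + beta
                       = scale * x + shift
    Followed by ReLU, the fused op is simply max(0, scale * x + shift).
    """
    scale = gamma / np.sqrt(var + eps)
    shift = beta - mean * scale
    return scale, shift

def bn_relu(x, scale, shift):
    """Fused BNReLU: one multiply-add per element, then a clamp at zero."""
    return np.maximum(0.0, scale * x + shift)

# Toy example for a single channel (hypothetical statistics).
gamma, beta, mean, var = 1.2, 0.1, 0.5, 0.25
scale, shift = fold_bn_relu(gamma, beta, mean, var)
x = np.array([-1.0, 0.0, 0.5, 2.0], dtype=np.float32)
print(bn_relu(x, scale, shift))
```

Because the folded parameters are computed once from the trained BN statistics, this kind of fusion needs no retraining, which is consistent with the claim in the abstract.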
ISSN: 2162-7541
Year: 2019
Language: English
SCOPUS Cited Count: 8
ESI Highly Cited Papers on the List: 0