
Author:

Zhang, Anguo [1] | Li, Xiumin [2] | Gao, Yueming [3] | Niu, Yuzhen [4]

Indexed by:

EI

Abstract:

The biologically discovered intrinsic plasticity (IP) learning rule, which changes the intrinsic excitability of an individual neuron by adaptively tuning the firing threshold, has been shown to be crucial for efficient information processing. However, this learning rule needs extra time for updating operations at each step, causing extra energy consumption and reducing the computational efficiency. The event-driven or spike-based coding strategy of spiking neural networks (SNNs), i.e., neurons will only be active if driven by continuous spiking trains, employs all-or-none pulses (spikes) to transmit information, contributing to sparseness in neuron activations. In this article, we propose two event-driven IP learning rules, namely, input-driven and self-driven IP, based on basic IP learning. Input-driven means that IP updating occurs only when the neuron receives spiking inputs from its presynaptic neurons, whereas self-driven means that IP updating only occurs when the neuron generates a spike. A spiking convolutional neural network (SCNN) is developed based on the ANN2SNN conversion method, i.e., converting a well-trained rate-based artificial neural network to an SNN via directly mapping the connection weights. By comparing the computational performance of SCNNs with different IP rules on the recognition of MNIST, FashionMNIST, Cifar10, and SVHN datasets, we demonstrate that the two event-based IP rules can remarkably reduce IP updating operations, contributing to sparse computations and accelerating the recognition process. This work may give insights into the modeling of brain-inspired SNNs for low-power applications. © 2012 IEEE.
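
The gating conditions described in the abstract (update only when a neuron receives a presynaptic spike, or only when it emits one) can be sketched in a few lines of code. The Python sketch below is an illustrative assumption, not the authors' implementation: it uses a generic integrate-and-fire layer with unit input weights and a placeholder threshold update (eta * (fired - target_rate)) purely to show where each event-driven condition gates the IP update.

import numpy as np

def ip_step(threshold, v_mem, in_spikes, mode="basic", eta=0.01, target_rate=0.05):
    # One step for a layer of integrate-and-fire neurons with unit input weights.
    #   threshold : per-neuron firing thresholds (adapted by IP)
    #   v_mem     : per-neuron membrane potentials
    #   in_spikes : 0/1 vector, whether each neuron received a presynaptic spike
    #   mode      : "basic" -> IP update at every step (the non-event-driven rule)
    #               "input" -> input-driven: update only neurons that received a spike
    #               "self"  -> self-driven: update only neurons that just fired
    v_mem = v_mem + in_spikes                        # integrate incoming spikes
    fired = (v_mem >= threshold).astype(float)       # all-or-none output spikes
    v_mem = np.where(fired > 0, 0.0, v_mem)          # reset neurons that fired

    if mode == "basic":
        gate = np.ones_like(threshold)
    elif mode == "input":
        gate = (in_spikes > 0).astype(float)
    elif mode == "self":
        gate = fired
    else:
        raise ValueError(mode)

    # Placeholder IP rule (an assumption, not the paper's formulation):
    # nudge the threshold so the firing rate drifts toward target_rate,
    # skipping the update wherever gate == 0.
    threshold = threshold + gate * eta * (fired - target_rate)
    return threshold, v_mem, fired

# Example call, self-driven mode for four neurons:
# th, v, s = ip_step(np.full(4, 1.0),
#                    np.array([0.2, 0.1, 0.9, 0.3]),
#                    np.array([1, 0, 1, 0]), mode="self")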

Keyword:

Computational efficiency; Convolution; Convolutional neural networks; Energy efficiency; Energy utilization; Firing (of materials); Internet protocols; Neurons

Community:

  • [ 1 ] [Zhang, Anguo] Key Laboratory of Medical Instrumentation and Pharmaceutical Technology of Fujian Province, Fuzhou 350116, China
  • [ 2 ] [Zhang, Anguo] Research Institute of Ruijie, Ruijie Networks Company Ltd., Fuzhou 350002, China
  • [ 3 ] [Li, Xiumin] College of Automation, Chongqing University, Chongqing 400030, China
  • [ 4 ] [Gao, Yueming] Key Laboratory of Medical Instrumentation and Pharmaceutical Technology of Fujian Province, Fuzhou 350116, China
  • [ 5 ] [Gao, Yueming] College of Physics and Information Engineering, Fuzhou University, Fuzhou 350108, China
  • [ 6 ] [Niu, Yuzhen] Fujian Key Laboratory of Network Computing and Intelligent Information Processing, College of Mathematics and Computer Science, Fuzhou University, Fujian 350108, China
  • [ 7 ] [Niu, Yuzhen] Key Laboratory of Spatial Data Mining and Information Sharing, Ministry of Education, Fujian 350108, China

Reprint's Address:

Email:


Related Keywords:

Related Article:

Source:

IEEE Transactions on Neural Networks and Learning Systems

ISSN: 2162-237X

Year: 2022

Issue: 5

Volume: 33

Page: 1986-1995

10.4 (JCR@2022)

10.200 (JCR@2023)

ESI HC Threshold: 61

JCR Journal Grade: 1

CAS Journal Grade: 1

Cited Count:

WoS CC Cited Count:

SCOPUS Cited Count: 26

ESI Highly Cited Papers on the List: 0

WanFang Cited Count:

Chinese Cited Count:

30 Days PV: 2

Affiliated Colleges:
