Abstract:
This paper presents a pruned sparse extreme learning machine (PS-ELM) algorithm, which generates a compact single-hidden-layer neural network (SLNN) by automatically pruning hidden nodes while maintaining high accuracy. In the PS-ELM algorithm, the input connections between the input and hidden layers are base vectors, which sparsely map the input features into the hidden layer via a gradient projection (GP) algorithm; the output weights between the hidden and output layers map the sparse features to class labels. The PS-ELM algorithm initializes the SLNN with a superfluous number of hidden nodes. The subsequent training process consists of four iterative steps. The first is to update the base vectors using Lagrange dual optimization. The second is to prune zero base vectors, which are considered insignificant. The third is sparse coding, which re-encodes the training samples over the remaining base vectors. The fourth is to update the output weight matrix using an ELM-like algorithm. The iterative process stops once the number of hidden nodes at the current iteration equals the number at the previous iteration. The PS-ELM algorithm improves the sparsity and distinctiveness of the hidden-layer feature representations, and the pruning does not depend on a hand-designed threshold. Experimental results on benchmark datasets show that the PS-ELM algorithm automatically arrives at a reasonably compact network structure while retaining comparable or much higher classification accuracy. © 2016 IEEE.
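The four-step training loop described in the abstract can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: all names (`ps_elm_train`, `soft_threshold_code`, etc.) are invented, and the paper's Lagrange dual dictionary update and gradient-projection sparse coding are replaced here by simpler stand-ins (a ridge least-squares dictionary update and ISTA soft-threshold coding).

```python
import numpy as np

def soft_threshold_code(X, D, lam=0.2, n_iter=50):
    """Sparse-code columns of X over base vectors D.

    ISTA stand-in for the gradient projection (GP) coding in the paper.
    """
    L = np.linalg.norm(D, 2) ** 2 + 1e-8      # Lipschitz constant of the gradient
    H = np.zeros((D.shape[1], X.shape[1]))
    for _ in range(n_iter):
        H = H - D.T @ (D @ H - X) / L                          # gradient step
        H = np.sign(H) * np.maximum(np.abs(H) - lam / L, 0.0)  # soft threshold
    return H

def ps_elm_train(X, T, n_hidden_init=32, lam=0.2, max_outer=20):
    """X: (d, n) inputs; T: (c, n) one-hot targets. Returns (D, beta)."""
    rng = np.random.default_rng(0)
    # initialize with a superfluous number of hidden nodes (base vectors)
    D = rng.standard_normal((X.shape[0], n_hidden_init))
    D /= np.linalg.norm(D, axis=0, keepdims=True)
    prev_nodes = -1
    beta = None
    for _ in range(max_outer):
        H = soft_threshold_code(X, D, lam)
        # step 1 (stand-in): ridge least-squares base-vector update,
        # in place of the paper's Lagrange dual optimization
        D = X @ H.T @ np.linalg.pinv(H @ H.T + 1e-6 * np.eye(H.shape[0]))
        # step 2: prune (near-)zero base vectors, no hand-designed threshold tuning
        norms = np.linalg.norm(D, axis=0)
        keep = norms > 1e-6
        D = D[:, keep] / np.maximum(norms[keep], 1e-12)
        # step 3: re-encode the training samples over the remaining base vectors
        H = soft_threshold_code(X, D, lam)
        # step 4: ELM-like closed-form output weights via the pseudo-inverse
        beta = T @ np.linalg.pinv(H)
        # stop once the hidden-node count is unchanged between iterations
        if D.shape[1] == prev_nodes:
            break
        prev_nodes = D.shape[1]
    return D, beta

def ps_elm_predict(X, D, beta, lam=0.2):
    """Map inputs to class scores through the sparse hidden layer."""
    return beta @ soft_threshold_code(X, D, lam)
```

The stopping rule mirrors the abstract: the outer loop terminates as soon as pruning removes no further nodes, so the final network size is determined by the data rather than by a user-chosen threshold.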
Year: 2016
Volume: 2016-October
Page: 2596-2602
Language: English
SCOPUS Cited Count: 3
ESI Highly Cited Papers on the List: 0