Abstract:
Deep learning is currently regarded as the artificial intelligence model closest to the human brain. It learns multiple levels of representation and abstraction that help make sense of data such as images, sound, and text. A deep model typically consists of a hierarchical architecture capable of modeling highly non-linear and stochastic problems. The Restricted Boltzmann Machine (RBM) is the main building block of current deep networks, as most deep architectures are constructed from it. Based on the MapReduce framework and the Hadoop distributed file system, this paper proposes a distributed algorithm for training the RBM model. Its implementation and performance are evaluated on the Big Data platform Hadoop. The main contribution of the new learning algorithm is that it addresses the scalability problem that limits the development of deep learning. The process by which human intelligence grows requires learning from Big Data, and the distributed learning mechanism for the RBM makes it possible to extract sophisticated and informative features from Big Data and thereby achieve high-level intelligence. The proposed learning algorithm is evaluated on image inpainting and classification problems using the BAS dataset and the MNIST hand-written digits dataset. (C) 2016 Elsevier B.V. All rights reserved.
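The abstract does not spell out the training procedure, but a distributed RBM learner of this kind is typically built around contrastive divergence (CD-1), with each mapper computing gradient statistics on its own data shard and a reducer averaging them into a single weight update. The sketch below is a minimal, single-machine illustration of that map/reduce split in plain NumPy; the shard handling, function names, and hyperparameters are assumptions made for illustration, not the paper's Hadoop implementation.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_statistics(batch, W, b_vis, b_hid, rng):
    # "Map" step: CD-1 sufficient statistics for one data shard.
    # Positive phase: hidden probabilities given the data.
    h_prob = sigmoid(batch @ W + b_hid)
    h_sample = (rng.random(h_prob.shape) < h_prob).astype(float)
    # Negative phase: one Gibbs step back to a reconstruction.
    v_recon = sigmoid(h_sample @ W.T + b_vis)
    h_recon = sigmoid(v_recon @ W + b_hid)
    n = batch.shape[0]
    dW = (batch.T @ h_prob - v_recon.T @ h_recon) / n
    db_vis = (batch - v_recon).mean(axis=0)
    db_hid = (h_prob - h_recon).mean(axis=0)
    return dW, db_vis, db_hid

def train_rbm_distributed(shards, n_hidden=64, epochs=10, lr=0.05, seed=0):
    # "Reduce" step: average per-shard gradients and apply one update.
    rng = np.random.default_rng(seed)
    n_visible = shards[0].shape[1]
    W = 0.01 * rng.standard_normal((n_visible, n_hidden))
    b_vis = np.zeros(n_visible)
    b_hid = np.zeros(n_hidden)
    for _ in range(epochs):
        stats = [cd1_statistics(s, W, b_vis, b_hid, rng) for s in shards]  # map
        dW = np.mean([s[0] for s in stats], axis=0)                        # reduce
        db_vis = np.mean([s[1] for s in stats], axis=0)
        db_hid = np.mean([s[2] for s in stats], axis=0)
        W += lr * dW
        b_vis += lr * db_vis
        b_hid += lr * db_hid
    return W, b_vis, b_hid

if __name__ == "__main__":
    # Toy usage: four shards of random binary "images" standing in for a dataset.
    rng = np.random.default_rng(1)
    data = (rng.random((400, 36)) > 0.5).astype(float)
    shards = np.array_split(data, 4)
    W, b_vis, b_hid = train_rbm_distributed(shards)
    print("learned weight matrix shape:", W.shape)

In an actual MapReduce deployment the per-shard statistics would be emitted by mapper tasks over HDFS splits and combined by a reducer, but the gradient arithmetic stays the same as in this sketch.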
Source: NEUROCOMPUTING
ISSN: 0925-2312
Year: 2016
Volume: 198
Page: 4-11
Impact Factor: 3.317 (JCR@2016), 5.500 (JCR@2023)
ESI Discipline: COMPUTER SCIENCE
ESI HC Threshold: 175
JCR Journal Grade: 1
CAS Journal Grade: 3
Cited Count:
WoS CC Cited Count: 25
SCOPUS Cited Count: 30
ESI Highly Cited Papers on the List: 0