
Authors:

Zhang, Chun-Yang [1] | Xiao, Yong-Yi [2] | Lin, Jin-Cheng [3] | Chen, C. L. Philip [4] | Liu, Wenxi [5] | Tong, Yu-Hong [6]

Indexed by:

EI

Abstract:

Data representation learning is one of the most important problems in machine learning. Unsupervised representation learning is attractive because it requires no label information for the observed data. Since training deep-learning models is highly time-consuming, many machine-learning systems directly adapt well-trained deep models, obtained in a supervised and end-to-end manner, as feature extractors for distinct problems. However, different machine-learning tasks clearly require disparate representations of the original input data. Taking human action recognition as an example, human actions in a video sequence are 3-D signals containing both the visual appearance and the motion dynamics of humans and objects. Therefore, data representation approaches that can capture both spatial and temporal correlations in videos are meaningful. Most existing human motion recognition models build classifiers on deep-learning structures such as deep convolutional networks. These models require a large quantity of training videos with annotations, and such supervised models cannot recognize samples from a distinct dataset without retraining. In this article, we propose a new 3-D deconvolutional network (3DDN) for representation learning of high-dimensional video data, in which the high-level features are obtained through an optimization approach. The proposed 3DDN decomposes video frames into spatiotemporal features under a sparse constraint in an unsupervised way. It can also be regarded as a building block for developing deep architectures by stacking. Because the high-level representation of input sequential data can be used in multiple downstream machine-learning tasks, we evaluate the proposed 3DDN and its deep models on human action recognition.
The experimental results on three datasets: 1) KTH; 2) HMDB-51; and 3) UCF-101, demonstrate that the proposed 3DDN is an alternative to feedforward convolutional neural networks (CNNs) that attains comparable results. © 2013 IEEE.
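The core operation the abstract describes, reconstructing a video volume from sparse spatiotemporal feature maps via 3-D deconvolution (transposed convolution), can be sketched as below. This is a minimal single-filter illustration under assumed shapes, not the paper's implementation; the function name `deconv3d` and all dimensions are illustrative.

```python
import numpy as np

def deconv3d(feature, kernel):
    """Transposed 3-D convolution: scatter each feature activation,
    weighted by the spatiotemporal kernel, into the output volume."""
    ft, fh, fw = feature.shape
    kt, kh, kw = kernel.shape
    out = np.zeros((ft + kt - 1, fh + kh - 1, fw + kw - 1))
    for t in range(ft):
        for i in range(fh):
            for j in range(fw):
                if feature[t, i, j] != 0.0:  # sparsity: most entries contribute nothing
                    out[t:t + kt, i:i + kh, j:j + kw] += feature[t, i, j] * kernel
    return out

rng = np.random.default_rng(0)
kernel = rng.standard_normal((3, 5, 5))        # one spatiotemporal basis filter
feature = np.zeros((6, 12, 12))                # sparse feature map (time, height, width)
idx = rng.choice(feature.size, size=8, replace=False)
feature.flat[idx] = rng.standard_normal(8)     # only 8 nonzero activations
video = deconv3d(feature, kernel)              # reconstructed video volume
print(video.shape)                             # (8, 16, 16)
```

In the full model, a bank of such filters would be summed and both the filters and the sparse feature maps would be fit by optimization so that the reconstruction matches the observed clip; stacking this block would yield the deep architectures the abstract mentions.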

Keyword:

Convolution; Deep learning; Feedforward neural networks; Motion estimation; Unsupervised learning; Video recording

Affiliations:

  • [1] Zhang, Chun-Yang — School of Mathematics and Computer Science, Fuzhou University, Fuzhou, China
  • [2] Xiao, Yong-Yi — School of Mathematics and Computer Science, Fuzhou University, Fuzhou, China
  • [3] Lin, Jin-Cheng — School of Mathematics and Computer Science, Fuzhou University, Fuzhou, China
  • [4] Chen, C. L. Philip — School of Computer Science and Engineering, South China University of Technology, Guangzhou, China
  • [5] Liu, Wenxi — School of Mathematics and Computer Science, Fuzhou University, Fuzhou, China
  • [6] Tong, Yu-Hong — School of Mathematics and Computer Science, Fuzhou University, Fuzhou, China


Source:

IEEE Transactions on Cybernetics

ISSN: 2168-2267

Year: 2022

Issue: 1

Volume: 52

Page: 398-410

Impact Factor: 11.8 (JCR@2022)

Impact Factor: 9.400 (JCR@2023)

ESI HC Threshold: 61

JCR Journal Grade: 1

CAS Journal Grade: 1

ESI Highly Cited Papers on the List: 0
