Abstract:
Videos have become widespread because they are easy to capture and share via social platforms, and event recognition in video has accordingly gained growing attention in computer vision. It is a hard task, mainly because the complexity and diversity of video events make it difficult to extract meaningful spatiotemporal features. Many proposed networks learn spatial and temporal features separately. In this paper, we propose a simple yet effective approach to spatiotemporal feature learning: a deep spatiotemporal neural network based on 3D convolution. The architecture is shown in Fig. 1. The network captures motion information across multiple adjacent frames and appearance information simultaneously. Most well-known 2D CNNs follow a regular pattern: convolution kernels are larger in the earlier layers and the number of channels increases in later layers, as in AlexNet. After weighing the alternatives, we instead stack two consecutive convolutional layers with small kernels in place of a single convolutional layer with a larger kernel. We carry out experiments on the KTH dataset and evaluate using 5-fold cross-validation. We also introduce two simple methods that increase the amount of training data and improve performance. Experimental results show that our model achieves an accuracy of 95.33% on the KTH dataset, and comparisons with other algorithms, including hand-crafted methods and other CNNs, further demonstrate that it is a general and effective architecture. © 2018 IEEE.
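The abstract's central design choice, replacing one large-kernel convolution with two stacked small-kernel 3D convolutions, can be illustrated with a minimal sketch. This is not the authors' released code; it assumes PyTorch, and the channel counts, kernel sizes, and clip dimensions are illustrative assumptions only.

import torch
import torch.nn as nn

class StackedConv3DBlock(nn.Module):
    """Sketch of the idea from the abstract: two consecutive 3x3x3
    convolutions cover the same receptive field as a single 5x5x5
    convolution while using fewer parameters and adding an extra
    nonlinearity. All sizes here are illustrative, not the paper's."""

    def __init__(self, in_channels: int, out_channels: int):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv3d(in_channels, out_channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(out_channels, out_channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, frames, height, width)
        return self.block(x)

# Example input: a clip of 16 RGB frames at 112x112, a common shape
# for C3D-style networks (assumed here, not stated in the abstract).
clip = torch.randn(1, 3, 16, 112, 112)
features = StackedConv3DBlock(3, 64)(clip)
print(features.shape)  # torch.Size([1, 64, 16, 112, 112])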
Source:
8th Annual IEEE International Conference on Cyber Technology in Automation, Control and Intelligent Systems, CYBER 2018
Year: 2019
Page: 45-50
Language: English
Cited Count:
WoS CC Cited Count: 0
SCOPUS Cited Count: 1
ESI Highly Cited Papers on the List: 0