Abstract:
The variety of multimedia big data has promoted emerging applications of multiple sensorial media (mulsemedia) types, in which haptic information attracts increasing attention. Until now, the interaction between haptic signals and conventional audio-visual signals has not been fully investigated. In this work, we explore cross-modal interactivity in task-driven scenarios. We first examine the correlation between visual attention and haptic control in three designed tasks: random-trajectory, fixed-trajectory, and obstacle-avoidance. We then propose a visual-haptic interaction model that estimates the kinesthetic position of haptic control from gaze information alone. By incorporating a Long Short-Term Memory (LSTM) neural network, the proposed model provides effective prediction in the fixed-trajectory and obstacle-avoidance scenarios, outperforming other selected machine learning-based models. To further examine our model, we execute it in a haptic control task using visual guidance. Implementation results show a high task achievement rate. © 2019 IEEE.
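The abstract describes mapping a gaze sequence to an estimated haptic-control position with an LSTM. A minimal numpy sketch of that idea is below; the hidden size, weight initialisation, sequence length, and 2-D input/output dimensions are all illustrative assumptions, not values from the paper, and the weights stand in for trained parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

GAZE_DIM = 2   # assumed: (x, y) gaze coordinates per frame
HIDDEN = 16    # assumed hidden size, not stated in the abstract
OUT_DIM = 2    # assumed: predicted 2-D kinesthetic position

# Randomly initialised LSTM weights (stand-ins for trained parameters).
W = rng.normal(0.0, 0.1, (4 * HIDDEN, GAZE_DIM + HIDDEN))
b = np.zeros(4 * HIDDEN)
W_out = rng.normal(0.0, 0.1, (OUT_DIM, HIDDEN))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def predict_position(gaze_seq):
    """Run an LSTM over a gaze sequence and map the final hidden
    state to an estimated haptic-control position."""
    h = np.zeros(HIDDEN)
    c = np.zeros(HIDDEN)
    for g in gaze_seq:
        # One LSTM step: gates computed from [input; previous hidden].
        z = W @ np.concatenate([g, h]) + b
        i, f, o, u = np.split(z, 4)
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
        c = f * c + i * np.tanh(u)   # update cell state
        h = o * np.tanh(c)           # update hidden state
    return W_out @ h                  # linear readout to position

gaze = rng.normal(0.0, 1.0, (30, GAZE_DIM))  # 30 frames of synthetic gaze
pos = predict_position(gaze)
print(pos.shape)
```

In practice the network would be trained on recorded gaze/haptic pairs from the three tasks; this sketch only shows the inference-time data flow from gaze to position.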
Source:
Proceedings - 2019 IEEE 5th International Conference on Multimedia Big Data, BigMM 2019
Year: 2019
Page: 111-117
Language: English
SCOPUS Cited Count: 5
ESI Highly Cited Papers on the List: 0