Abstract:
A-mode ultrasound offers high resolution, low computational cost, and low hardware cost for recognizing dexterous hand gestures. To promote the adoption of A-mode ultrasound gesture recognition, we designed a human-machine interface that interacts with the user in real time. Data processing comprises Gaussian filtering, feature extraction, and PCA dimensionality reduction. Naive Bayes (NB), linear discriminant analysis (LDA), and support vector machine (SVM) algorithms were selected to train the machine learning models, and the whole pipeline was implemented in C++ to classify gestures in real time. Offline and real-time experiments were conducted on HMI-A (a human-machine interface based on A-mode ultrasound) with ten subjects and ten common gestures. To demonstrate the effectiveness of HMI-A and rule out accidental interference, the offline experiment collected ten rounds of gestures from each subject for ten-fold cross-validation; the resulting offline recognition accuracy was 96.92% ± 1.92%. The real-time experiment was evaluated with four online performance metrics: action selection time, action completion time, action completion rate, and real-time recognition accuracy. The action completion rate was 96.0% ± 3.6%, and the real-time recognition accuracy was 83.8% ± 6.9%. This study verifies the potential of wearable A-mode ultrasound and opens a wider range of application scenarios for gesture recognition.
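The processing chain named in the abstract (Gaussian filtering, feature extraction, PCA projection, and a linear classifier such as LDA, all in C++) can be pictured with a minimal per-frame sketch. Everything below is illustrative: the frame length, the segment-mean features, and the identity-like placeholders for the PCA and classifier parameters are assumptions for the sake of a self-contained example, not the authors' implementation.

// Minimal per-frame pipeline sketch: Gaussian smoothing of one A-mode echo
// frame, toy segment-mean features, projection onto precomputed PCA
// components, and a linear (LDA-style) classifier. Model matrices are
// placeholders; a real system would load parameters trained offline.
#include <cmath>
#include <cstddef>
#include <cstdio>
#include <vector>

// 1-D Gaussian smoothing with a truncated (3-sigma) kernel.
std::vector<double> gaussianFilter(const std::vector<double>& x, double sigma) {
    const int r = static_cast<int>(std::ceil(3.0 * sigma));
    std::vector<double> k(2 * r + 1);
    double sum = 0.0;
    for (int i = -r; i <= r; ++i) {
        k[i + r] = std::exp(-(i * i) / (2.0 * sigma * sigma));
        sum += k[i + r];
    }
    for (double& v : k) v /= sum;  // normalize to unit gain
    std::vector<double> y(x.size(), 0.0);
    for (int n = 0; n < static_cast<int>(x.size()); ++n)
        for (int i = -r; i <= r; ++i)
            if (n + i >= 0 && n + i < static_cast<int>(x.size()))
                y[n] += k[i + r] * x[n + i];
    return y;
}

// Toy features: mean absolute amplitude of equal-length depth segments.
std::vector<double> extractFeatures(const std::vector<double>& f, std::size_t seg) {
    const std::size_t len = f.size() / seg;
    std::vector<double> feat(seg, 0.0);
    for (std::size_t s = 0; s < seg; ++s) {
        for (std::size_t i = s * len; i < (s + 1) * len; ++i)
            feat[s] += std::fabs(f[i]);
        feat[s] /= static_cast<double>(len);
    }
    return feat;
}

// Project mean-centered features onto precomputed PCA components (one per row).
std::vector<double> pcaProject(const std::vector<double>& feat,
                               const std::vector<std::vector<double>>& pc,
                               const std::vector<double>& mean) {
    std::vector<double> z(pc.size(), 0.0);
    for (std::size_t c = 0; c < pc.size(); ++c)
        for (std::size_t i = 0; i < feat.size(); ++i)
            z[c] += pc[c][i] * (feat[i] - mean[i]);
    return z;
}

// Linear classifier (e.g. an LDA model trained offline): argmax_c (w_c . z + b_c).
int classify(const std::vector<double>& z,
             const std::vector<std::vector<double>>& w,
             const std::vector<double>& b) {
    int best = 0;
    double bestScore = -1e300;
    for (std::size_t c = 0; c < w.size(); ++c) {
        double s = b[c];
        for (std::size_t i = 0; i < z.size(); ++i) s += w[c][i] * z[i];
        if (s > bestScore) { bestScore = s; best = static_cast<int>(c); }
    }
    return best;
}

int main() {
    std::vector<double> frame(256);  // dummy echo frame standing in for one channel
    for (std::size_t n = 0; n < frame.size(); ++n) frame[n] = std::sin(0.1 * n);
    std::vector<double> feat = extractFeatures(gaussianFilter(frame, 2.0), 8);
    // Identity-like placeholders for the PCA components and class weights.
    std::vector<std::vector<double>> pc(3, std::vector<double>(feat.size(), 0.0));
    for (int c = 0; c < 3; ++c) pc[c][c] = 1.0;
    std::vector<double> z = pcaProject(feat, pc, std::vector<double>(feat.size(), 0.0));
    std::vector<std::vector<double>> w(10, std::vector<double>(z.size(), 0.1));
    std::printf("predicted gesture class: %d\n",
                classify(z, w, std::vector<double>(10, 0.0)));
    return 0;
}

In a real-time system of this kind, the per-frame loop in main would run once per ultrasound frame per channel, with the PCA components and classifier weights presumably coming from the offline training stage (the ten-fold cross-validation described above); here they are placeholders so the snippet compiles and runs on its own.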
Source: IEEE TRANSACTIONS ON NEURAL SYSTEMS AND REHABILITATION ENGINEERING
ISSN: 1534-4320
Year: 2022
Volume: 30
Page: 2623-2629
Impact Factor: 4.9 (JCR@2022), 4.800 (JCR@2023)
ESI Discipline: ENGINEERING
ESI HC Threshold: 66
JCR Journal Grade: 1
CAS Journal Grade: 2
Cited Count:
WoS CC Cited Count: 19
SCOPUS Cited Count: 21
ESI Highly Cited Papers on the List: 0