Abstract:
Haptic data compression has become a key issue for emerging real-time haptic communication over the Tactile Internet (TI). However, it is challenging for a haptic compression scheme to balance high perceptual quality against compression ratio. Inspired by the perspective of embodied AI, in this paper we propose a cross-modal haptic compression scheme for haptic communications that improves perceptual quality on TI devices. Since multimodal fusion is routinely employed to improve a system's cognitive ability, we let the haptic codec be guided by visual semantics to optimize parameter settings during coding. We first design a multi-dimensional tactile feature fusion network (MTFFN) based on a multi-head attention mechanism. The MTFFN extracts multi-dimensional features from the material surface and maps them to infer the coding parameters. Second, we apply second-order differences and linear interpolation to establish a criterion for determining the optimal codec parameters, which are customized per material category for high robustness. Finally, simulation results show that our compression scheme efficiently produces a personalized codec procedure for different materials, improving the compression ratio by more than 17% while maintaining high perceptual quality. © 2025 IEEE.
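The abstract's criterion of combining second-order differences with linear interpolation to locate optimal codec parameters can be illustrated with a minimal knee-finding sketch. This is an assumption-laden illustration, not the paper's exact method: the function name `select_codec_parameter`, the 10x densification factor, and the rule "knee = peak of discrete curvature" are all hypothetical choices.

```python
import numpy as np

def select_codec_parameter(params, quality):
    """Pick the codec parameter at the 'knee' of a quality curve.

    Hypothetical sketch: the sampled quality-vs-parameter curve is
    densified by linear interpolation, and the knee is taken where the
    magnitude of the second-order difference (discrete curvature) peaks.
    """
    p = np.asarray(params, dtype=float)
    q = np.asarray(quality, dtype=float)
    # Densify the measured curve between sampled parameter values.
    p_fine = np.linspace(p[0], p[-1], 10 * len(p))
    q_fine = np.interp(p_fine, p, q)
    # d2[k] approximates curvature centred at fine index k + 1.
    d2 = np.diff(q_fine, n=2)
    knee = int(np.argmax(np.abs(d2))) + 1
    return float(p_fine[knee])

# Example: quality drops steeply up to parameter 4, then flattens,
# so the selected parameter lands near the bend at 4.
best = select_codec_parameter([1, 2, 3, 4, 5], [10.0, 9.0, 8.0, 4.0, 3.9])
```

A per-material version of this criterion would simply run the same selection on each material category's measured quality curve, which matches the abstract's claim that the codec parameters are customized by material.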
Source: IEEE Transactions on Multimedia
ISSN: 1520-9210
Year: 2025
Impact Factor: 8.400 (JCR@2023)
CAS Journal Grade: 1