Abstract:
Service robots can already execute explicit, simple tasks assigned by humans, but they still lack the human ability to analyze an assigned task and ask questions to acquire supplementary information that resolves ambiguities in the environment. Motivated by this, we fuse verbal language and pointing-gesture information to enable a robot to execute a vague task such as 'bring me the book'. In this paper, we propose a system that integrates human-robot dialogue, mapping, and action-execution planning in unknown 3D environments. We ground natural language commands to a sequence of low-level instructions that the robot can execute. To express the target's location, as pointed to by the user, in a global fixed frame, we use a SLAM approach to build a map of the environment. Experimental results demonstrate that a NAO humanoid robot can acquire this skill in unknown environments using the proposed approach. © 2016 IEEE.
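The step of expressing the pointed-to target in a global fixed frame amounts to a standard rigid-body transform: the SLAM system provides the robot's pose in the map frame, and the gesture-derived target position in the robot's frame is mapped through it. A minimal sketch follows; the function name, pose, and coordinates are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def transform_to_global(T_map_robot, p_robot):
    """Express a point observed in the robot's frame in the global map frame.

    T_map_robot: 4x4 homogeneous pose of the robot in the SLAM map frame.
    p_robot: 3-vector target position (e.g. recovered from the pointing
             gesture) expressed in the robot's frame.
    """
    p_h = np.append(p_robot, 1.0)        # homogeneous coordinates
    return (T_map_robot @ p_h)[:3]       # rotate + translate, drop the 1

# Hypothetical example: robot at (2, 1, 0) in the map, yawed 90° about z.
c, s = np.cos(np.pi / 2), np.sin(np.pi / 2)
T = np.array([[c, -s, 0, 2.0],
              [s,  c, 0, 1.0],
              [0,  0, 1, 0.0],
              [0,  0, 0, 1.0]])
# A target 1 m ahead of the robot lands at (2, 2, 0) in the map frame.
target_global = transform_to_global(T, np.array([1.0, 0.0, 0.0]))
```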
Year: 2016
Page: 1918-1923
Language: English
SCOPUS Cited Count: 4
ESI Highly Cited Papers on the List: 0