
Author:

Ding, Jianchuan [1] | Gao, Lingping [2] | Liu, Wenxi [3] | Piao, Haiyin [4] | Pan, Jia [5] | Du, Zhenjun [6] | Yang, Xin [7] | Yin, Baocai [8]

Indexed by:

EI; Scopus; SCIE

Abstract:

Deep reinforcement learning has achieved great success in laser-based collision avoidance because a laser can sense accurate depth information without much redundant data, which preserves the robustness of the algorithm when it migrates from the simulation environment to the real world. However, high-cost laser devices are not only difficult to deploy across large fleets of robots but also show unsatisfactory robustness toward complex obstacles, including irregular obstacles such as tables, chairs, and shelves, as well as complex ground surfaces and special materials. In this paper, we propose a novel monocular camera-based complex obstacle avoidance framework. In particular, we transform the captured RGB images into pseudo-laser measurements for efficient deep reinforcement learning. Compared to a traditional laser measurement captured at a fixed height, which contains only one-dimensional distance information to neighboring obstacles, our proposed pseudo-laser measurement fuses the depth and semantic information of the captured RGB image, which makes our method effective for complex obstacles. We also design a feature extraction guidance module to weight the input pseudo-laser measurement so that the agent attends more appropriately to the current state, which improves the accuracy and efficiency of the obstacle avoidance policy. In addition, we adaptively add synthesized noise to the laser measurement during the training stage to narrow the sim-to-real gap and increase the robustness of our model in real environments. Finally, experimental results show that our framework achieves state-of-the-art performance in several virtual and real-world scenarios.
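The two core ideas in the abstract, collapsing an image into a 1-D pseudo-laser scan and perturbing that scan with noise during training, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names `pseudo_laser` and `add_training_noise` are hypothetical, the paper fuses depth and semantics with a learned pipeline rather than the simple column-minimum used here, and its noise model is adaptive rather than fixed-sigma Gaussian.

```python
import numpy as np

def pseudo_laser(depth, obstacle_mask, max_range=10.0):
    """Collapse an H x W depth map into a 1-D pseudo-laser scan.

    For each image column, take the nearest depth among pixels that the
    semantic mask flags as obstacle; columns with no obstacle pixels
    read max_range (free space). A simplified stand-in for the paper's
    depth+semantics fusion.
    """
    d = np.where(obstacle_mask, depth, np.inf)  # ignore non-obstacle pixels
    scan = d.min(axis=0)                        # nearest obstacle per column
    return np.clip(scan, 0.0, max_range)

def add_training_noise(scan, sigma=0.05, rng=None):
    """Perturb the scan with Gaussian noise during training, a common way
    to narrow the sim-to-real gap (the paper adapts the noise instead)."""
    if rng is None:
        rng = np.random.default_rng()
    return np.clip(scan + rng.normal(0.0, sigma, scan.shape), 0.0, None)
```

A scan built this way has one range value per image column, so it can feed the same policy-network input layer a real planar laser scan would.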

Keyword:

Cameras; Collision avoidance; Deep reinforcement learning; Measurement by laser beam; Obstacle avoidance; Robot navigation; Robots; Robot sensing systems; Robot vision; Semantics; Sensors

Community:

  • [ 1 ] [Ding, Jianchuan]Dalian Univ Technol, Sch Comp Sci, Dalian 116024, Peoples R China
  • [ 2 ] [Gao, Lingping]Dalian Univ Technol, Sch Comp Sci, Dalian 116024, Peoples R China
  • [ 3 ] [Ding, Jianchuan]Hebei Univ Water Resources & Elect Engn, Sch Comp Sci & Informat Engn, Cangzhou 061016, Peoples R China
  • [ 4 ] [Gao, Lingping]Alibaba Grp, Hangzhou 310000, Peoples R China
  • [ 5 ] [Liu, Wenxi]Fuzhou Univ, Coll Math & Comp Sci, Fuzhou 350108, Peoples R China
  • [ 6 ] [Piao, Haiyin]Northwestern Polytech Univ, Sch Elect & Informat, Xian 710072, Peoples R China
  • [ 7 ] [Pan, Jia]Univ Hong Kong, Dept Comp Sci, Hong Kong, Peoples R China
  • [ 8 ] [Du, Zhenjun]SIASUN Robot & Automat Co Ltd, Shenyang 110168, Peoples R China
  • [ 9 ] [Yang, Xin]Dalian Univ Technol, Dalian 116024, Peoples R China
  • [ 10 ] [Yin, Baocai]Dalian Univ Technol, Dalian 116024, Peoples R China

Reprint Address:

  • [Yang, Xin]Dalian Univ Technol, Dalian 116024, Peoples R China



Source:

IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY

ISSN: 1051-8215

Year: 2023

Issue: 2

Volume: 33

Page: 756-770

Impact Factor: 8.3 (JCR@2023)

ESI Discipline: ENGINEERING

ESI HC Threshold: 35

JCR Journal Grade: 1

CAS Journal Grade: 1

Cited Count:

WoS CC Cited Count: 6

SCOPUS Cited Count: 7

ESI Highly Cited Papers on the List: 0

