Abstract:
We address the problem of localizing waste objects from a color image and an optional depth image, a key perception component for robotic interaction with such objects. Specifically, our method integrates intensity and depth information at multiple levels of spatial granularity. First, a scene-level deep network produces an initial coarse segmentation, from which we select a few potential object regions to zoom in on and segment finely. The results of both steps are then integrated into a densely connected conditional random field (CRF) that learns to respect appearance, depth, and spatial affinities with pixel-level accuracy. In addition, we create a new RGBD waste object segmentation dataset, MJU-Waste, which we make public to facilitate future research in this area. The efficacy of our method is validated on both MJU-Waste and the Trash Annotation in Context (TACO) dataset.
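To make the final refinement stage concrete, here is a minimal sketch (not the authors' released code) of dense CRF post-processing using the pydensecrf library. It assumes the coarse scene-level and zoomed-in object-level predictions have already been fused into a single per-pixel softmax map `probs`; all kernel widths, channel scales, and compatibility weights below are illustrative placeholders.

```python
import numpy as np
import pydensecrf.densecrf as dcrf
from pydensecrf.utils import (unary_from_softmax,
                              create_pairwise_gaussian,
                              create_pairwise_bilateral)

def crf_refine(probs, rgb, depth, n_iters=5):
    """Refine fused per-pixel class probabilities with a dense CRF.

    probs: (n_classes, H, W) float32 softmax output of the network(s).
    rgb:   (H, W, 3) color image.
    depth: (H, W) depth map, stacked with color as a fourth channel.
    """
    n_classes, H, W = probs.shape
    d = dcrf.DenseCRF(H * W, n_classes)

    # Unary term: negative log-probabilities from the segmentation networks.
    d.setUnaryEnergy(unary_from_softmax(probs))

    # Spatial smoothness kernel over pixel positions only.
    d.addPairwiseEnergy(
        create_pairwise_gaussian(sdims=(3, 3), shape=(H, W)), compat=3)

    # Appearance + depth kernel: nearby pixels with similar color and
    # similar depth are encouraged to take the same label.
    rgbd = np.dstack([rgb, depth]).astype(np.float32)
    d.addPairwiseEnergy(
        create_pairwise_bilateral(sdims=(50, 50),
                                  schan=(13, 13, 13, 0.5),  # per-channel scales
                                  img=rgbd, chdim=2),
        compat=10)

    q = d.inference(n_iters)  # mean-field inference
    return np.asarray(q).reshape(n_classes, H, W).argmax(axis=0)
```

The design point the abstract describes is the bilateral kernel: by stacking depth with color as a fourth feature channel, pixels must agree in position, appearance, and depth before the CRF smooths them toward a shared label.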
Source: SENSORS
ISSN: 1424-8220
Year: 2020
Volume: 20
Issue: 14
Impact Factor: 3.576 (JCR@2020); 3.400 (JCR@2023)
ESI Discipline: CHEMISTRY
ESI HC Threshold: 160
JCR Journal Grade: 1
CAS Journal Grade: 2
Cited Count:
WoS CC Cited Count: 42
ESI Highly Cited Papers on the List: 0