Abstract:
The distortion introduced by different projections (e.g., ERP, CUBE) poses a great challenge to the depth estimation task for 360° images. We propose a novel approach, named OlaNet, to solve the self-supervised 360° depth estimation problem. Our method is motivated by two observations: 1) the content of 360° imagery can be better learned by effective field-of-view techniques, i.e., atrous spatial pyramid pooling combined with a projection coordinate prior; 2) the L1 norm can learn a more robust and sparse representation than the L2 norm in the smoothness regularization of depth estimation. Building on these two observations, we develop an end-to-end network that adopts distortion-aware view synthesis, atrous spatial pyramid pooling, and an L1-norm regularized smoothness term to achieve 360° depth estimation effectively and robustly. Extensive experiments on the 3D60 dataset demonstrate the superior performance of our OlaNet approach in comparison with state-of-the-art methods. © 2021 IEEE
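The abstract mentions an L1-norm regularized smoothness term on the predicted depth. As a rough illustration only (the paper's exact formulation is not given on this page), a common edge-aware L1 smoothness loss could be sketched in PyTorch as below; the function name, tensor shapes, and edge weighting are assumptions, not OlaNet's actual loss.

```python
import torch

def l1_smoothness_loss(depth, image):
    """Hypothetical edge-aware L1 smoothness term (sketch, not the paper's exact loss).

    Penalizes the absolute value (L1 norm) of first-order depth gradients,
    down-weighted where the input image has strong edges, so the predicted
    depth is encouraged to be piecewise smooth.
    depth: (B, 1, H, W), image: (B, 3, H, W)
    """
    # First-order depth gradients along x and y.
    d_dx = torch.abs(depth[:, :, :, :-1] - depth[:, :, :, 1:])
    d_dy = torch.abs(depth[:, :, :-1, :] - depth[:, :, 1:, :])

    # Image gradients used as edge-aware weights (mean over color channels).
    i_dx = torch.mean(torch.abs(image[:, :, :, :-1] - image[:, :, :, 1:]), dim=1, keepdim=True)
    i_dy = torch.mean(torch.abs(image[:, :, :-1, :] - image[:, :, 1:, :]), dim=1, keepdim=True)

    # L1 penalty on depth gradients, attenuated at image edges.
    return (d_dx * torch.exp(-i_dx)).mean() + (d_dy * torch.exp(-i_dy)).mean()
```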
ISSN: 1945-7871
Year: 2021
Language: English
SCOPUS Cited Count: 7
ESI Highly Cited Papers on the List: 0