(Reference retrieved automatically from the Web of Science, based on the FAPESP grant number cited by the authors in the publication.)

On deep learning techniques to boost monocular depth estimation for autonomous navigation

Mendes, Raul de Queiroz [1] ; Ribeiro, Eduardo Godinho [1] ; Rosa, Nicolas dos Santos [1] ; Grassi, Jr., Valdir [1]
Total Authors: 4
[1] Univ Sao Paulo, Sao Carlos Sch Engn, Dept Comp & Elect Engn, Sao Paulo - Brazil
Total Affiliations: 1
Document type: Journal article
Web of Science Citations: 0

Inferring depth from images is a fundamental inverse problem in Computer Vision, since depth information must be recovered from 2D images that can arise from infinitely many observed real scenes. Benefiting from the progress of Convolutional Neural Networks (CNNs) in exploiting structural features and spatial image information, Single Image Depth Estimation (SIDE) is often highlighted in scientific and technological innovation, as it offers low implementation cost and robustness to environmental conditions. In the context of autonomous vehicles, state-of-the-art CNNs optimize the SIDE task by producing high-quality depth maps, which are essential for the autonomous navigation process in different locations. However, such networks are usually supervised with sparse and noisy depth data from Light Detection and Ranging (LiDAR) laser scans, and run at high computational cost, requiring high-performance Graphics Processing Units (GPUs). Therefore, we propose a new lightweight and fast supervised CNN architecture, combined with novel feature extraction models, designed for real-world autonomous navigation. We also introduce an efficient surface normals module, together with a simple geometric 2.5D loss function, to solve SIDE problems. In addition, we incorporate multiple Deep Learning techniques, such as densification algorithms and additional semantic, surface normals and depth information, to train our framework. The method introduced in this work targets robotic applications in indoor and outdoor environments, and its results are evaluated on the competitive and publicly available NYU Depth V2 and KITTI Depth datasets. (C) 2020 Elsevier B.V. All rights reserved. (AU)
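To make the idea of a geometric 2.5D supervision term concrete, the following is a minimal NumPy sketch of one common formulation: surface normals are estimated from a depth map by finite differences under an orthographic 2.5D assumption, and the loss combines an L1 depth error with a normal-consistency term. This is an illustrative assumption for how such a loss can be built, not the authors' actual module or loss function; the function names and the weighting factor `alpha` are hypothetical.

```python
import numpy as np

def normals_from_depth(depth):
    """Estimate per-pixel surface normals from a depth map via finite
    differences (illustrative orthographic 2.5D model, not the paper's module)."""
    dz_dy, dz_dx = np.gradient(depth)
    # Unnormalized normal direction (-dz/dx, -dz/dy, 1) at each pixel.
    n = np.stack([-dz_dx, -dz_dy, np.ones_like(depth)], axis=-1)
    # Normalize each normal vector to unit length.
    return n / np.linalg.norm(n, axis=-1, keepdims=True)

def geometric_25d_loss(pred_depth, gt_depth, alpha=0.5):
    """Hypothetical 2.5D loss: L1 depth error plus a surface-normal
    consistency term (1 - cosine similarity), weighted by alpha."""
    l1 = np.abs(pred_depth - gt_depth).mean()
    n_pred = normals_from_depth(pred_depth)
    n_gt = normals_from_depth(gt_depth)
    cos = (n_pred * n_gt).sum(axis=-1)          # cosine of angle between normals
    normal_term = (1.0 - cos).mean()
    return l1 + alpha * normal_term
```

The normal term penalizes predictions whose local surface orientation disagrees with the ground truth even when per-pixel depth errors are small, which is one motivation for coupling depth and normals during training.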

FAPESP's process: 14/50851-0 - INCT 2014: National Institute of Science and Technology for Cooperative Autonomous Systems Applied in Security and Environment
Grantee: Marco Henrique Terra
Support type: Research Projects - Thematic Grants