(Reference retrieved automatically from Web of Science using information on the FAPESP grant and its corresponding number as cited in the publication by the authors.)

Real-time deep learning approach to visual servo control and grasp detection for autonomous robotic manipulation

Author(s):
Ribeiro, Eduardo Godinho [1] ; Mendes, Raul de Queiroz [1] ; Grassi Jr, Valdir
Total Authors: 3
Affiliation:
[1] Univ Sao Paulo, Sao Carlos Sch Engn, Dept Elect & Comp Engn, Sao Paulo - Brazil
Total Affiliations: 1
Document type: Journal article
Source: ROBOTICS AND AUTONOMOUS SYSTEMS; v. 139, MAY 2021.
Web of Science Citations: 0
Abstract

Robots still cannot perform everyday manipulation tasks, such as grasping, with the same dexterity as humans. To explore the potential of supervised deep learning for robotic grasping in unstructured and dynamic environments, this work addresses the visual perception phase of the task: processing visual data to obtain the location of the object to be grasped, its pose, and the points at which the robot's grippers must make contact to ensure a stable grasp. To this end, the Cornell Grasping Dataset (CGD) is used to train a Convolutional Neural Network (CNN) that considers these three stages simultaneously. In other words, given an image of the robot's workspace containing an object, the network predicts a grasp rectangle that represents the position, orientation, and opening of the robot's parallel grippers the instant before closing. In addition to this network, which runs in real time, a second network is designed to handle situations in which the object moves in the environment. This second convolutional network is trained to perform visual servo control, ensuring that the object remains in the robot's field of view: it predicts the proportional values of the linear and angular velocities that the camera must have so that the object stays in the image processed by the grasp network. The dataset used for training was generated automatically by a Kinova Gen3 robotic manipulator with seven Degrees of Freedom (DoF). The robot is also used to evaluate real-time applicability and obtain practical results for the designed algorithms. Moreover, the offline results obtained on test sets are analyzed and discussed with regard to efficiency and processing speed. The developed controller achieves millimeter accuracy in the final position for a target object seen for the first time.
To the best of our knowledge, no other work in the literature achieves such precision with a controller learned from scratch. This work thus presents a new system for autonomous robotic manipulation that generalizes to different objects and runs at high processing speed, allowing its application in real robotic systems. (C) 2021 Elsevier B.V. All rights reserved. (AU)
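The two outputs described in the abstract — a grasp rectangle (position, orientation, and gripper opening) and proportional velocity commands for the servo network — can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: it assumes the common Cornell Grasping Dataset parameterization of a grasp as a center (x, y), orientation theta, gripper opening (width), and plate size (height), and a simple proportional control law; all function names and the gain value are illustrative.

```python
import math

def grasp_rectangle_corners(x, y, theta, width, height):
    """Four corners of a grasp rectangle in image coordinates.

    (x, y):  center of the grasp.
    theta:   gripper orientation in radians.
    width:   opening of the parallel grippers.
    height:  size of the gripper plates.
    """
    dx, dy = width / 2.0, height / 2.0
    cos_t, sin_t = math.cos(theta), math.sin(theta)
    # Corners of the axis-aligned rectangle, counter-clockwise,
    # before rotation and translation.
    local = [(-dx, -dy), (dx, -dy), (dx, dy), (-dx, dy)]
    # Rotate by theta about the center, then translate to (x, y).
    return [(x + u * cos_t - v * sin_t, y + u * sin_t + v * cos_t)
            for u, v in local]

def proportional_velocity(error, gain=0.5):
    """Velocity commands proportional to a tracking error.

    error: per-axis offsets of the object from the desired image
           position (the servo network in the paper regresses such
           proportional velocity values directly).
    """
    return [-gain * e for e in error]
```

For example, a predicted grasp centered at (100, 50) with theta = 0, opening 40, and plate size 20 yields the axis-aligned corners (80, 40), (120, 40), (120, 60), (80, 60); a positive horizontal offset of the object produces a negative corrective velocity that re-centers it in the camera image.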

FAPESP's process: 14/50851-0 - INCT 2014: National Institute of Science and Technology for Cooperative Autonomous Systems Applied in Security and Environment
Grantee: Marco Henrique Terra
Support Opportunities: Research Projects - Thematic Grants