Inspection of indoor environments is increasingly important, given its usefulness in finding faults ranging from simple defects to those that pose an imminent danger to the people involved. In the interest of safety and efficiency, robotic platforms are employed for this task, enabling inspection of environments that are unhealthy or toxic to humans. This requires simultaneous localization and mapping (SLAM) and object detection techniques. Thus, this project proposes the development of a robotic inspection system based on RGB-D cameras that generates maps of the inspected environments and marks the objects of interest on them. Mapping will be carried out with open-access ROS packages, in two or three dimensions depending on feasibility, using the point cloud data from the camera. For object detection, the camera images will be processed by a convolutional neural network, YOLO, which provides the detection confidence and the image coordinates; combined with the depth information, these yield the spatial position of the detected object, which is then marked on the map. The project results will be evaluated through metrics related to object detection accuracy, namely average precision (AP) and mean average precision (mAP), and through metrics related to map completeness and object marking. The inspections will be carried out in the department's indoor environments, initially detecting common everyday objects such as doors, emergency exits, and fire extinguishers.
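The step of combining a detection's image coordinates with depth can be sketched with the standard pinhole back-projection model. The intrinsic parameters (fx, fy, cx, cy) and the detection values below are illustrative assumptions, not values from the project's camera:

```python
# Sketch: back-project a detection's pixel coordinates plus depth into a
# 3D point in the camera frame, using the pinhole camera model.

def pixel_to_3d(u, v, depth, fx, fy, cx, cy):
    """Convert pixel (u, v) and depth (meters) to a 3D point (x, y, z)
    in the camera frame: x = (u - cx) * z / fx, y = (v - cy) * z / fy."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

# Example: detection centered at pixel (320, 240), 2.0 m away, with
# intrinsics typical of a VGA RGB-D sensor (assumed values).
fx, fy, cx, cy = 525.0, 525.0, 319.5, 239.5
point = pixel_to_3d(320, 240, 2.0, fx, fy, cx, cy)
```

In practice the real intrinsics would come from the camera's calibration (e.g. the ROS `sensor_msgs/CameraInfo` message), and the resulting point would still need to be transformed from the camera frame into the map frame before marking.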
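The AP metric mentioned above can be illustrated with a minimal sketch. It assumes detections for one class are already sorted by descending confidence and matched against ground truth (True for a true positive); mAP is then the mean of AP over all classes:

```python
# Sketch: average precision (AP) for one class as the area under the
# precision-recall curve, using the monotone precision envelope.

def average_precision(matches, num_gt):
    """matches: list of booleans (True = true positive), sorted by
    descending detection confidence; num_gt: ground-truth count."""
    tp = fp = 0
    points = []  # (recall, precision) after each detection
    for is_tp in matches:
        tp += is_tp
        fp += not is_tp
        points.append((tp / num_gt, tp / (tp + fp)))
    # Make the precision curve monotone non-increasing (right to left).
    for i in range(len(points) - 2, -1, -1):
        points[i] = (points[i][0], max(points[i][1], points[i + 1][1]))
    # Integrate precision over recall.
    ap, prev_r = 0.0, 0.0
    for r, p in points:
        ap += (r - prev_r) * p
        prev_r = r
    return ap
```

For example, with two ground-truth objects and three detections matched as [True, False, True], this yields AP = 0.5 * 1.0 + 0.5 * (2/3) ≈ 0.833.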
News published in Agência FAPESP Newsletter about the scholarship: