Development of a computer vision and agricultural scenes data acquisition module for implementation in RAM robotic platform

Grant number: 15/26339-0
Support type: Regular Research Grants
Duration: May 01, 2016 - April 30, 2018
Field of knowledge: Engineering - Mechanical Engineering
Principal Investigator: Mario Luiz Tronco
Grantee: Mario Luiz Tronco
Home Institution: Escola de Engenharia de São Carlos (EESC). Universidade de São Paulo (USP). São Carlos, SP, Brazil

Abstract

In this project, we intend to implement in the RAM platform all the techniques previously developed and tested by the applicant, including agricultural scene acquisition, processing, segmentation, and classification, as well as the characterization of tree crowns using ultrasonic sensors, for use in autonomous navigation and in generating agricultural parameters for the regions through which the robot moves. The aim is to equip the platform with an Image Data Acquisition and Ultrasound Module that integrates all the techniques developed and tested by the proponent. To achieve this goal, several approaches will be used: the J-Seg algorithm for color image segmentation, modified by the proponent to work with outdoor images (irregular and with excess light, typical of agricultural environments); stereoscopic vision algorithms based on disparity maps, modified by the proponent to measure distances to objects and to define navigable areas; omnidirectional vision, implemented by the proponent as a navigation aid to define travel lanes; segmentation algorithms for planting imagery, implemented by the proponent to determine the vehicle's tilt angle relative to the planting row; and tree-scanning techniques using ultrasonic sensors, also implemented and tested by the proponent in agricultural environments. By integrating these techniques into a single module, it will be possible to provide the RAM platform with a complete system for aiding autonomous navigation (computer vision) and for mapping agricultural environments (characterizing the actors in the agricultural scene, such as trees, fruits, ground, and sky), which serves both autonomous navigation and the generation of parameters for the crop visited by the robot. (AU)
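The abstract does not give the module's internals, but the two ranging techniques it names rest on standard geometric relations: stereo disparity maps yield depth as Z = f·B/d for a calibrated rectified pair, and an ultrasonic sensor yields range from the round-trip echo time. A minimal sketch of both relations follows; the function names, the focal length, baseline, and speed-of-sound values are illustrative assumptions, not taken from the project.

```python
def depth_from_disparity(disparity_px: float, focal_px: float,
                         baseline_m: float) -> float:
    """Depth Z = f * B / d for a rectified pinhole stereo pair.

    Assumes focal length in pixels and baseline in meters (hypothetical
    calibration values; the project's actual rig parameters are not given).
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px


def range_from_echo(echo_time_s: float, speed_of_sound: float = 343.0) -> float:
    """Range to a target from an ultrasonic echo's round-trip time.

    The pulse travels out and back, hence the division by two; 343 m/s is
    the speed of sound in air at roughly 20 degrees C (an assumption).
    """
    return speed_of_sound * echo_time_s / 2.0


# Example: f = 700 px, B = 0.12 m, disparity = 42 px -> Z = 2.0 m
print(depth_from_disparity(42.0, 700.0, 0.12))
# Example: a 10 ms round-trip echo -> 1.715 m to the tree crown
print(range_from_echo(0.01))
```

In practice the disparity map itself would come from a stereo-matching step over the rectified image pair, and the ultrasonic scan would sweep such range readings across the tree crown to build its profile.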