Robots and neuronavigation applied to neurosurgery

Grant number: 17/01555-7
Support type: Regular Research Grants
Duration: August 01, 2018 - July 31, 2020
Field of knowledge: Engineering - Mechanical Engineering
Principal Investigator: Glauco Augusto de Paula Caurin
Grantee: Glauco Augusto de Paula Caurin
Home Institution: Escola de Engenharia de São Carlos (EESC). Universidade de São Paulo (USP). São Carlos, SP, Brazil
Assoc. researchers: Adriano Almeida Gonçalves Siqueira; Antonio Adilton Oliveira Carneiro; Carlo Rondinoni; Helio Rubens Machado; Oswaldo Baffa Filho

Abstract

There is great expectation of short-term advances in the neurosurgical field. More specifically, it is possible to anticipate more precise surgical procedures, more efficient preoperative planning, access to eloquent brain areas that until recently were considered inoperable and, as a consequence, reduced risks. All this positive potential is based on the integration of two technologies:

* The introduction of collaborative robots (cobots) incorporating control concepts that are inherently safer, friendlier, more agile and more flexible than those of conventional robots.
* Advances in image-based neuronavigation systems. Neuronavigation systems are computer-based systems that provide localization, 3D intraoperative guidance and navigation. They are used in neurosurgery to track and locate surgical tools with respect to the spatial anatomy of the patient.

From a clinical point of view, this project will benefit children with epilepsy and cortical dysplasia, assisting neurosurgeons in the challenge of reaching patients' brain areas while reducing risks.

From an engineering point of view, we propose the implementation and analysis of automatic modulation of the dynamic behavior of the coupled system formed by the robot, surgeon and patient in physical contact. The stability and performance of this coupled system depend on the characteristics of the resulting dynamics. Our approach combines computer simulation with experiments in a semi-structured surgical scenario. Visual-motor integration will use learning-by-demonstration and artificial-intelligence methods. (AU)
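Conceptually, the tool-tracking step described above amounts to a rigid coordinate transform: the tracker reports the surgical tool tip in its own frame, and a registration transform expresses that point in the frame of the patient's preoperative image. The sketch below is illustrative only, not part of this project: the function names, the use of a simple rotation about one axis, and all numeric values are assumptions chosen to keep the example minimal.

```python
import numpy as np

def rigid_transform(rotation_deg: float, translation) -> np.ndarray:
    """Homogeneous 4x4 transform: rotation about the z axis, then a
    translation in millimetres. A real registration would come from
    fiducial-based or surface matching; this is a hypothetical stand-in."""
    t = np.radians(rotation_deg)
    T = np.eye(4)
    T[:3, :3] = [[np.cos(t), -np.sin(t), 0.0],
                 [np.sin(t),  np.cos(t), 0.0],
                 [0.0,        0.0,       1.0]]
    T[:3, 3] = translation
    return T

def locate_tool_tip(T_tracker_to_image: np.ndarray,
                    tip_in_tracker: np.ndarray) -> np.ndarray:
    """Express a tracked tool-tip position in image (patient) coordinates."""
    p = np.append(tip_in_tracker, 1.0)      # homogeneous point
    return (T_tracker_to_image @ p)[:3]

# Hypothetical example: the tracker sees the tip at (100, 0, 50) mm and the
# registration is a 90-degree rotation about z plus a (10, 20, 30) mm offset.
T = rigid_transform(90.0, [10.0, 20.0, 30.0])
tip_image = locate_tool_tip(T, np.array([100.0, 0.0, 50.0]))
print(np.round(tip_image, 1))  # tip in image frame: [10., 120., 80.]
```

In practice the registration transform is estimated intraoperatively (e.g. from fiducial markers) and applied continuously to the tracked tool pose, which is what lets the system display the instrument over the patient's imaging in real time.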