Object detection and offloading orchestration for robotics

Grant number: 25/19129-0
Support Opportunities: Scholarships abroad - Research Internship - Scientific Initiation
Start date: December 01, 2025
End date: February 28, 2026
Field of knowledge: Physical Sciences and Mathematics - Computer Science - Computer Systems
Principal Investigator: Fabio Luciano Verdi
Grantee: João Vitor Naves Mesa
Supervisor: Chrysa Papagianni
Host Institution: Centro de Ciências em Gestão e Tecnologia (CCGT). Universidade Federal de São Carlos (UFSCAR). Campus de Sorocaba. Sorocaba, SP, Brazil
Institution abroad: University of Amsterdam (UvA), Netherlands
Associated to the scholarship: 25/01185-1 - LLM-enabled control with dynamic compute offloading of AI modules, BP.IC

Abstract

The growing gap between the demands of modern applications and the limited resources of end devices has made task offloading a critical area of research. While cloud computing initially addressed this challenge, its inherent limitations, such as high latency and network overhead, make it unsuitable for real-time AI tasks. Edge computing emerged as a solution, moving computation closer to end devices. However, optimizing offloading decisions remains complex, as it requires balancing multiple objectives, including latency, energy efficiency, and resource availability, while adapting to dynamic network conditions and user mobility. To tackle this challenge, researchers have explored various approaches, from mathematical modeling to reinforcement learning, but there is still a need for robust frameworks tailored to specific workloads. In this project, we systematically study the offloading problem for computer-vision workloads, specifically real-time object detection using YOLO, within a robotic control loop. We propose an orchestration framework that investigates and compares different decision-making methodologies, including Supervised Learning (SL) and Reinforcement Learning (RL) approaches, to understand their effectiveness in making intelligent offloading decisions. The system extends the robot's control loop to seamlessly manage workload placement through various decision-making strategies, ensuring application performance and reliability. Through a comprehensive analysis of offloading decision strategies, this research contributes to new paradigms for distributed inference across domains such as robotics and extended reality.
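
To make the offloading decision concrete, the sketch below illustrates the kind of per-frame local-versus-edge choice the abstract describes. It is not the project's orchestration framework or its SL/RL policies: all class names, fields, thresholds, and the latency/battery heuristic are hypothetical placeholders, shown only to indicate how a baseline placement rule for a YOLO detection step inside a robot control loop might look.

"""Illustrative sketch of a latency-aware offloading decision for a
per-frame object-detection step in a robot control loop.

Hypothetical example only: the names, default values, and the decision
rule below are assumptions, not the project's actual framework.
"""
from dataclasses import dataclass


@dataclass
class LinkStats:
    """Hypothetical per-frame network measurements."""
    rtt_ms: float          # round-trip time to the edge server
    uplink_mbps: float     # available uplink bandwidth


@dataclass
class DeviceStats:
    """Hypothetical on-robot resource measurements."""
    local_infer_ms: float  # recent YOLO inference time on the robot
    battery_pct: float     # remaining battery charge


def decide_placement(frame_kb: float, link: LinkStats, dev: DeviceStats,
                     edge_infer_ms: float = 15.0,
                     low_battery_pct: float = 20.0) -> str:
    """Return "local" or "edge" for the current frame.

    Compares the expected end-to-end latency of on-robot detection with
    offloading (frame transfer + remote inference + RTT) and prefers the
    edge when the battery is low. All defaults are made-up values.
    """
    transfer_ms = (frame_kb * 8.0) / (link.uplink_mbps * 1000.0) * 1000.0
    edge_latency_ms = link.rtt_ms + transfer_ms + edge_infer_ms
    local_latency_ms = dev.local_infer_ms

    # Hand-written heuristic baseline; a learned SL or RL policy would
    # replace this rule in the framework the abstract proposes.
    if dev.battery_pct < low_battery_pct:
        return "edge"
    return "local" if local_latency_ms <= edge_latency_ms else "edge"


if __name__ == "__main__":
    link = LinkStats(rtt_ms=12.0, uplink_mbps=40.0)
    dev = DeviceStats(local_infer_ms=45.0, battery_pct=63.0)
    print(decide_placement(frame_kb=250.0, link=link, dev=dev))

In this toy example the 250 KB frame would take roughly 50 ms to upload, so offloading (about 77 ms end to end) loses to local inference (45 ms); a learned policy would make the same trade-off from observed network and device state rather than fixed thresholds.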
