| Grant number: | 24/23068-4 |
| Support Opportunities: | Regular Research Grants |
| Start date: | June 01, 2025 |
| End date: | May 31, 2027 |
| Field of knowledge: | Physical Sciences and Mathematics - Computer Science - Computer Systems |
| Agreement: | Technische Hochschule Ingolstadt (THI) |
| Principal Investigator: | José Mario de Martino |
| Grantee: | José Mario de Martino |
| Principal researcher abroad: | Alessandro Zimmer |
| Institution abroad: | Technische Hochschule Ingolstadt (THI), Germany |
| Host Institution: | Faculdade de Engenharia Elétrica e de Computação (FEEC), Universidade Estadual de Campinas (UNICAMP), Campinas, SP, Brazil |
| City of the host institution: | Campinas |
| Associated researchers: | Boddu Raviteja; Hélio Pedrini; Joed Lopes da Silva; José Carlos Ferreira; Munir Georges |
| Associated research grant: | 24/00914-7 - Center for Science for Development - Assistive Technology and Accessibility in Libras, AP.CCD |
Abstract
The project aims to strengthen the collaboration between the THI research group led by Prof. Zimmer and Prof. Georges and the Science Center for Development - Assistive Technology and Accessibility in Brazilian Sign Language (CCD-TAAL) of the Universidade Estadual de Campinas, led by Prof. De Martino, focusing on new studies in in-vehicle monitoring and on improving communication for deaf individuals. The project focuses on developing in-vehicle gesture recognition and Brazilian and German Sign Language recognition and translation approaches. Its scope also includes formulating User Experience (UX) concepts for car head units and compatible mobile devices, such as smartphones and tablets, based on vehicle occupants' behavior and interaction with a 3D avatar. The aim is to increase vehicle safety and comfort. The project's strategy is to explore multi-modal sensing and deep neural networks and to develop functional proofs-of-concept with associated user interfaces to test and evaluate the developments. Specific goals are defining and recording datasets, recognizing gestures and sign language using deep neural networks, creating 3D virtual human models, and conducting user evaluations with the functional proofs-of-concept. The methodology leverages the expertise of both the THI and UNICAMP research teams, with activities coordinated through regular meetings and workshops. Key activities include creating datasets for human activity, gesture, and sign language recognition and translation, developing deep neural models, and generating realistic 3D avatars for communication. (AU)