Research Grants 16/19947-6 - Neural Networks (Computing), Facial Expression - BV FAPESP

Development of recurrent Convolutional Neural Network architectures for facial expression recognition

Grant number: 16/19947-6
Support Opportunities: Regular Research Grants
Start date: February 01, 2017
End date: February 28, 2019
Field of knowledge: Physical Sciences and Mathematics - Computer Science
Principal Investigator: Gerberth Adín Ramírez Rivera
Grantee: Gerberth Adín Ramírez Rivera
Host Institution: Instituto de Computação (IC). Universidade Estadual de Campinas (UNICAMP). Campinas, SP, Brazil
Associated researchers: Anderson de Rezende Rocha; Hélio Pedrini; Ricardo da Silva Torres

Abstract

We own and interact with more smart devices and personal electronic equipment every day. Given that facial expressions and body language are the most emotion-related signals that humans emit, it is natural to incorporate human expression recognition capabilities into smart devices to improve their functionality. Moreover, human expression recognition techniques enable efficient interaction with people. In this context, facial expression recognition has become an attractive way to advance inclusion and universal access, for example by making electronic devices easier to use and by removing barriers to participating in and benefiting from the opportunities offered by modern societies. A first step in that direction is the recognition and classification of human facial expressions as basic constituents from which more complex human emotional states can be inferred. We therefore need robust automatic face analysis algorithms that can be embedded in our smart devices. Automatic facial expression recognition requires a robust description of each expression or, more generally, a robust face or image descriptor. That description should be general enough to accommodate the different ways in which each expression can be performed, while remaining discriminative among the different types of expressions. A robust descriptor must also overcome further challenges: for daily-use devices such as smartphones, the conditions under which pictures are captured vary immensely, e.g., non-constant illumination, noise, rotation, and background changes, among others. The challenge of creating an image descriptor is thus to cope with changing environmental conditions as well as inter- and intra-class variations. This project therefore aims to develop and analyze several deep neural network architectures that produce robust facial descriptors which are discriminative between classes yet accommodate the intra-class variations caused by imaging conditions, appearance changes, noise, and other factors, while simultaneously exploiting temporal information from videos to improve classification, further reducing error and enhancing recognition. Our general objective is to develop new network architectures, based on Convolutional Neural Networks and Recurrent Neural Networks, to describe and recognize temporal facial expressions robustly. This goal includes the design and analysis of network architectures that use different techniques for learning, describing, and extracting information, and that use internal layers to provide memory to the network, as well as their implementation and evaluation on standard, community-accepted benchmark databases of human expressions. (AU)
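
As a rough illustration of the kind of architecture the project proposes, the sketch below combines a per-frame convolutional feature extractor with a recurrent layer that carries memory across the frame sequence and a classifier over a fixed set of expressions. It is a minimal, hypothetical example written in PyTorch; the layer sizes, the seven-class output, the input resolution, and all names are illustrative assumptions and do not reflect the architectures actually developed in the project.

# Minimal sketch (assumed names and sizes, not the project's actual design):
# a CNN applied to each video frame, a GRU that accumulates temporal
# information over the frame sequence, and a linear head that outputs
# expression-class logits.
import torch
import torch.nn as nn

class RecurrentConvExpressionNet(nn.Module):
    def __init__(self, num_classes=7, feat_dim=128, hidden_dim=64):
        super().__init__()
        # Small convolutional backbone applied independently to each frame.
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )
        # Recurrent layer that provides memory across frames.
        self.rnn = nn.GRU(feat_dim, hidden_dim, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, clips):
        # clips: (batch, time, channels, height, width)
        b, t, c, h, w = clips.shape
        feats = self.cnn(clips.reshape(b * t, c, h, w)).reshape(b, t, -1)
        _, last_hidden = self.rnn(feats)         # last_hidden: (1, batch, hidden_dim)
        return self.classifier(last_hidden[-1])  # per-class logits

if __name__ == "__main__":
    model = RecurrentConvExpressionNet()
    dummy_clips = torch.randn(2, 16, 3, 64, 64)  # 2 clips of 16 RGB frames, 64x64
    print(model(dummy_clips).shape)              # torch.Size([2, 7])

In such a setup the convolutional backbone could be replaced by a pretrained network and the GRU by an LSTM or an attention mechanism; the sketch only shows how convolutional and recurrent components are typically composed for video-based expression recognition.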

Scientific publications (7)
(References retrieved automatically from Web of Science and SciELO through information on FAPESP grants and their corresponding numbers as mentioned in the publications by the authors)
ZHDANOV, PAVEL; KHAN, ADIL; RIVERA, ADIN RAMIREZ; KHATTAK, ASAD MASOOD. Improving Human Action Recognition through Hierarchical Neural Network Classifiers. 2018 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), v. N/A, p. 7-pg., . (16/19947-6)
FIGUEROA, JHOSIMAR ARIAS; RIVERA, ADIN RAMIREZ. Learning to Cluster with Auxiliary Tasks: A Semi-Supervised Approach. 2017 30TH SIBGRAPI CONFERENCE ON GRAPHICS, PATTERNS AND IMAGES (SIBGRAPI), v. N/A, p. 8-pg., . (16/19947-6)
QUISPE, RODOLFO; TTITO, DARWIN; RIVERA, ADIN; PEDRINI, HELIO. Multi-Stream Networks and Ground Truth Generation for Crowd Counting. INTERNATIONAL JOURNAL OF ELECTRICAL AND COMPUTER ENGINEERING SYSTEMS, v. 11, n. 1, p. 33-41, . (14/12236-1, 16/19947-6)
RAMIREZ CORNEJO, JADISHA YARIF; PEDRINI, HELIO. Emotion Recognition Based on Occluded Facial Expressions. IMAGE ANALYSIS AND PROCESSING (ICIAP 2017), PT I, v. 10484, p. 11-pg., . (14/12236-1, 16/19947-6)
BIN IQBAL, MD TAUHID; RYU, BYUNGYONG; RIVERA, ADIN RAMIREZ; MAKHMUDKHUJAEV, FARKHOD; CHAE, OKSAM; BAE, SUNG-HO. Facial Expression Recognition with Active Local Shape Pattern and Learned-Size Block Representations. IEEE TRANSACTIONS ON AFFECTIVE COMPUTING, v. 13, n. 3, p. 15-pg., . (16/19947-6)
SANTANDER, MIGUEL RODRIGUEZ; ALBARRACIN, JUAN HERNANDEZ; RIVERA, ADIN RAMIREZ. On the pitfalls of learning with limited data: A facial expression recognition case study. EXPERT SYSTEMS WITH APPLICATIONS, v. 183, . (19/07257-3, 16/19947-6, 17/16144-2)
TTITO, DARWIN; QUISPE, RODOLFO; RIVERA, ADIN RAMIREZ; PEDRINI, HELIO. Where are the People? A Multi-Stream Convolutional Neural Network for Crowd Counting via Density Map from Complex Images. PROCEEDINGS OF 2019 INTERNATIONAL CONFERENCE ON SYSTEMS, SIGNALS AND IMAGE PROCESSING (IWSSIP 2019), v. N/A, p. 6-pg., . (16/19947-6, 14/12236-1)