This research project aims to develop a system that, from facial images obtained in real time, maps the face and extracts data that can be used to interpret a facial expression as one emotion within a pre-defined set of basic emotions. Given the location of a face (provided by an existing facial detection and tracking system), the system will determine regions of interest from which the essential characteristics that define a facial expression (in this project: the positions of the eyes, eyebrows, mouth, and nose) can be extracted and processed. The detection of these four elements within the face region of the image will be performed with the Viola-Jones method (Boosted Cascade of Simple Features), implemented using version 2.0 of OpenCV, the public computer vision library developed by Intel. For precise localization of these elements within a region, algorithms for contour finding and pattern recognition in images will be studied and applied.

The facial feature mapping and extraction system is initially limited to frontal images or faces with small displacements (face turned slightly to the side). The objective of this system is to serve as a basis for the implementation of an emotion classifier applied to interactions between humans and robots.

To determine the classification accuracy and the quantity and complexity of the data analyzed, the basic emotions that can be classified from the data generated by the proposed system will be defined.
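As a rough illustration of the detection stage described above, the sketch below uses OpenCV's Haar cascade classifiers (the Viola-Jones implementation the project cites) to locate a face and then search sub-regions of it for eyes, nose, and mouth. This is a minimal sketch, not the project's actual pipeline: the cascade XML file names, the camera index, and the proportional sub-region splits are assumptions, and in the project the face location would come from the existing tracking system rather than a per-frame detection.

```cpp
// Minimal Viola-Jones sketch with the OpenCV 2.x C++ API. Assumptions:
// cascade file names below (shipped with OpenCV), camera at index 0,
// and fixed proportional regions of interest inside the detected face.
#include <opencv2/opencv.hpp>
#include <cstdio>
#include <vector>

static void detectIn(cv::CascadeClassifier& cc, const cv::Mat& roi,
                     std::vector<cv::Rect>& out) {
    cc.detectMultiScale(roi, out, 1.1, 3, 0, cv::Size(15, 15));
}

int main() {
    cv::CascadeClassifier faceCc, eyeCc, noseCc, mouthCc;
    if (!faceCc.load("haarcascade_frontalface_alt.xml") ||
        !eyeCc.load("haarcascade_eye.xml") ||
        !noseCc.load("haarcascade_mcs_nose.xml") ||
        !mouthCc.load("haarcascade_mcs_mouth.xml")) {
        std::fprintf(stderr, "could not load Haar cascades\n");
        return 1;
    }

    cv::VideoCapture cap(0);        // real-time input; in the project,
    if (!cap.isOpened()) return 1;  // frames come from the existing tracker

    cv::Mat frame, gray;
    while (cap.read(frame)) {
        cv::cvtColor(frame, gray, CV_BGR2GRAY);
        cv::equalizeHist(gray, gray);

        // 1) Face location (here via Viola-Jones; in the project this is
        //    supplied by the existing detection-and-tracking system).
        std::vector<cv::Rect> faces;
        faceCc.detectMultiScale(gray, faces, 1.1, 3, 0, cv::Size(80, 80));

        for (size_t i = 0; i < faces.size(); ++i) {
            const cv::Rect& f = faces[i];
            cv::rectangle(frame, f, cv::Scalar(0, 255, 0), 2);

            // 2) Regions of interest inside the face: eyes/eyebrows in the
            //    upper half, nose around the middle, mouth in the lower third.
            cv::Mat upper  = gray(cv::Rect(f.x, f.y, f.width, f.height / 2));
            cv::Mat middle = gray(cv::Rect(f.x, f.y + f.height / 4,
                                           f.width, f.height / 2));
            cv::Mat lower  = gray(cv::Rect(f.x, f.y + 2 * f.height / 3,
                                           f.width, f.height / 3));

            std::vector<cv::Rect> eyes, noses, mouths;
            detectIn(eyeCc, upper, eyes);
            detectIn(noseCc, middle, noses);
            detectIn(mouthCc, lower, mouths);
            // The detected feature rectangles would then be passed to the
            // contour/pattern-recognition stage and, later, to the classifier.
        }

        cv::imshow("features", frame);
        if (cv::waitKey(30) == 27) break;  // Esc quits
    }
    return 0;
}
```

Searching each feature only inside its expected sub-region both speeds up detection and reduces false positives, which matters for the real-time requirement stated above.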
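For the precise localization step, the abstract mentions contour-finding algorithms without fixing a method. The fragment below is one plausible sketch, assuming Otsu thresholding plus OpenCV's findContours; the function name refineFeature and the largest-contour heuristic are hypothetical choices for illustration only.

```cpp
// Hypothetical contour-based refinement for one feature ROI (e.g., an eye):
// binarize the region and take the centroid of the largest dark blob.
#include <opencv2/opencv.hpp>
#include <vector>

cv::Point refineFeature(const cv::Mat& roiGray) {
    cv::Mat bin;
    // Otsu thresholding separates the dark feature (pupil, nostril,
    // lip line) from the surrounding skin.
    cv::threshold(roiGray, bin, 0, 255, CV_THRESH_BINARY_INV | CV_THRESH_OTSU);

    std::vector<std::vector<cv::Point> > contours;
    cv::findContours(bin, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);

    double bestArea = 0.0;
    int best = -1;
    for (size_t i = 0; i < contours.size(); ++i) {
        double a = cv::contourArea(contours[i]);
        if (a > bestArea) { bestArea = a; best = (int)i; }
    }
    if (best < 0) return cv::Point(-1, -1);  // nothing found

    // Centroid from the image moments of the winning contour.
    cv::Moments m = cv::moments(cv::Mat(contours[best]));
    if (m.m00 == 0.0) return cv::Point(-1, -1);
    return cv::Point((int)(m.m10 / m.m00), (int)(m.m01 / m.m00));
}
```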