Interactive Learning of Visual Dictionaries Applied to Image Classification

Grant number: 17/03940-5
Support type: Scholarships in Brazil - Doctorate
Effective date (Start): August 01, 2017
Effective date (End): February 29, 2020
Field of knowledge: Physical Sciences and Mathematics - Computer Science - Computing Methodologies and Techniques
Cooperation agreement: Coordination of Improvement of Higher Education Personnel (CAPES)
Principal researcher: Alexandre Xavier Falcão
Grantee: César Christian Castelo Fernández
Home Institution: Instituto de Computação (IC). Universidade Estadual de Campinas (UNICAMP). Campinas, SP, Brazil
Associated research grant: 14/12236-1 - AnImaLS: Annotation of Images in Large Scale: what can machines and specialists learn from interaction?, AP.TEM


Deep learning techniques have become very popular for designing image descriptors and pattern classifiers as a single pipeline. A well-known drawback of these techniques, however, is their need for large training sets of pre-annotated samples. Unfortunately, in several applications in Science and Engineering, it is difficult to count on specialists to pre-annotate large training sets, especially when annotation requires interactive image segmentation (the case when the samples are image objects). Even the manual specification of regions of interest around objects can be difficult when the number of required samples is high or when the image is volumetric. This makes it important to investigate techniques that can learn from a small number of labeled samples. Bag of Visual Words (BoVW) is a successful example of unsupervised feature learning. By detecting interest points and local image features in a set of unlabeled training images, a dictionary of visual words can be constructed from the most representative features, and an effective image descriptor can be obtained by matching the local features of a new image against the visual words in the dictionary. By training a supervised classifier on those image descriptors, it is possible to assign labels to new images. In this project, we aim to revisit BoVW for the design of image descriptors from a small number of labeled samples. Given the success of BoVW as an unsupervised feature learning technique, we believe that knowledge of the samples' labels can considerably improve its performance. We therefore propose to investigate ways of exploiting label information in the design of visual dictionaries. This allows us to increase the number of labeled samples by active learning, as the pattern classifier based on the BoVW image descriptor improves over the learning iterations.
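The BoVW pipeline described above can be sketched as follows. This is a minimal illustration, not the project's actual method: it assumes scikit-learn, uses plain k-means for dictionary construction, and replaces real local descriptors (e.g. SIFT keypoint features) with synthetic feature vectors so the example is self-contained.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Synthetic stand-in for local descriptors: each "image" is a variable-length
# set of 64-dimensional local feature vectors with a class-dependent shift.
def fake_image_features(label):
    center = np.zeros(64)
    center[label] = 6.0  # makes the toy problem learnable
    n = rng.integers(20, 40)
    return center + rng.normal(size=(n, 64))

train_labels = np.array([i % 2 for i in range(40)])
train_images = [fake_image_features(y) for y in train_labels]

# 1) Dictionary learning: cluster all local features pooled from the
# (unlabeled) training images; the k cluster centers are the visual words.
k = 16
dictionary = KMeans(n_clusters=k, n_init=10, random_state=0)
dictionary.fit(np.vstack(train_images))

# 2) Image descriptor: hard-assign each local feature to its nearest visual
# word and build a normalized histogram of word occurrences.
def bovw_descriptor(features):
    words = dictionary.predict(features)
    hist = np.bincount(words, minlength=k).astype(float)
    return hist / hist.sum()

X_train = np.array([bovw_descriptor(f) for f in train_images])

# 3) Supervised classification on top of the BoVW descriptors.
clf = LinearSVC().fit(X_train, train_labels)

test_labels = np.array([i % 2 for i in range(10)])
X_test = np.array([bovw_descriptor(fake_image_features(y)) for y in test_labels])
accuracy = clf.score(X_test, test_labels)
```

Only step 1 here is unsupervised; exploiting the samples' labels inside the dictionary construction itself is precisely what the project proposes to investigate.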
We believe that the performance of the resulting classifier will be satisfactory well before the labeled training set can be considered large, which would represent a considerable saving of the specialists' time and effort in image annotation. We are also interested in comparing the performance of the resulting BoVW-based image classifier against image classification based on deep learning, using the same incrementally labeled training sets. The idea is to discover the minimum number of labeled samples required for effective deep learning. Of course, these questions may have different answers depending on the application. We intend to focus on image datasets from a major associated project on the diagnosis of intestinal parasites. (AU)
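The active-learning loop that grows the labeled set can be sketched as pool-based uncertainty sampling. This is a minimal sketch under stated assumptions, not the project's method: it assumes scikit-learn, uses toy 2-D descriptors in place of BoVW histograms, and the pool's hidden labels stand in for the specialist's annotations.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Toy pool: 2-D points from two Gaussian classes. pool_y plays the role of
# the specialist (oracle): a label is "revealed" only when queried.
pool_X = np.vstack([rng.normal(-1, 1, (200, 2)), rng.normal(1, 1, (200, 2))])
pool_y = np.array([0] * 200 + [1] * 200)

labeled = [0, 1, 200, 201]  # tiny seed set: two pre-annotated samples per class
unlabeled = [i for i in range(len(pool_X)) if i not in labeled]

clf = LogisticRegression()
for _ in range(20):  # each round: retrain, then query one uncertain sample
    clf.fit(pool_X[labeled], pool_y[labeled])
    proba = clf.predict_proba(pool_X[unlabeled])
    # Uncertainty sampling: pick the sample whose top-class probability is lowest.
    pick = unlabeled[int(np.argmin(proba.max(axis=1)))]
    labeled.append(pick)   # the specialist annotates this sample
    unlabeled.remove(pick)

accuracy = clf.fit(pool_X[labeled], pool_y[labeled]).score(pool_X, pool_y)
```

In the project's setting, the descriptor itself would also be re-estimated as labels accumulate, since the visual dictionary is meant to exploit the label information.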


Scientific publications
(References retrieved automatically from Web of Science and SciELO through information on FAPESP grants and their corresponding numbers as mentioned in the publications by the authors)
CULQUICONDOR, ALDO; BALDASSIN, ALEXANDRO; CASTELO-FERNANDEZ, CESAR; DE CARVALHO, JOAO P. L.; PAPA, JOAO PAULO. An efficient parallel implementation for training supervised optimum-path forest classifiers. Neurocomputing, v. 393, p. 259-268. (14/16250-9, 14/12236-1, 13/07375-0, 16/19403-6, 17/03940-5)
