Improving Supervised Superpixel-Based Codebook Representations by Local Convolutional Features

Author(s):
Castelo-Fernandez, Cesar ; Falcao, Alexandre X. ; DeGiacomo, G ; Catala, A ; Dilkina, B ; Milano, M ; Barro, S ; Bugarin, A ; Lang, J
Total number of authors: 9
Document type: Scientific article
Source: ECAI 2020: 24TH EUROPEAN CONFERENCE ON ARTIFICIAL INTELLIGENCE; v. 325, 8 pp., 2020-01-01.
Abstract

Deep learning techniques are the current state of the art in image classification, but they have the drawback of requiring a large number of labelled training images. In this context, Visual Dictionaries (Codebooks, or Bag of Visual Words - BoVW) are a very interesting alternative to deep learning models whenever few labelled training images are available. However, existing methods usually extract interest points from each image independently, which leads to interest points that are unrelated across classes, and they build a single visual dictionary in an unsupervised fashion. In this work, we present: 1) the use of class-specific superpixel segmentation to define interest points that are common to each class, 2) the use of Convolutional Neural Networks (CNNs) with random weights to cope with the absence of labelled data and extract more representative local features, and 3) the construction of specialized visual dictionaries for each class. We conduct experiments showing that our method can outperform a CNN trained from a small set of labelled images and can be equivalent to a CNN with pre-trained features. We also show that our method outperforms other traditional BoVW approaches. (AU)
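The abstract's third contribution, class-specific visual dictionaries, can be illustrated with a minimal sketch: cluster the local features of each class separately into a small per-class codebook, concatenate the codebooks, and encode an image as a normalized histogram of nearest-word assignments. This is a hypothetical illustration in plain Python (all function names and sizes are ours, not the paper's), using a basic k-means in place of whatever clustering the authors employ, and it omits the superpixel extraction and random-weight CNN feature steps.

```python
import random

def dist2(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(features, k, iters=20, seed=0):
    """Plain k-means; returns k centroid vectors (the visual words)."""
    rng = random.Random(seed)
    centroids = rng.sample(features, k)
    for _ in range(iters):
        # Assign each feature to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for f in features:
            idx = min(range(k), key=lambda i: dist2(f, centroids[i]))
            clusters[idx].append(f)
        # Recompute each centroid as the mean of its cluster.
        for i, members in enumerate(clusters):
            if members:
                centroids[i] = [sum(v) / len(members) for v in zip(*members)]
    return centroids

def build_class_dictionaries(features_by_class, words_per_class):
    """One specialized codebook per class, concatenated into one list."""
    codebook = []
    for label in sorted(features_by_class):
        codebook.extend(kmeans(features_by_class[label], words_per_class))
    return codebook

def bovw_histogram(image_features, codebook):
    """Hard-assign each local feature to its nearest visual word."""
    hist = [0] * len(codebook)
    for f in image_features:
        idx = min(range(len(codebook)), key=lambda i: dist2(f, codebook[i]))
        hist[idx] += 1
    total = sum(hist) or 1
    return [h / total for h in hist]
```

With two well-separated classes, the histogram of an image from one class concentrates its mass on that class's words, which is what makes the concatenated per-class codebook discriminative for a downstream classifier.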

FAPESP Grant: 14/12236-1 - AnImaLS: Large-scale image annotation: what can machines and experts learn by interacting?
Grantee: Alexandre Xavier Falcão
Support type: Research Grant - Thematic Project