(Reference retrieved automatically from Web of Science, based on the FAPESP funding information and the corresponding grant number included in the publication by the authors.)

Rank-based self-training for graph convolutional networks

Full text
Author(s):
Guimaraes Pedronette, Daniel Carlos [1] ; Latecki, Longin Jan [2]
Total Authors: 2
Author affiliation(s):
[1] Sao Paulo State Univ, UNESP, Dept Stat Appl Math & Comp DEMAC, Rio Claro - Brazil
[2] Temple Univ, Dept Comp & Informat Sci, Philadelphia, PA 19122 - USA
Total Affiliations: 2
Document type: Scientific Article
Source: INFORMATION PROCESSING & MANAGEMENT; v. 58, n. 2, MAR 2021.
Web of Science Citations: 0
Abstract

Graph Convolutional Networks (GCNs) have been established as a fundamental approach for representation learning on graphs, based on convolution operations on the non-Euclidean domain defined by graph-structured data. GCNs and their variants have achieved state-of-the-art results on classification tasks, especially in semi-supervised learning scenarios. A central challenge in semi-supervised classification is how to exploit as much of the useful information encoded in the unlabeled data as possible. In this paper, we address this issue through a novel self-training approach for improving the accuracy of GCNs on semi-supervised classification tasks. A margin score is used through a rank-based model to identify the most confident sample predictions. Such predictions are exploited as an expanded labeled set in a second-stage training step. Our model is suitable for different GCN models. Moreover, we also propose a rank aggregation of labeled sets obtained by different GCN models. The experimental evaluation considers four GCN variations and traditional benchmarks extensively used in the literature. Significant accuracy gains were achieved for all evaluated models, reaching results comparable to or superior to the state-of-the-art. The best results were achieved by rank aggregation self-training on combinations of the four GCN models. (AU)
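The confidence-selection step described in the abstract can be pictured with a minimal sketch, shown below, assuming only a generic softmax output matrix from a trained GCN; the function name select_confident_pseudo_labels, the per_class budget, and the NumPy-only setting are illustrative assumptions, not the authors' implementation.

import numpy as np

def select_confident_pseudo_labels(probs, unlabeled_idx, per_class=20):
    """Rank unlabeled nodes by a margin score (top-1 minus top-2 class
    probability) and keep the most confident predictions of each class.

    probs: (n_nodes, n_classes) softmax output of a trained GCN (assumed input).
    unlabeled_idx: indices of nodes without ground-truth labels.
    per_class: hypothetical budget of pseudo-labels kept per class.
    """
    unlabeled_idx = np.asarray(unlabeled_idx)
    p = probs[unlabeled_idx]                   # predictions on unlabeled nodes
    top2 = np.sort(p, axis=1)[:, -2:]          # two largest class probabilities
    margin = top2[:, 1] - top2[:, 0]           # margin score as confidence proxy
    pred = p.argmax(axis=1)                    # predicted (pseudo) labels

    selected_nodes, selected_labels = [], []
    for c in np.unique(pred):
        candidates = np.where(pred == c)[0]
        # rank candidates of class c by decreasing margin, keep the top ones
        ranked = candidates[np.argsort(-margin[candidates])][:per_class]
        selected_nodes.extend(unlabeled_idx[ranked])
        selected_labels.extend([c] * len(ranked))

    return np.array(selected_nodes), np.array(selected_labels)

# Usage sketch: the returned nodes and pseudo-labels would be merged with the
# original labeled set and used to retrain the GCN in a second-stage step.
probs = np.random.dirichlet(np.ones(3), size=100)      # toy softmax outputs
nodes, labels = select_confident_pseudo_labels(probs, np.arange(20, 100))

The rank aggregation variant mentioned in the abstract would, in this sketch, combine the per-node rankings produced by several GCN models before selecting the expanded labeled set; the exact aggregation rule is described in the paper itself.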

FAPESP Grant: 17/25908-6 - Weakly supervised learning for compressed-domain video analysis in retrieval and classification tasks for visual alerts
Grantee: João Paulo Papa
Support type: Research Grants - Research Partnership for Technological Innovation - PITE
FAPESP Grant: 18/15597-6 - Application and investigation of unsupervised learning methods in retrieval and classification tasks
Grantee: Daniel Carlos Guimarães Pedronette
Support type: Research Grants - Young Investigators Grants - Phase 2