Understanding attention-based encoder-decoder networks: a case study with chess scoresheet recognition

Author(s):
Hayashi, Sergio Y. ; Hirata, Nina S. T. ; IEEE
Total Authors: 3
Document type: Journal article
Source: 2022 26th International Conference on Pattern Recognition (ICPR), 7 pp., 2022.
Abstract

Deep neural networks are widely used for complex prediction tasks, and there is ample empirical evidence of their successful end-to-end training across a diversity of tasks. Success, however, is often measured solely by the final performance of the trained network, while explanations of when, why, and how these networks work receive less emphasis. In this paper we study encoder-decoder recurrent neural networks with attention mechanisms for the task of reading handwritten chess scoresheets. Rather than prediction performance, our concern is to better understand how learning occurs in this type of network. We characterize the task in terms of three subtasks, namely input-output alignment, sequential pattern recognition, and handwriting recognition, and experimentally investigate which factors affect their learning. We identify competition, collaboration, and dependence relations between the subtasks, and argue that such knowledge may help one to better balance these factors so as to properly train a network. (AU)
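The abstract's central component, the attention mechanism that aligns decoder steps with encoder states, can be illustrated with a minimal sketch. The snippet below implements Bahdanau-style additive attention in plain NumPy; the matrices `Wa`, `Ua` and vector `va`, and all dimensions, are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

def additive_attention(enc_states, dec_state, Wa, Ua, va):
    """Bahdanau-style additive attention (illustrative sketch).

    enc_states: (T, H) encoder hidden states, one per input position
    dec_state:  (H,)   current decoder hidden state
    Returns (context, weights): the weighted sum of encoder states
    and the alignment distribution over input positions.
    """
    # score_t = va . tanh(Wa h_t + Ua s): how well position t matches
    # the current decoder state
    scores = np.tanh(enc_states @ Wa.T + dec_state @ Ua.T) @ va  # (T,)
    # softmax over time steps gives the alignment weights
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    # context vector: attention-weighted average of encoder states
    context = weights @ enc_states  # (H,)
    return context, weights

# Toy example with random states and parameters
rng = np.random.default_rng(0)
T, H = 5, 8  # 5 input positions, hidden size 8 (arbitrary choices)
enc = rng.normal(size=(T, H))
dec = rng.normal(size=(H,))
Wa, Ua = rng.normal(size=(H, H)), rng.normal(size=(H, H))
va = rng.normal(size=(H,))
ctx, w = additive_attention(enc, dec, Wa, Ua, va)
print(w)  # a probability distribution over the 5 input positions
```

In the scoresheet-reading setting studied in the paper, such alignment weights are what tie each predicted character to a region of the input image, which is why the authors treat input-output alignment as a subtask of its own.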

FAPESP's process: 15/22308-2 - Intermediate representations in Computational Science for knowledge discovery
Grantee: Roberto Marcondes Cesar Junior
Support Opportunities: Research Projects - Thematic Grants
FAPESP's process: 17/25835-9 - Understanding images and deep learning models
Grantee: Nina Sumiko Tomita Hirata
Support Opportunities: Research Grants - Research Partnership for Technological Innovation - PITE