Visualizing Learning Space in Neural Network Hidden Layers

Author(s):
Cantareira, Gabriel D.; Paulovich, Fernando V.; Etemad, Elham; Kerren, A.; Hurter, C.; Braz, J.
Total Authors: 6
Document type: Journal article
Source: VISAPP: PROCEEDINGS OF THE 15TH INTERNATIONAL JOINT CONFERENCE ON COMPUTER VISION, IMAGING AND COMPUTER GRAPHICS THEORY AND APPLICATIONS, VOL 4: VISAPP; 12 pp., 2020.
Abstract

Analyzing and understanding how abstract representations of data are formed inside deep neural networks is a complex task. Among the methods developed to tackle this problem, multidimensional projection techniques have shown positive results in displaying the relationships between data instances, network layers, or class features. However, these techniques are often static: they lack a way to keep a stable space between observations and to properly convey flow within that space. In this paper, we employ different dimensionality reduction techniques to create a visual space in which the flow of information inside hidden layers becomes visible. We discuss the application of each tool used and provide experiments that show how they can be combined to reveal new information about neural network optimization processes. (AU)
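The abstract does not specify an implementation, but the core idea (projecting the activations of several hidden layers into one shared low-dimensional space so the same instance can be traced from layer to layer) can be illustrated with a short sketch. The toy model, layer names, use of PyTorch forward hooks, and the choice of PCA as the projection are assumptions for illustration only, not the authors' method.

# Minimal sketch: project activations of several hidden layers into one
# shared 2D space so each instance's "flow" across layers can be drawn.
import numpy as np
import torch
import torch.nn as nn
from sklearn.decomposition import PCA

# Toy MLP with two 64-unit hidden layers (stand-in for any trained network).
model = nn.Sequential(
    nn.Linear(20, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 3),
)

x = torch.randn(500, 20)  # placeholder data instances

# Collect per-layer activations with forward hooks.
activations = {}
def save(name):
    def hook(module, inp, out):
        activations[name] = out.detach().numpy()
    return hook

model[1].register_forward_hook(save("hidden_1"))
model[3].register_forward_hook(save("hidden_2"))
model(x)

# Fit a single projection on the concatenated activations so that all layers
# share one 2D coordinate system; fitting a separate projection per layer
# would give each layer its own space and obscure the flow between them.
stacked = np.vstack(list(activations.values()))
proj = PCA(n_components=2).fit(stacked)

layouts = {name: proj.transform(act) for name, act in activations.items()}
# layouts["hidden_1"][i] and layouts["hidden_2"][i] are comparable points for
# the same instance i; a line segment between them depicts its flow.

Fitting one projection on the concatenated activations, rather than one per layer, is what keeps the visual space stable across observations; swapping PCA for other dimensionality reduction techniques, as the paper combines several, is a natural variation of this sketch.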

FAPESP's process: 17/08817-7 - Distance Learning and Inverse Mapping of Visualizations Applied to Text Mining
Grantee: Gabriel Dias Cantareira
Support Opportunities: Scholarships abroad - Research Internship - Doctorate
FAPESP's process: 15/08118-6 - Inverse Mapping: Employing Interactive Manipulation to Transform Computational Models
Grantee: Gabriel Dias Cantareira
Support Opportunities: Scholarships in Brazil - Doctorate