NLS: An accurate and yet easy-to-interpret prediction method

Author(s):
Coscrato, Victor ; Inacio, Marco H. A. ; Botari, Tiago ; Izbicki, Rafael
Total Authors: 4
Document type: Journal article
Source: NEURAL NETWORKS; v. 162, 14 pp., 2023-03-09.
Abstract

Over the last years, the predictive power of supervised machine learning (ML) has undergone impressive advances, achieving state-of-the-art and even super-human performance in some applications. However, ML models are adopted in real-life applications much more slowly than one would expect. One downside of ML-based solutions is the lack of user trust in the resulting model, which stems from the black-box nature of these models. To broaden the application of ML models, the generated predictions should be easy to interpret while maintaining high accuracy. In this context, we develop the Neural Local Smoother (NLS), a neural network architecture that yields accurate predictions with easy-to-obtain explanations. The key idea of NLS is to add a smooth local linear layer to a standard network. Our experiments indicate that NLS achieves predictive power comparable to state-of-the-art machine learning models while being easier to interpret. (c) 2023 The Authors. Published by Elsevier Ltd. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/). (AU)
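The abstract's key idea — a network whose output is a local linear model — can be sketched as follows. This is a minimal, hypothetical illustration, not the authors' implementation: a small network maps an input x to coefficients theta(x), and the prediction is the local linear combination theta_0(x) + <theta_{1:d}(x), x>. The architecture sizes and the random, untrained weights below are assumptions for demonstration only.

```python
import numpy as np

rng = np.random.default_rng(0)

d, hidden = 3, 8                       # input dimension and hidden width (assumed)
W1 = rng.normal(size=(hidden, d)) * 0.1
b1 = np.zeros(hidden)
W2 = rng.normal(size=(d + 1, hidden)) * 0.1
b2 = np.zeros(d + 1)

def local_coefficients(x):
    """Network output: a local intercept and d local slopes for input x."""
    h = np.tanh(W1 @ x + b1)           # one hidden layer, tanh activation
    return W2 @ h + b2                 # theta(x) in R^{d+1}

def nls_predict(x):
    """Local linear prediction: theta_0(x) + sum_j theta_j(x) * x_j."""
    theta = local_coefficients(x)
    return theta[0] + theta[1:] @ x

x = rng.normal(size=d)
y_hat = nls_predict(x)
```

Because the final layer is a linear model in x, the slopes theta[1:] act as a per-example explanation of the prediction, which is what makes the architecture easy to interpret.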

FAPESP's process: 19/11321-9 - Neural networks in statistical inference problems
Grantee: Rafael Izbicki
Support Opportunities: Regular Research Grants
FAPESP's process: 17/06161-7 - Interpretability of deep networks
Grantee: Tiago Botari
Support Opportunities: Scholarships in Brazil - Post-Doctoral
FAPESP's process: 17/03363-8 - Interpretability and efficiency in hypothesis tests
Grantee: Rafael Izbicki
Support Opportunities: Regular Research Grants