NLS: An accurate and yet easy-to-interpret prediction method

Author(s):
Coscrato, Victor; Inacio, Marco H. A.; Botari, Tiago; Izbicki, Rafael
Total number of authors: 4
Document type: Journal article
Source: NEURAL NETWORKS; v. 162, 14 pp., 2023-03-09.
Abstract

Over the last years, the predictive power of supervised machine learning (ML) has undergone impressive advances, reaching state-of-the-art and even super-human performance in some applications. However, the adoption of ML models in real-life applications is much slower than one would expect. One downside of ML-based technologies is users' lack of trust in the produced model, which stems from the black-box nature of these models. To leverage the application of ML models, the generated predictions should be easy to interpret while maintaining high accuracy. In this context, we develop the Neural Local Smoother (NLS), a neural network architecture that yields accurate predictions with easy-to-obtain explanations. The key idea of NLS is to add a smooth local linear layer to a standard network. Our experiments indicate that NLS attains predictive power comparable to state-of-the-art machine learning models while being easier to interpret.

(c) 2023 The Authors. Published by Elsevier Ltd. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
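The abstract's key idea, a smooth local linear output layer, can be sketched roughly as follows: a standard network maps an input x to coefficients theta_0(x), ..., theta_d(x), and the prediction is the local linear combination theta_0(x) + sum_i theta_i(x) * x_i, so the theta_i(x) serve directly as per-feature local explanations. This is a minimal illustrative sketch in plain NumPy, not the authors' implementation; the tiny one-hidden-layer network and its random weights are hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

d = 3   # input dimension (hypothetical)
h = 16  # hidden width (hypothetical)

# Hypothetical weights of a tiny one-hidden-layer network that maps
# an input x to the d+1 local linear coefficients theta(x).
W1 = 0.1 * rng.normal(size=(h, d))
b1 = np.zeros(h)
W2 = 0.1 * rng.normal(size=(d + 1, h))
b2 = np.zeros(d + 1)

def local_coefficients(x):
    """Network head: smooth coefficients theta_0(x), ..., theta_d(x)."""
    hidden = np.tanh(W1 @ x + b1)
    return W2 @ hidden + b2

def nls_predict(x):
    """Local linear prediction: theta_0(x) + sum_i theta_i(x) * x_i."""
    theta = local_coefficients(x)
    return theta[0] + theta[1:] @ x

x = np.array([1.0, -0.5, 2.0])
print(nls_predict(x))                 # the prediction
print(local_coefficients(x)[1:])      # per-feature local effects: the explanation
```

Because the coefficients vary smoothly with x, each prediction comes with a local linear model around that point, which is what makes the explanation easy to read off.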

FAPESP grant: 19/11321-9 - Neural networks in statistical inference problems
Grantee: Rafael Izbicki
Support type: Regular Research Grant
FAPESP grant: 17/06161-7 - Interpretability of deep networks
Grantee: Tiago Botari
Support type: Scholarships in Brazil - Post-Doctorate
FAPESP grant: 17/03363-8 - Interpretability and efficiency in hypothesis testing
Grantee: Rafael Izbicki
Support type: Regular Research Grant