Trusting My Predictions: On the Value of Instance-Level Analysis

Author(s):
Lorena, Ana C. ; Paiva, Pedro Y. A. ; Prudencio, Ricardo B. C.
Total Authors: 3
Document type: Journal article
Source: ACM COMPUTING SURVEYS; v. 56, n. 7, 28 pp., 2024-07-01.
Abstract

Machine Learning solutions have spread across many domains, including critical applications. The development of such models usually relies on a dataset of labeled data, which is split into training and test sets so that the accuracy of the models in replicating the test labels can be assessed. This process is often iterated in a cross-validation procedure to obtain average performance estimates. But is the average predictive performance on test sets enough for assessing the trustworthiness of a Machine Learning model? This paper discusses the importance of knowing which individual observations of a dataset are more challenging than others, and how this characteristic can be measured and used to improve classification performance and trustworthiness. A set of strategies for measuring the hardness level of the instances of a dataset is surveyed and a Python package containing their implementation is provided. (AU)
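As a rough illustration of the idea (not the package provided with the paper), a simple instance-level hardness proxy can be obtained by recording how often each example is misclassified across repeated cross-validation runs. The sketch below uses scikit-learn; the dataset, classifier, and number of repetitions are arbitrary choices for demonstration.

import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_predict

# Illustrative dataset and model; any classification task and estimator would do.
X, y = load_breast_cancer(return_X_y=True)

n_repeats = 10
errors = np.zeros(len(y))
for seed in range(n_repeats):
    # Each repetition uses a different fold assignment.
    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=seed)
    y_pred = cross_val_predict(RandomForestClassifier(random_state=seed), X, y, cv=cv)
    errors += (y_pred != y)

# Fraction of runs in which each instance was misclassified:
# 0 = always predicted correctly, 1 = always misclassified (a "hard" instance).
hardness = errors / n_repeats

# The hardest instances deserve closer scrutiny before trusting the model's predictions.
hardest = np.argsort(hardness)[::-1][:5]
print("Hardest instance indices:", hardest)
print("Hardness scores:", hardness[hardest])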

FAPESP's process: 21/06870-3 - Beyond algorithm selection: meta-learning for data and algorithm analysis and understanding
Grantee: Ana Carolina Lorena
Support Opportunities: Research Grants - Young Investigators Grants - Phase 2