(Reference retrieved automatically from Web of Science, based on the FAPESP grant number cited by the authors in the publication.)

On the evaluation of unsupervised outlier detection: measures, datasets, and an empirical study

Author(s):
Campos, Guilherme O. [1] ; Zimek, Arthur [2] ; Sander, Jörg [3] ; Campello, Ricardo J. G. B. [1] ; Micenková, Barbora [4] ; Schubert, Erich [2] ; Assent, Ira [4] ; Houle, Michael E. [5]
Total Authors: 8
Affiliation:
[1] Univ Sao Paulo, SCC ICMC USP, CP 668, BR-13566590 Sao Carlos, SP - Brazil
[2] Univ Munich, D-80538 Munich - Germany
[3] Univ Alberta, Dept Comp Sci, Edmonton, AB T6G 2E8 - Canada
[4] Aarhus Univ, Dept Comp Sci, Aabogade 34, DK-8200 Aarhus - Denmark
[5] Natl Inst Informat, Chiyoda Ku, 2-1-2 Hitotsubashi, Tokyo 1018430 - Japan
Total Affiliations: 5
Document type: Journal article
Source: DATA MINING AND KNOWLEDGE DISCOVERY; v. 30, n. 4, p. 891-927, JUL 2016.
Web of Science Citations: 71
Abstract

The evaluation of unsupervised outlier detection algorithms is a constant challenge in data mining research. Little is known regarding the strengths and weaknesses of different standard outlier detection models, and the impact of parameter choices for these algorithms. The scarcity of appropriate benchmark datasets with ground truth annotation is a significant impediment to the evaluation of outlier methods. Even when labeled datasets are available, their suitability for the outlier detection task is typically unknown. Furthermore, the biases of commonly used evaluation measures are not fully understood. It is thus difficult to ascertain the extent to which newly proposed outlier detection methods improve over established methods. In this paper, we perform an extensive experimental study on the performance of a representative set of standard k-nearest-neighbor-based methods for unsupervised outlier detection, across a wide variety of datasets prepared for this purpose. Based on the overall performance of the outlier detection methods, we provide a characterization of the datasets themselves, and discuss their suitability as outlier detection benchmark sets. We also examine the most commonly used measures for comparing the performance of different methods, and suggest adaptations that are more suitable for the evaluation of outlier detection results. (AU)
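As a hypothetical illustration (not code from the paper itself), the classic kNN outlier model and two of the evaluation measures the abstract refers to, ROC AUC and precision at n, could be sketched as follows. All function names and the toy dataset are assumptions introduced here for demonstration only.

```python
import math

def knn_outlier_scores(points, k):
    """Score each point by the distance to its k-th nearest neighbor
    (the classic kNN outlier model; a larger score means more outlying)."""
    scores = []
    for i, p in enumerate(points):
        dists = sorted(math.dist(p, q) for j, q in enumerate(points) if j != i)
        scores.append(dists[k - 1])
    return scores

def roc_auc(labels, scores):
    """ROC AUC via the rank-sum (Mann-Whitney) formulation; ties ignored."""
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    ranks = {i: r + 1 for r, i in enumerate(order)}  # 1-based ranks
    pos = [i for i, y in enumerate(labels) if y == 1]
    neg = [i for i, y in enumerate(labels) if y == 0]
    rank_sum = sum(ranks[i] for i in pos)
    return (rank_sum - len(pos) * (len(pos) + 1) / 2) / (len(pos) * len(neg))

def precision_at_n(labels, scores, n):
    """Fraction of true outliers among the n top-scored points."""
    top = sorted(range(len(scores)), key=lambda i: -scores[i])[:n]
    return sum(labels[i] for i in top) / n

# Toy data: a tight cluster of inliers plus one obvious outlier (label 1).
points = [(0, 0), (0, 1), (1, 0), (1, 1), (0.5, 0.5), (8, 8)]
labels = [0, 0, 0, 0, 0, 1]
scores = knn_outlier_scores(points, k=2)
print(roc_auc(labels, scores))          # 1.0: the outlier outranks all inliers
print(precision_at_n(labels, scores, 1))  # 1.0: the top-scored point is the outlier
```

On a perfect ranking like this one both measures reach 1.0; the paper's point is that on realistic benchmarks such measures can behave quite differently, motivating the adapted measures it proposes.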

FAPESP's process: 13/18698-4 - Methods and algorithms in unsupervised and semi-supervised machine learning
Grantee:Ricardo José Gabrielli Barreto Campello
Support Opportunities: Regular Research Grants