Generating structural alerts from toxicology datasets using the local interpretable model-agnostic explanations method

Author(s):
Nascimento, Cayque Monteiro Castro; Moura, Paloma Guimaraes; Pimentel, Andre Silva
Total Authors: 3
Document type: Journal article
Source: DIGITAL DISCOVERY; v. 2, n. 5, 15 pp., 2023-10-09.
Abstract

The local interpretable model-agnostic explanations (LIME) method was used to interpret a machine learning model of toxicology generated by a neural network multitask classifier. The model was trained and validated on the Tox21 dataset and tested against the ClinTox and SIDER datasets, which comprise, respectively, drugs approved by the Food and Drug Administration together with drugs that failed clinical trials for toxicity reasons, and marketed drugs with reported adverse reactions. The stability of the explanations is demonstrated by the reasonable reproducibility of the sampling process, which yields very similar and trustworthy explanations. The explanation model was built to produce structural alerts with more than six heavy atoms, which serve as toxicity alerts for researchers in academia, regulatory agencies, and industry working in fields such as organic synthesis, pharmaceuticals, and toxicology. (AU)
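As a rough illustration of the workflow described in the abstract, the sketch below applies LIME to a fingerprint-based toxicity classifier. It is not the authors' code: it assumes RDKit Morgan fingerprints as features, a single-task scikit-learn MLP in place of the paper's multitask neural network, and toy SMILES with placeholder labels instead of the real Tox21 data. The fixed random_state mirrors the abstract's point about reproducible sampling giving stable explanations.

import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem
from sklearn.neural_network import MLPClassifier
from lime.lime_tabular import LimeTabularExplainer

def featurize(smiles, n_bits=1024):
    """Morgan (ECFP4-like) bit-vector fingerprints for a list of SMILES."""
    fps = []
    for smi in smiles:
        mol = Chem.MolFromSmiles(smi)
        fp = AllChem.GetMorganFingerprintAsBitVect(mol, radius=2, nBits=n_bits)
        fps.append(np.array(fp))
    return np.array(fps, dtype=float)

# Toy molecules and labels for illustration only (not Tox21 data).
train_smiles = ["CCO", "c1ccccc1", "CC(=O)O", "c1ccc2ccccc2c1", "CCN(CC)CC", "ClCCl"]
train_labels = [0, 1, 0, 1, 0, 1]

X_train = featurize(train_smiles)
model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
model.fit(X_train, train_labels)

# LIME perturbs the feature vector around one molecule and fits a local linear
# surrogate; the top-weighted fingerprint bits can then be mapped back to the
# substructures they encode, which is how candidate structural alerts emerge.
explainer = LimeTabularExplainer(
    X_train,
    mode="classification",
    feature_names=[f"bit_{i}" for i in range(X_train.shape[1])],
    class_names=["non-toxic", "toxic"],
    discretize_continuous=False,
    random_state=0,  # fixed seed so repeated explanations stay consistent
)

query = featurize(["c1ccc2cc3ccccc3cc2c1"])[0]  # anthracene, as an example query
exp = explainer.explain_instance(query, model.predict_proba, num_features=10)
for feature, weight in exp.as_list():
    print(feature, weight)

In the paper's setting the surrogate weights would be aggregated across the multitask outputs and the highlighted bits traced back to fragments with more than six heavy atoms; the hypothetical single-endpoint setup above only shows the mechanics of obtaining a local explanation.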

FAPESP's process: 14/50983-3 - INCT 2014: complex fluids
Grantee: Antonio Martins Figueiredo Neto
Support Opportunities: Research Projects - Thematic Grants