External validation and interpretability of machine learning-based risk prediction for major adverse cardiovascular events

Author(s):
Shimizu, Gilson Yuuji ; Romao, Elen Almeida ; Cardeal da Costa, Jose Abrao ; Mazzoncini de Azevedo-Marques, Joao ; Scarpelini, Sandro ; Firmino Suzuki, Katia Mitiko ; Cesar, Hilton Vicente ; Azevedo-Marques, Paulo M.
Total Authors: 8
Document type: Scientific article
Source: 2024 IEEE 37TH INTERNATIONAL SYMPOSIUM ON COMPUTER-BASED MEDICAL SYSTEMS, CBMS 2024; v. N/A, 6 pp., 2024-01-01.
Abstract

Studies of cardiovascular disease risk prediction with machine learning algorithms often do not assess the models' ability to generalize to other populations, and few include an analysis of the interpretability of individual predictions. This manuscript addresses the development and the internal and external validation of predictive models for assessing the risk of major adverse cardiovascular events. Global and local interpretability analyses of the predictions were conducted to improve model reliability and to tailor preventive interventions. The models were trained and validated on a retrospective cohort using data from Hospital das Clinicas da Faculdade de Medicina de Ribeirao Preto, Brazil. Data from Beth Israel Deaconess Medical Center, USA, were used for external validation. Eight machine learning algorithms, namely Penalized Logistic Regression, Random Forest, XGBoost, Decision Tree, Support Vector Machine, k-Nearest Neighbors, Naive Bayes, and Multi-Layer Perceptron, were trained to predict the 5-year risk of major adverse cardiovascular events, and their predictive performance was evaluated in terms of accuracy, the receiver operating characteristic (ROC) curve, and the area under the ROC curve (AUC). The LIME and Shapley values methods were used to interpret individual predictions. Random Forest showed the best predictive performance in both internal validation (AUC = 0.87; Accuracy = 0.79) and external validation (AUC = 0.79; Accuracy = 0.71). Compared to LIME, Shapley values provided explanations more consistent with the exploratory analysis and with feature importance. Among the machine learning algorithms evaluated, Random Forest showed the best generalization ability, both internally and externally, and Shapley values were more informative than LIME for local interpretability, in line with our exploratory analysis and the global interpretation of the final model. Machine learning algorithms with good generalization, accompanied by interpretability analyses, are recommended for assessing individual risk of cardiovascular disease and developing personalized preventive actions.
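As a minimal sketch of the workflow described in the abstract (not the authors' code), the snippet below trains a Random Forest classifier on hypothetical tabular data, reports AUC and accuracy on a held-out split, and explains a single prediction with Shapley values via the shap package. All data, feature counts, labels, and hyperparameters are illustrative assumptions.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score, accuracy_score
from sklearn.model_selection import train_test_split
import shap  # assumed available; used for Shapley-value explanations

# Hypothetical tabular data: rows are patients, columns are clinical features.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))
y = (rng.random(1000) < 0.3).astype(int)  # synthetic label: 1 = MACE within 5 years

# Internal validation: hold out part of the same cohort.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0
)

model = RandomForestClassifier(n_estimators=300, random_state=0)
model.fit(X_train, y_train)

prob = model.predict_proba(X_test)[:, 1]
print("AUC:", roc_auc_score(y_test, prob))
print("Accuracy:", accuracy_score(y_test, (prob >= 0.5).astype(int)))
# External validation would repeat the scoring step on data from a different institution.

# Local interpretability: Shapley values for one patient's prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test[:1])
print("Per-feature Shapley contributions:", shap_values)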

FAPESP Process: 14/50889-7 - INCT 2014: Medicine Assisted by Scientific Computing (INCT-MACC)
Grantee: José Eduardo Krieger
Support type: Research Grants - Thematic Grants
FAPESP Process: 21/06137-4 - Predicting cardiovascular events using machine learning
Grantee: Paulo Mazzoncini de Azevedo Marques
Support type: Research Grants - Regular
FAPESP Process: 22/16683-9 - Federated learning for validation of machine learning models trained in different hospital networks
Grantee: Gilson Yuuji Shimizu
Support type: Scholarships in Brazil - Post-Doctoral
FAPESP Process: 23/01695-4 - Validation and improvement of machine learning models for prediction of cardiovascular events
Grantee: Hilton Vicente César
Support type: Scholarships in Brazil - Training Program - Technical Training