Take It Easy: Label-Adaptive Self-Rationalization for Fact Verification and Explanation Generation

Author(s): Yang, Jing; Rocha, Anderson
Total number of authors: 2
Document type: Scientific article
Source: 2024 IEEE INTERNATIONAL WORKSHOP ON INFORMATION FORENSICS AND SECURITY (WIFS 2024); 6 pp., 2024.
Abstract

Fact verification is a crucial process in journalism for combating disinformation. Computational methods to aid journalists in this task often require adapting a model to specific domains and generating explanations. However, most automated fact-checking methods rely on three-class datasets, which do not accurately reflect real-world misinformation. Moreover, fact-checking explanations are often generated by summarizing the evidence, failing to address the relationship between the claim and the evidence. To address these issues, we extend the self-rationalization method, typically used in natural language inference (NLI) tasks, to fact verification. Self-rationalization refers to a model's ability to generate explanations or justifications for its responses, which is essential for reliable fact-checking. We propose a label-adaptive learning approach: first, we fine-tune a model to learn veracity prediction with annotated labels (the step-1 model); then, we fine-tune the step-1 model again to learn self-rationalization, using the same data plus additional annotated explanations. This approach allows the model to adapt to a new domain more effectively than directly fine-tuning for end-to-end self-rationalization. Our results show that the label-adaptive approach improves veracity prediction by more than ten percentage points (Macro F1) on both the PubHealth and AVeriTec datasets, outperforming GPT-4. Furthermore, to address the high cost of explanation annotation, we generated 64 synthetic explanations with three large language models (GPT-4-turbo, GPT-3.5-turbo, and Llama-3-8B) and used them to few-shot fine-tune our step-1 model. The few-shot synthetic-explanation fine-tuned model performed comparably to the fully fine-tuned self-rationalization model, demonstrating the potential of low-budget learning with synthetic data. Our label-adaptive self-rationalization approach presents a promising direction for future research on real-world explainable fact-checking with different labeling schemes.
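The two-step recipe the abstract describes can be sketched in a few lines. The sketch below is a minimal illustration, assuming a T5-style text-to-text backbone, a toy prompt format, and single-example gradient steps; the model choice ("t5-small"), the prompts, and the hyperparameters are assumptions for illustration, not details taken from the paper.

```python
# Minimal sketch of label-adaptive two-step fine-tuning.
# Backbone, prompts, and data here are illustrative assumptions.
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("t5-small")  # assumed backbone
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")
model.train()
opt = torch.optim.AdamW(model.parameters(), lr=3e-5)

def step(source: str, target: str) -> None:
    """One gradient step on a (source, target) text-to-text pair."""
    batch = tok(source, return_tensors="pt", truncation=True)
    labels = tok(target, return_tensors="pt", truncation=True).input_ids
    loss = model(**batch, labels=labels).loss
    loss.backward()
    opt.step()
    opt.zero_grad()

claim = "Vitamin C cures the common cold."
evidence = "Trials show no consistent curative effect of vitamin C."

# Step 1: adapt to the dataset's label scheme (veracity prediction only).
step(f"verify claim: {claim} evidence: {evidence}", "false")

# Step 2: resume from the step-1 weights and train on the same data,
# now targeting label plus annotated explanation, so the model learns
# to self-rationalize within the label space it has already adapted to.
step(f"explain claim: {claim} evidence: {evidence}",
     "false; clinical trials do not support a curative effect.")
```

In practice, step 2 would start from a saved step-1 checkpoint and iterate over the full dataset; the sketch only shows the ordering that matters here: labels first, then labels plus explanations.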

FAPESP Grant: 23/12865-8 - Horus: artificial intelligence techniques for the detection and analysis of synthetic realities
Grantee: Anderson de Rezende Rocha
Support type: Research Grants - Thematic Project
FAPESP Grant: 19/04053-8 - Event reconstruction from heterogeneous visual data
Grantee: Jing Yang
Support type: Scholarships in Brazil - Doctorate