
EXPLAINABLE FACT-CHECKING THROUGH QUESTION ANSWERING

Author(s):
Yang, Jing; Vega-Oliveros, Didier; Seibt, Tais; Rocha, Anderson
Total Authors: 4
Document type: Journal article
Source: 2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP); 5 pp.; 2022-01-01.
Abstract

Misleading or false information has been creating chaos in some places around the world. To mitigate this issue, many researchers have proposed automated fact-checking methods to fight the spread of fake news. However, most methods cannot explain the reasoning behind their decisions, failing to build trust between machines and the humans using such technology. Trust is essential for fact-checking to be applied in the real world. Here, we address fact-checking explainability through question answering. In particular, we propose generating questions and answers from claims and answering the same questions from evidence. We also propose an answer comparison model with an attention mechanism attached to each question. Leveraging question answering as a proxy, we break down automated fact-checking into several steps; this separation aids models' explainability, as it allows for more detailed analysis of their decision-making processes. Experimental results show that the proposed model can achieve state-of-the-art performance while providing reasonable explainable capabilities.
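The pipeline the abstract describes (answer the same questions from claim and evidence, then compare the answers with per-question attention) can be illustrated with a minimal sketch. This is not the authors' implementation: the token-overlap similarity stands in for their learned answer comparison model, and the softmax over hand-supplied relevance scores stands in for the learned attention; all function names are hypothetical.

```python
import math

def answer_similarity(ans_claim: str, ans_evidence: str) -> float:
    """Jaccard token overlap as a stand-in for a learned answer-comparison model."""
    a = set(ans_claim.lower().split())
    b = set(ans_evidence.lower().split())
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

def attention_weights(relevance_scores: list[float]) -> list[float]:
    """Softmax over per-question relevance scores (a stand-in for learned attention)."""
    exps = [math.exp(s) for s in relevance_scores]
    total = sum(exps)
    return [e / total for e in exps]

def verdict_score(qa_pairs: list[tuple[str, str, str]],
                  relevance_scores: list[float]) -> float:
    """Aggregate per-question answer agreement into a single claim-support score.

    qa_pairs: (question, answer_from_claim, answer_from_evidence) triples.
    Returns a value in [0, 1]; each per-question similarity is inspectable,
    which is the explainability benefit of breaking verification into steps.
    """
    sims = [answer_similarity(c, e) for _, c, e in qa_pairs]
    weights = attention_weights(relevance_scores)
    return sum(w * s for w, s in zip(weights, sims))
```

Because the aggregate is a weighted sum of per-question agreements, an analyst can see which question drove the verdict, mirroring the step-wise transparency the paper argues for.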

FAPESP's process: 19/26283-5 - Learning visual clues of the passage of time
Grantee: Didier Augusto Vega Oliveros
Support Opportunities: Scholarships in Brazil - Post-Doctoral
FAPESP's process: 17/12646-3 - Déjà vu: feature-space-time coherence from heterogeneous data for media integrity analytics and interpretation of events
Grantee: Anderson de Rezende Rocha
Support Opportunities: Research Projects - Thematic Grants
FAPESP's process: 19/04053-8 - Event reconstruction from heterogeneous visual data
Grantee: Jing Yang
Support Opportunities: Scholarships in Brazil - Doctorate