
Consistent and explainable natural language inference


Natural Language Processing (NLP) is a branch of artificial intelligence (AI) concerned with giving computers the ability to interpret and reason over human languages. Intrinsically multidisciplinary, it integrates diverse research areas such as linguistics, logic, computer science, and artificial intelligence. In a world largely guided by technology and increasingly supported by AI-based tools, NLP plays a fundamental role, underpinning technologies for machine translation (MT), question answering (QA), information retrieval, text generation, and recommendation systems. Driven by the advances of deep learning and data-driven approaches in related fields, such as computer vision and pattern recognition, NLP has experienced the advent and popularization of black-box techniques. While such approaches are often very effective, they are typically less interpretable. In this scenario, there is increasing interest in explanations and a growing understanding of the importance of explainability. This research project aims to investigate the construction of explanatory chains for natural language inference based on contextual rank analysis and consistency. This research direction represents an intersection of interests between groups at São Paulo State University (UNESP) and the University of Manchester. (AU)
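As an illustration of the task the project targets: natural language inference takes a premise–hypothesis pair and decides whether the premise entails the hypothesis, and an explanatory chain justifies that decision through intermediate steps. The sketch below shows only a toy structural consistency check on such a chain (each step must start where the previous one ended); the sentences, the `Step` type, and the check itself are illustrative assumptions, not the project's actual method.

```python
from dataclasses import dataclass

@dataclass
class Step:
    """One link in an explanatory chain: premise -> conclusion."""
    premise: str
    conclusion: str

def chain_is_consistent(steps, premise, hypothesis):
    """Toy structural check: the chain must start at the premise,
    end at the hypothesis, and have no gaps between steps.
    (Purely syntactic; real consistency would need semantics.)"""
    if not steps:
        return False
    if steps[0].premise != premise or steps[-1].conclusion != hypothesis:
        return False
    # Each step's conclusion must be the next step's premise.
    return all(a.conclusion == b.premise for a, b in zip(steps, steps[1:]))

chain = [
    Step("A dog is running in the park", "An animal is running in the park"),
    Step("An animal is running in the park", "An animal is outdoors"),
]
print(chain_is_consistent(chain,
                          "A dog is running in the park",
                          "An animal is outdoors"))  # True
```

A chain that skips a step, or whose links do not connect, fails the check, which is the kind of property an explanation-aware inference system would want to enforce.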

Articles published in Agência FAPESP Newsletter about the research grant:
Articles published in other media outlets (0 total):
