
Advancing Explainable AI: A Global Context-Aware Evidence Retrieval Framework for Interpretable Fact Verification

aut.embargo: No
dc.contributor.advisor: Nand, Parma
dc.contributor.advisor: Yan, Wei Qi
dc.contributor.advisor: Allende-Cid, Héctor
dc.contributor.advisor: Vamathevan, Thamilini
dc.contributor.author: Vallayil Vijayalekshmi, Manju
dc.date.accessioned: 2025-09-05T01:49:17Z
dc.date.available: 2025-09-05T01:49:17Z
dc.date.issued: 2025
dc.description.abstract: The increasing reliance on Artificial Intelligence (AI) for decision-making across various domains necessitates the development of explainable AI (XAI) systems that enhance transparency and interpretability. However, despite growing research interest in XAI, ensuring transparency in Automated Fact Verification (AFV) remains a significant challenge, largely due to the architectural complexity of these systems and their reliance on local post-hoc explanations, which focus on claim-level context or annotated evidence while often overlooking broader thematic connections. Meanwhile, retrieval-based approaches such as Retrieval-Augmented Generation (RAG) integrate external information but may retrieve loosely related or overly broad evidence, affecting the coherence and factual accuracy of the generated explanations. This thesis addresses this gap by introducing CARAG, a Context-Aware Retrieval and Explanation Generation framework that integrates both local and global perspectives to improve retrieval transparency and explanation coherence. The research unfolds through four key manuscripts: (1) a comprehensive literature review that identifies major explainability gaps in AFV, particularly the lack of explanation-focused datasets and the overemphasis on local transparency; (2) an exploration of thematic context discovery as a means to uncover a claim's broader, non-local context within the fact-checking corpus, laying the groundwork for context-aware evidence retrieval; (3) the introduction of CARAG, which builds on thematic context discovery to enhance evidence selection and explanation generation in AFV pipelines, alongside the development of FactVer, a dataset designed to support explainability-driven fact verification research; and (4) CARAG-u, an unsupervised extension that eliminates dependency on predefined thematic labels, making the CARAG framework more adaptable across diverse verification settings.
Empirical evaluations demonstrate that integrating thematic context into retrieval enhances AFV explainability, bridging the gap between claim-specific justifications and broader knowledge patterns within fact-checking corpora. The findings contribute to the development of scalable and interpretable AFV models, reinforcing trust and transparency in AI-driven fact verification.
dc.identifier.uri: http://hdl.handle.net/10292/19764
dc.language.iso: en
dc.publisher: Auckland University of Technology
dc.rights.accessrights: OpenAccess
dc.title: Advancing Explainable AI: A Global Context-Aware Evidence Retrieval Framework for Interpretable Fact Verification
dc.type: Thesis
thesis.degree.grantor: Auckland University of Technology
thesis.degree.name: Doctor of Philosophy

Files

Original bundle

Name: VijayalekshmiM.pdf
Size: 15.07 MB
Format: Adobe Portable Document Format
Description: Thesis

License bundle

Name: license.txt
Size: 859 B
Description: Item-specific license agreed upon to submission
