Advancing Explainable AI: A Global Context-Aware Evidence Retrieval Framework for Interpretable Fact Verification
| aut.embargo | No | |
| dc.contributor.advisor | Nand, Parma | |
| dc.contributor.advisor | Yan, Wei Qi | |
| dc.contributor.advisor | Allende-Cid, Héctor | |
| dc.contributor.advisor | Vamathevan, Thamilini | |
| dc.contributor.author | Vallayil Vijayalekshmi, Manju | |
| dc.date.accessioned | 2025-09-05T01:49:17Z | |
| dc.date.available | 2025-09-05T01:49:17Z | |
| dc.date.issued | 2025 | |
| dc.description.abstract | The increasing reliance on Artificial Intelligence (AI) for decision-making across various domains necessitates explainable AI (XAI) systems that enhance transparency and interpretability. However, despite growing research interest in XAI, ensuring transparency in Automated Fact Verification (AFV) remains a significant challenge, largely due to the architectural complexity of these systems and their reliance on local post-hoc explanations, which focus on claim-level context or annotated evidence while often overlooking broader thematic connections. Meanwhile, retrieval-based approaches such as Retrieval-Augmented Generation (RAG) integrate external information but may retrieve loosely related or overly broad evidence, undermining the coherence and factual accuracy of the generated explanations. This thesis addresses these gaps by introducing CARAG, a Context-Aware Retrieval and Explanation Generation framework that integrates both local and global perspectives to improve retrieval transparency and explanation coherence. The research unfolds through four key manuscripts: (1) a comprehensive literature review that identifies major explainability gaps in AFV, particularly the lack of explanation-focused datasets and the overemphasis on local transparency; (2) an exploration of thematic context discovery as a means to uncover a claim’s broader, non-local context within the fact-checking corpus, laying the groundwork for context-aware evidence retrieval; (3) the introduction of CARAG, which builds on thematic context discovery to enhance evidence selection and explanation generation in AFV pipelines, alongside the development of FactVer, a dataset designed to support explainability-driven fact-verification research; and (4) CARAG-u, an unsupervised extension that eliminates the dependency on predefined thematic labels, making the CARAG framework more adaptable across diverse verification settings.
Empirical evaluations demonstrate that integrating thematic context into retrieval enhances AFV explainability, bridging the gap between claim-specific justifications and broader knowledge patterns within fact-checking corpora. These findings contribute to the development of scalable and interpretable AFV models, reinforcing trust and transparency in AI-driven fact verification. | |
| dc.identifier.uri | http://hdl.handle.net/10292/19764 | |
| dc.language.iso | en | |
| dc.publisher | Auckland University of Technology | |
| dc.rights.accessrights | OpenAccess | |
| dc.title | Advancing Explainable AI: A Global Context-Aware Evidence Retrieval Framework for Interpretable Fact Verification | |
| dc.type | Thesis | |
| thesis.degree.grantor | Auckland University of Technology | |
| thesis.degree.name | Doctor of Philosophy |
