Authors: Kuo, Matthew; Allen, Nathan; Jung, Jane
Date: 2026-04-16
Year: 2026
URI: http://hdl.handle.net/10292/20938

As the demand for energy-efficient Artificial Intelligence (AI) grows, Spiking Neural Networks (SNNs) have emerged as a leading neuromorphic alternative to traditional deep learning. However, the complex, non-linear temporal dynamics of SNNs often result in a black-box nature, hindering their adoption in high-stakes domains such as cybersecurity and clinical healthcare. This thesis proposes a novel framework, the SNN-based MLP (SNN-MLP), which utilises a non-spiking surrogate model to interpret the decision-making logic of a trained SNN. By mapping the high-dimensional activity of spiking ensembles to a differentiable architecture, the framework enables the application of post-hoc Explainable Artificial Intelligence (XAI) techniques, such as SHapley Additive exPlanations (SHAP), to provide transparent feature-level explanations. The framework is validated through two diverse case studies. The first case study applies the framework to the detection of malicious network traffic: it demonstrates a successful translation of single-dimensional SNNs into Multi-Layer Perceptrons (MLPs), and the resultant SNN-MLP identifies malicious features with up to 87% accuracy while matching the interpretability of a baseline MLP. The second case study applies the framework to personalised depression modelling, using more complex SNN architectures and multimodal datasets. The results demonstrate that the SNN-MLP also functions as a high-fidelity surrogate for multi-dimensional SNNs, identifying clinically relevant triggers (primarily anxiety and dietary factors) that align with established benchmarks. While the study notes challenges regarding data scarcity and class imbalance in clinical settings, the consistent overlap between the SNN and its surrogate shows that spiking architectures can achieve competitive predictive performance without sacrificing interpretability.
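The core idea of the framework, that a differentiable surrogate makes Shapley-style feature attribution tractable for a spiking model, can be illustrated with a toy sketch. This is not the thesis's implementation: the `surrogate` model, feature values, and baseline below are all invented for illustration, and the exact coalition enumeration shown here is what SHAP approximates efficiently in practice.

```python
from itertools import combinations
from math import factorial

def shapley_values(predict, x, baseline):
    """Exact Shapley values for predict() at point x.
    'Absent' features are replaced by their baseline values."""
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in combinations(others, k):
                # Coalition S: these features take their actual values.
                z = list(baseline)
                for j in S:
                    z[j] = x[j]
                without_i = predict(z)
                z[i] = x[i]
                with_i = predict(z)
                # Standard Shapley weight for a coalition of size k.
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[i] += w * (with_i - without_i)
    return phi

# Toy linear surrogate standing in for a trained SNN-MLP (illustrative only).
surrogate = lambda z: 2.0 * z[0] + 1.0 * z[1] - 3.0 * z[2]
phi = shapley_values(surrogate, x=[1.0, 1.0, 1.0], baseline=[0.0, 0.0, 0.0])
```

For a linear surrogate the exact values recover coefficient × (feature − baseline), here `[2.0, 1.0, -3.0]`, so the attribution can be checked by hand; on a real trained surrogate with many features, exact enumeration is exponential, which is why practical pipelines use the SHAP library's approximations instead.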
Thus, this thesis provides a foundation for mathematically grounded, energy-efficient, and explainable AI, offering a pathway towards the deployment of explainable SNN systems in safety-critical and security infrastructures.

Language: en
Title: Explainable Artificial Intelligence Methods for Spiking Neural Networks
Type: Thesis
Access: OpenAccess