Explainable Artificial Intelligence Methods for Spiking Neural Networks

aut.thirdpc.contains: No
dc.contributor.advisor: Kuo, Matthew
dc.contributor.advisor: Allen, Nathan
dc.contributor.author: Jung, Jane
dc.date.accessioned: 2026-04-16T23:18:43Z
dc.date.available: 2026-04-16T23:18:43Z
dc.date.issued: 2026
dc.description.abstract: As the demand for energy-efficient Artificial Intelligence (AI) grows, Spiking Neural Networks (SNNs) have emerged as a leading neuromorphic alternative to traditional deep learning. However, the complex, non-linear temporal dynamics of SNNs often result in a black-box nature, hindering their adoption in high-stakes domains such as cybersecurity and clinical healthcare. This thesis proposes a novel framework, the SNN-based MLP (SNN-MLP), which utilises a non-spiking surrogate model to interpret the decision-making logic of a trained SNN. By mapping the high-dimensional activity of spiking ensembles to a differentiable architecture, the framework enables the application of post-hoc Explainable Artificial Intelligence (XAI) techniques, such as SHapley Additive exPlanations (SHAP), to provide transparent feature-level explanations. The framework is validated through two diverse case studies. The first case study evaluates network traffic for the detection of malicious activity: it demonstrates a successful translation of single-dimensional SNNs into Multi-Layer Perceptrons (MLPs), and the resulting SNN-MLP identifies malicious features with up to 87% accuracy while aligning in interpretability with a baseline MLP. The second case study applies the framework to personalised depression modelling, using more complex SNN architectures together with multimodal datasets. The results demonstrate that the SNN-MLP also functions as a high-fidelity surrogate for multi-dimensional SNNs, identifying clinically relevant triggers, primarily anxiety and dietary factors, that align with established benchmarks. While the study notes challenges regarding data scarcity and class imbalance in clinical settings, the consistent overlap between the SNN and its surrogate shows that spiking architectures can achieve competitive predictive performance without sacrificing interpretability. Thus, this thesis provides a foundation for mathematically grounded, energy-efficient, and explainable AI, offering a pathway towards the deployment of explainable SNN systems in safety-critical and security infrastructures.
dc.identifier.uri: http://hdl.handle.net/10292/20938
dc.language.iso: en
dc.publisher: Auckland University of Technology
dc.rights.accessrights: OpenAccess
dc.title: Explainable Artificial Intelligence Methods for Spiking Neural Networks
dc.type: Thesis
thesis.degree.grantor: Auckland University of Technology
thesis.degree.name: Master of Computer and Information Sciences

Files

Original bundle

Name: JungJ.pdf
Size: 3.51 MB
Format: Adobe Portable Document Format
Description: Thesis

License bundle

Name: license.txt
Size: 890 B
Description: Item-specific license agreed upon to submission
Collections