Temporal Sequence Learning in Spiking Neural Networks

Date
2011
Authors
Mandurah, Belal
Supervisor
Mohemmed, Ammar
Item type
Dissertation
Degree name
Master of Computer and Information Sciences
Publisher
Auckland University of Technology
Abstract

Spiking Neural Networks (SNN) are the third generation of artificial neural networks (ANN). Like the brain's neurons, they use spikes (pulses) to propagate information. Spike sequence learning has many applications, for example in speech recognition and motor control. One of the main issues in sequence generation is learning. There are two main types of learning: unsupervised learning and supervised learning. Supervised learning, such as back propagation (BP), uses a teacher signal to tune the connection weights so that the network produces a desired output for a specific input.
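For readers unfamiliar with spiking neurons, the sketch below shows a leaky integrate-and-fire neuron, the kind of model named in the keywords of this thesis. It is a minimal illustrative example: the function name, parameter values, and constant input current are assumptions chosen for demonstration, not taken from the thesis.

```python
import numpy as np

def lif_neuron(input_current, dt=1.0, tau_m=10.0, v_rest=0.0,
               v_reset=0.0, v_thresh=1.0):
    """Leaky integrate-and-fire neuron: return the spike times it emits.

    input_current: injected current per time step (arbitrary units).
    All parameter values here are illustrative, not taken from the thesis.
    """
    v = v_rest
    spike_times = []
    for step, i_t in enumerate(input_current):
        # Leaky integration: the membrane potential decays toward rest
        # and is driven by the input current.
        v += (dt / tau_m) * (-(v - v_rest) + i_t)
        if v >= v_thresh:              # threshold crossing -> emit a spike
            spike_times.append(step * dt)
            v = v_reset                # reset after firing
    return spike_times

# A constant supra-threshold current yields a regular output spike train.
print(lif_neuron(np.full(100, 1.5)))
```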

In this work, two supervised sequence learning schemes will be investigated. The first scheme uses particle swarm optimization (PSO); here the SNN consists of multiple layers of neurons connected by dynamic synapses to increase its computational power. Because of PSO's limited scalability, a second algorithm, SPAN, which can handle spatio-temporal data, will also be investigated. SPAN uses a simpler architecture: the network consists of a single neuron with multiple synapses. Although SPAN originally uses static synapses, they will be replaced with dynamic synapses to examine the impact, if any, on the network and on learning performance. The two methods will be evaluated using different configurations of the dynamic-synapse parameters and different input spike trains, and SPAN will additionally be evaluated with its synapses replaced by dynamic ones. The main research question is how dynamic synapses affect the learning and performance of the two learning schemes. The PSO-based sequence learning algorithm proved more complicated, since several parameters per synapse must be optimized, whereas SPAN tunes only one parameter per synapse. Learning multiple inputs using SPAN with dynamic synapses proved faster than using static synapses; nonetheless, memorizing the sequences using SPAN with dynamic synapses did not show any improvement. SPAN was shown to successfully learn and classify multiple spike trains with a simpler model and by fine-tuning one parameter per synapse. This is a very interesting area to work in, and further work can be done to improve each system.
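The dynamic synapses at the centre of the research question are commonly modelled with short-term plasticity dynamics. The sketch below follows a Tsodyks-Markram style formulation as one plausible example; the exact model, parameter values, and the function name dynamic_synapse are assumptions and may differ from the formulation used in the thesis.

```python
import numpy as np

def dynamic_synapse(spike_times, t_end, dt=0.1, U=0.5,
                    tau_rec=100.0, tau_facil=50.0, A=1.0):
    """Short-term plasticity (dynamic) synapse in the Tsodyks-Markram style.

    Returns the postsynaptic response amplitude for each presynaptic spike.
    The model choice and parameter values are illustrative assumptions; the
    thesis may use a different dynamic-synapse formulation.
    """
    spike_steps = set((np.asarray(spike_times) / dt).round().astype(int).tolist())
    x, u = 1.0, U                      # x: available resources, u: utilisation
    responses = []
    for step in range(int(t_end / dt)):
        # Between spikes, resources recover and utilisation decays to baseline.
        x += dt * (1.0 - x) / tau_rec
        u += dt * (U - u) / tau_facil
        if step in spike_steps:
            responses.append(A * u * x)    # amplitude of this pulse
            x -= u * x                     # depression: resources are consumed
            u += U * (1.0 - u)             # facilitation: utilisation increases
    return responses

# A regular 20 ms presynaptic train: the response amplitudes depress over time.
print(dynamic_synapse(spike_times=np.arange(0, 200, 20), t_end=200))
```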

Keywords
Spiking Neural Networks, Supervised Learning Algorithms, Dynamic Synapses, Static Synapses, Particle Swarm Optimization, Integrate and Fire Neuron models, Spike Pattern Association Neuron, Sequence Learning, Similarity Measure, Fitness function, Spatial temporal pattern recognition, Temporal Sequence Learning