Temporal Sequence Learning in Spiking Neural Networks

aut.embargo [en_NZ]: No
aut.thirdpc.contains [en_NZ]: No
aut.thirdpc.permission [en_NZ]: No
aut.thirdpc.removed [en_NZ]: No
dc.contributor.advisor: Mohemmed, Ammar
dc.contributor.author: Mandurah, Belal
dc.date.accessioned: 2012-04-25T22:21:26Z
dc.date.available: 2012-04-25T22:21:26Z
dc.date.copyright: 2011
dc.date.created: 2011
dc.date.issued: 2011
dc.date.updated: 2012-04-24T09:40:08Z
dc.description.abstract [en_NZ]:
Spiking Neural Networks (SNNs) are the third generation of artificial neural networks (ANNs). Like the brain’s neurons, they use spikes (pulses) to propagate information. Spike sequence learning has many applications, for example in speech recognition and motor control. One of the main issues in sequence generation is learning. There are two main types of learning: unsupervised and supervised. Supervised learning, such as Back Propagation (BP), uses a teacher signal to tune the connection weights so that the network produces a desired output for a specific input. In this work, two supervised sequence learning schemes are investigated. The first scheme uses particle swarm optimization (PSO); here the SNN consists of multiple layers of neurons connected by dynamic synapses to increase its computational power. Because PSO has limited scalability, the second algorithm, SPAN, which can handle spatio-temporal data, is also investigated. SPAN uses a simpler architecture: the network consists of a single neuron with multiple synapses. Although SPAN originally uses static synapses, these are replaced with dynamic synapses to examine the impact, if any, on the network and on learning performance. A performance evaluation of the two methods is undertaken using different configurations of the dynamic synapse parameters and different input spike trains, together with an evaluation of SPAN when its synapses are replaced with dynamic ones. The main research question is how dynamic synapses affect the learning and performance of the two learning schemes above. The PSO-based sequence learning algorithm, which requires optimizing several parameters, proved more complicated than SPAN, which optimizes one parameter per synapse. Learning multiple inputs with SPAN using dynamic synapses proved faster than with static synapses; nonetheless, memorizing the sequences with dynamic synapses showed no improvement. SPAN proved able to learn and classify multiple spike sequences with a simpler model and with the fine-tuning of only one parameter per synapse. This is an interesting area of research, and further work can be done to improve each system.
dc.identifier.uri: https://hdl.handle.net/10292/4045
dc.language.iso [en_NZ]: en
dc.publisher: Auckland University of Technology
dc.rights.accessrights: OpenAccess
dc.subject [en_NZ]: Spiking Neural Networks
dc.subject [en_NZ]: Supervised Learning Algorithms
dc.subject [en_NZ]: Dynamic Synapses
dc.subject [en_NZ]: Static Synapses
dc.subject [en_NZ]: Particle Swarm Optimization
dc.subject [en_NZ]: Integrate and Fire Neuron models
dc.subject [en_NZ]: Spike Pattern Association Neuron
dc.subject [en_NZ]: Sequence Learning
dc.subject [en_NZ]: Similarity Measure
dc.subject [en_NZ]: Fitness function
dc.subject [en_NZ]: Spatial temporal pattern recognition
dc.subject [en_NZ]: Temporal Sequence Learning
dc.title [en_NZ]: Temporal Sequence Learning in Spiking Neural Networks
dc.type: Dissertation
thesis.degree.discipline:
thesis.degree.grantor: Auckland University of Technology
thesis.degree.level: Masters Theses
thesis.degree.name [en_NZ]: Master of Computer and Information Sciences
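
The SPAN learning rule summarised in the abstract lends itself to a brief illustration. The Python sketch below shows the general idea under assumed settings: spike trains are convolved with an alpha-function kernel, and the synaptic weights are adjusted by a Widrow-Hoff (delta) rule on the resulting traces. The kernel choice, time step, and all parameter values are illustrative assumptions only, not the configuration used in the dissertation, and the integrate-and-fire neuron and the dynamic synapse model investigated there are omitted.

import numpy as np

# Illustrative SPAN-style update: a Widrow-Hoff (delta) rule applied to
# spike trains convolved with an alpha kernel. All values here are
# assumptions for illustration, not the dissertation's settings.

DT = 0.1        # simulation time step (ms)
T_MAX = 200.0   # trial duration (ms)
TAU = 5.0       # alpha-kernel time constant (ms)
ETA = 0.01      # learning rate

def alpha_kernel(length_ms=50.0):
    # Alpha function, peaking at t = TAU with amplitude 1.
    t = np.arange(0.0, length_ms, DT)
    return (t / TAU) * np.exp(1.0 - t / TAU)

def spike_trace(spike_times):
    # Discretise a list of spike times and convolve it with the kernel
    # to obtain a continuous trace.
    train = np.zeros(int(T_MAX / DT))
    idx = (np.asarray(spike_times, dtype=float) / DT).astype(int)
    train[idx[idx < train.size]] = 1.0
    return np.convolve(train, alpha_kernel(), mode="full")[: train.size]

def span_update(weights, input_trains, desired_spikes, actual_spikes):
    # dw_i is proportional to the integral over time of the convolved
    # input x_i(t) times the error (y_desired(t) - y_actual(t)).
    error = spike_trace(desired_spikes) - spike_trace(actual_spikes)
    for i, spikes in enumerate(input_trains):
        weights[i] += ETA * np.sum(spike_trace(spikes) * error) * DT
    return weights

# Toy usage: three input synapses, one desired output spike at 100 ms.
rng = np.random.default_rng(0)
inputs = [np.sort(rng.uniform(0.0, T_MAX, size=10)) for _ in range(3)]
weights = rng.normal(0.0, 0.1, size=3)
weights = span_update(weights, inputs, desired_spikes=[100.0],
                      actual_spikes=[80.0, 150.0])
print(weights)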

Files

Original bundle

Name: MandurahB.pdf
Size: 1.53 MB
Format: Adobe Portable Document Format
Description: Dissertation

License bundle

Name: license.txt
Size: 897 B
Format: Item-specific license agreed upon to submission
Description: