A Component Based Knowledge Transfer Model for Deep Neural Networks

aut.author.twitter: @seenunz
aut.embargo: No
aut.thirdpc.contains: No
dc.contributor.advisor: Narayanan, Ajit
dc.contributor.advisor: Whalley, Jacqueline
dc.contributor.author: Sremath Tirumala, Sreenivas
dc.date.accessioned: 2020-10-15T20:52:11Z
dc.date.available: 2020-10-15T20:52:11Z
dc.date.copyright: 2020
dc.date.issued: 2020
dc.date.updated: 2020-10-15T01:05:35Z
dc.description.abstract: This thesis explores the idea that features extracted from deep neural networks (DNNs) through layered weight analysis are knowledge components and are transferable. Among the components extracted from the various layers, middle-layer components are shown to constitute the knowledge mainly responsible for the accuracy of deep architectures, including deep autoencoders (DAEs), deep belief networks (DBNs) and DNNs. The proposed component-based transfer of knowledge is shown to be efficient when applied to a variety of benchmark datasets, including handwritten character recognition, image recognition, speech analysis, gene expression and hierarchical feature datasets. The importance of a hidden layer and its position in the topology of artificial neural networks (ANNs) is under-researched in comparison to the deployment of new architectures, components and learning algorithms. This thesis addresses that imbalance by providing insight into what a neural network actually learns: recent advances in layer-wise training make it possible to explore systematically and rigorously the features exposed hidden layer by hidden layer in deep architectures. The key contribution of this research is a transferable component model built by extracting knowledge components from hidden layers. The thesis also provides an approach for determining the contribution of individual layers, giving insight into the topological constraints that must be addressed when designing a transfer learning model. Such transfer learning can mitigate the need to train each neural network 'from scratch', which matters because deep learning is currently slow and requires large amounts of processing power. 'Warm-started' deep learning may open new avenues of research, especially in areas where 'portable' deep architectures can be deployed for decision making.
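As a rough illustration of the component-based transfer the abstract describes, the sketch below copies a trained network's middle hidden layer into a fresh network to "warm start" it on a new task. This is a minimal sketch assuming PyTorch; the layer sizes, model definitions and the choice to freeze the transferred layer are illustrative assumptions, not the thesis's actual implementation.

```python
import torch
import torch.nn as nn

# Source network trained on the original task (training loop omitted).
source = nn.Sequential(
    nn.Linear(784, 256), nn.ReLU(),   # index 0: input -> hidden 1
    nn.Linear(256, 128), nn.ReLU(),   # index 2: hidden 1 -> hidden 2 (middle layer)
    nn.Linear(128, 10),               # index 4: hidden 2 -> output
)

# Target network for the new task, sharing the topology up to the middle layer.
target = nn.Sequential(
    nn.Linear(784, 256), nn.ReLU(),
    nn.Linear(256, 128), nn.ReLU(),
    nn.Linear(128, 5),                # different output size for the new task
)

# "Component" transfer: copy the middle hidden layer's weights, the layer the
# abstract identifies as carrying most of the transferable knowledge.
with torch.no_grad():
    target[2].weight.copy_(source[2].weight)
    target[2].bias.copy_(source[2].bias)

# Optionally freeze the transferred component so only the remaining layers
# are trained on the new task (a warm start rather than training from scratch).
for p in target[2].parameters():
    p.requires_grad = False
```

Training `target` then proceeds as usual, with the optimizer updating only the unfrozen parameters.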
dc.identifier.uri: https://hdl.handle.net/10292/13718
dc.language.iso: en
dc.publisher: Auckland University of Technology
dc.rights.accessrights: OpenAccess
dc.subject: Deep Learning
dc.subject: Feature Extraction
dc.subject: Knowledge Discovery
dc.subject: Transferable knowledge
dc.subject: Artificial Neural Networks
dc.subject: Deep Neural Networks
dc.title: A Component Based Knowledge Transfer Model for Deep Neural Networks
dc.type: Thesis
thesis.degree.grantor: Auckland University of Technology
thesis.degree.level: Doctoral Theses
thesis.degree.name: Doctor of Philosophy
Files

Original bundle
Name: SremathTirumalaS.pdf
Size: 5.35 MB
Format: Adobe Portable Document Format
Description: Thesis

License bundle
Name: license.txt
Size: 889 B
Description: Item-specific license agreed upon submission