
A Component Based Knowledge Transfer Model for Deep Neural Networks

Sremath Tirumala, Sreenivas
View/Open
Thesis (5.352Mb)
Permanent link
http://hdl.handle.net/10292/13718
Abstract
This thesis explores the idea that features extracted from deep neural networks (DNNs) through layered weight analysis are knowledge components and are transferable. Among the components extracted from the various layers, middle-layer components are shown to constitute the knowledge that is mainly responsible for the accuracy of deep architectures, including deep autoencoders (DAEs), deep belief networks (DBNs) and DNNs. The proposed component-based transfer of knowledge is shown to be efficient when applied to a variety of benchmark datasets, including handwritten character recognition, image recognition, speech analysis, gene expression and hierarchical feature datasets.
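
The abstract contains no code, but the core idea can be sketched. In the following minimal PyTorch example, every layer size and name is an assumption chosen for illustration, not anything specified in the thesis; it treats the learned weights of a middle hidden layer as an extractable knowledge component:

    import torch
    import torch.nn as nn

    # A small fully connected network standing in for a trained deep
    # architecture; all sizes here are illustrative only.
    model = nn.Sequential(
        nn.Linear(784, 256), nn.ReLU(),   # early layer
        nn.Linear(256, 128), nn.ReLU(),   # middle layer
        nn.Linear(128, 10),               # output layer
    )

    # Layered weight analysis in its simplest form: extract the middle
    # layer's learned parameters as a reusable "knowledge component".
    middle_component = {
        "weight": model[2].weight.detach().clone(),
        "bias": model[2].bias.detach().clone(),
    }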

The importance of a hidden layer and its position in the topology of Artificial Neural Networks (ANNs) is under-researched in comparison to the deployment of new architectures, components and learning algorithms. This thesis addresses this imbalance by providing an insight into what is actually learned by a neural network. Recent advances in layer-wise training make this possible, enabling us to explore systematically and rigorously the features that are exposed, hidden layer by hidden layer, in deep architectures.
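
As a rough illustration of examining what each layer exposes, one hypothetical approach (again with a placeholder network and dummy input) is to capture per-layer activations with forward hooks:

    import torch
    import torch.nn as nn

    # Placeholder network; hooks record what each linear layer outputs.
    model = nn.Sequential(
        nn.Linear(784, 256), nn.ReLU(),
        nn.Linear(256, 128), nn.ReLU(),
        nn.Linear(128, 10),
    )

    activations = {}

    def make_hook(name):
        def hook(module, inputs, output):
            activations[name] = output.detach()
        return hook

    for i, layer in enumerate(model):
        if isinstance(layer, nn.Linear):
            layer.register_forward_hook(make_hook(f"layer_{i}"))

    model(torch.randn(1, 784))   # a dummy forward pass fills `activations`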

The key contribution of this research is a transferable component model built by extracting knowledge components from hidden layers. This thesis also provides an approach to determining the contribution of individual layers, thus offering an insight into the topological constraints that must be addressed when designing a transfer learning model. Such transfer learning can mitigate the need to train each neural network ‘from scratch’. This is important since training deep networks is currently slow and requires large amounts of processing power. ‘Warm started’ deep learning may open new avenues of research, especially in areas where ‘portable’ deep architectures can be deployed for decision making.
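
A hedged sketch of such a warm start, again assuming PyTorch and purely illustrative layer shapes: the component taken from a trained source network initialises the corresponding layer of a fresh target network, and can be frozen so that training adapts only the surrounding layers to the new task.

    import torch
    import torch.nn as nn

    # Hypothetical source network, assumed already trained on the source task.
    source = nn.Sequential(
        nn.Linear(784, 256), nn.ReLU(),
        nn.Linear(256, 128), nn.ReLU(),
        nn.Linear(128, 10),
    )

    # Fresh target network for the new task, with a compatible middle layer.
    target = nn.Sequential(
        nn.Linear(784, 256), nn.ReLU(),
        nn.Linear(256, 128), nn.ReLU(),
        nn.Linear(128, 10),
    )

    # "Warm start": copy the source's middle-layer component into the target
    # instead of starting from random initialisation.
    with torch.no_grad():
        target[2].weight.copy_(source[2].weight)
        target[2].bias.copy_(source[2].bias)

    # Optionally freeze the transferred component during fine-tuning.
    for p in target[2].parameters():
        p.requires_grad = False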
Keywords
Deep Learning; Feature Extraction; Knowledge Discovery; Transferable knowledge; Artificial Neural Networks; Deep Neural Networks
Date
2020
Item Type
Thesis
Supervisor(s)
Narayanan, Ajit; Whalley, Jacqueline
Degree Name
Doctor of Philosophy
Publisher
Auckland University of Technology

Hosted by Tuwhera, an initiative of the Auckland University of Technology Library