A Component Based Knowledge Transfer Model for Deep Neural Networks

Date
2020
Authors
Sremath Tirumala, Sreenivas
Supervisor
Narayanan, Ajit
Whalley, Jacqueline
Item type
Thesis
Degree name
Doctor of Philosophy
Publisher
Auckland University of Technology
Abstract

This thesis explores the idea that features extracted from deep neural networks (DNNs) through layered weight analysis are knowledge components and are transferable. Among the components extracted from the various layers, middle-layer components are shown to constitute the knowledge that is mainly responsible for the accuracy of deep architectures, including deep autoencoders (DAEs), deep belief networks (DBNs) and DNNs. The proposed component-based transfer of knowledge is shown to be efficient when applied to a variety of benchmark datasets, including handwritten character recognition, image recognition, speech analysis, gene expression and hierarchical feature datasets.
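In outline, such a transfer amounts to copying a trained middle layer's weights into a fresh network. The following is a minimal PyTorch sketch of that idea; the three-layer MLP, the choice of layer index as the 'middle' component, and the copy-then-freeze procedure are illustrative assumptions rather than the thesis's exact method.

import torch
import torch.nn as nn

def make_mlp(in_dim=784, hidden=256, out_dim=10):
    return nn.Sequential(
        nn.Linear(in_dim, hidden), nn.ReLU(),   # early layer
        nn.Linear(hidden, hidden), nn.ReLU(),   # middle layer: the "knowledge component"
        nn.Linear(hidden, out_dim),             # output layer
    )

source = make_mlp()   # assume this has already been trained on a source task
target = make_mlp()   # freshly initialised network for the target task

# Transplant the middle layer's weights (the extracted component) into the target.
with torch.no_grad():
    target[2].weight.copy_(source[2].weight)
    target[2].bias.copy_(source[2].bias)

# Optionally freeze the transferred component so that training the target
# ('warm starting') only adapts the remaining layers.
for p in target[2].parameters():
    p.requires_grad = False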

The importance of hidden layers, and of their position in the topology of Artificial Neural Networks (ANNs), is under-researched in comparison to the development of new architectures, components and learning algorithms. This thesis addresses that imbalance by providing insight into what is actually learned by a neural network: recent advances in layer-wise training make it possible to explore, systematically and rigorously, the features exposed hidden layer by hidden layer in deep architectures.

The key contribution of this research is a transferable component model built by extracting knowledge components from hidden layers. The thesis also provides an approach for determining the contribution of individual layers, giving insight into the topological constraints that must be addressed when designing a transfer learning model; a sketch of such a layer-contribution analysis follows below. Transfer of this kind can mitigate the need to train each neural network ‘from scratch’, which matters because deep learning can currently be slow and can require large amounts of processing power. ‘Warm-started’ deep learning may open new avenues of research, especially in areas where ‘portable’ deep architectures can be deployed for decision making.
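One way to make a per-layer contribution analysis concrete is an ablation probe: re-initialise one hidden layer at a time in a trained network and measure the resulting accuracy drop on held-out data. The sketch below assumes a trained PyTorch nn.Sequential model and a standard DataLoader; the ablation criterion is a hypothetical stand-in chosen for illustration, not the layered weight analysis used in the thesis.

import copy
import torch
import torch.nn as nn

@torch.no_grad()
def accuracy(model, loader):
    model.eval()
    correct = total = 0
    for x, y in loader:
        pred = model(x).argmax(dim=1)
        correct += (pred == y).sum().item()
        total += y.numel()
    return correct / total

def layer_contributions(trained_model, loader):
    base = accuracy(trained_model, loader)
    scores = {}
    for idx, layer in enumerate(trained_model):
        if not isinstance(layer, nn.Linear):
            continue
        ablated = copy.deepcopy(trained_model)
        nn.init.xavier_uniform_(ablated[idx].weight)  # destroy this layer's learned weights
        nn.init.zeros_(ablated[idx].bias)
        scores[idx] = base - accuracy(ablated, loader)  # larger drop = larger contribution
    return scores

On this criterion, the layers whose ablation causes the largest accuracy drop are the ones carrying the most task-relevant knowledge, which is one way to operationalise the claim that middle-layer components matter most.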

Keywords
Deep Learning, Feature Extraction, Knowledge Discovery, Transferable Knowledge, Artificial Neural Networks, Deep Neural Networks