
Human Facial Emotion Recognition From Digital Images Using Deep Learning

aut.embargo: No
aut.thirdpc.contains: No
dc.contributor.advisor: Yan, Wei Qi
dc.contributor.author: Alexander, Rivenski
dc.date.accessioned: 2022-11-29T23:16:14Z
dc.date.available: 2022-11-29T23:16:14Z
dc.date.copyright: 2022
dc.date.issued: 2022
dc.date.updated: 2022-11-29T23:00:35Z
dc.description.abstract: In this thesis, our aim is to investigate the best-performing methods and algorithms for facial emotion recognition (FER) based on seven classes of human facial expressions of emotion: neutral, scared, angry, disgusted, sad, happy, and surprised. We classify human facial emotions from digital images. Existing FER methodologies have several limitations, including low training and testing results and difficulty in classifying certain emotions. We propose a Convolutional Neural Network (CNN), Xception, a Visual Transformer (ViT), a Simple Deep Neural Network (SDNN), and a Graph Convolutional Network (GCN) for FER, using two types of methods: non-facial-landmarking and facial landmarking. The first method we propose is a modified CNN with the Haar Cascade algorithm for frontal face detection as an initial solution (a code sketch of this pipeline follows the metadata record below); in real-time experiments, its FER accuracy reaches up to 90.0%. We then applied the same methodology and parameters with a Mini Xception model, which achieved stable results with a highest accuracy of 99%. In addition, given the popularity of Transformer-based deep learning, we implemented the Visual Transformer with the non-landmarking method; in training and testing based on this model, we achieved the highest training accuracy in fewer training epochs than with the previous two models. Because of misclassifications that occurred with the previous methods, we developed a new facial-landmarking method, for which we propose the Simple Deep Neural Network (SDNN) and the Graph Convolutional Network (GCN). The SDNN model was employed as a baseline to test the proposed method and achieved 96.0% accuracy in our real-time testing experiments. In future work, we plan to train and test the GCN model with the facial-landmarking method to determine the best-performing model, and to implement the ViT model in real time using the non-landmarking method.
dc.identifier.uri: https://hdl.handle.net/10292/15679
dc.language.iso: en
dc.publisher: Auckland University of Technology
dc.rights.accessrights: OpenAccess
dc.subject: Facial expressions of emotion; Facial emotion recognition; Simple deep neural network (SDNN); Xception
dc.title: Human Facial Emotion Recognition From Digital Images Using Deep Learning
dc.type: Thesis
thesis.degree.grantor: Auckland University of Technology
thesis.degree.level: Masters Theses
thesis.degree.name: Master of Computer and Information Sciences
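
The abstract's initial pipeline couples Haar Cascade frontal face detection with a CNN classifier over the seven emotion classes. Below is a minimal Python sketch of such a pipeline, assuming OpenCV's pretrained Haar Cascade and a small Keras CNN; the layer layout, the 48x48 grayscale input size, the placeholder image path "face.jpg", and all hyperparameters are illustrative assumptions rather than the thesis's actual model.

# Minimal sketch (not the thesis implementation): Haar Cascade frontal face
# detection followed by a small 7-class CNN classifier.
import cv2
import numpy as np
from tensorflow.keras import layers, models

EMOTIONS = ["neutral", "scared", "angry", "disgusted", "sad", "happy", "surprised"]

def build_cnn(input_shape=(48, 48, 1), num_classes=7):
    """Small illustrative CNN; the thesis's modified CNN is not reproduced here."""
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

def detect_and_classify(image_bgr, model):
    """Detect frontal faces with OpenCV's pretrained Haar Cascade, then classify each crop."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    results = []
    for (x, y, w, h) in faces:
        crop = cv2.resize(gray[y:y + h, x:x + w], (48, 48)) / 255.0
        probs = model.predict(crop.reshape(1, 48, 48, 1), verbose=0)[0]
        results.append(((x, y, w, h), EMOTIONS[int(np.argmax(probs))]))
    return results

if __name__ == "__main__":
    model = build_cnn()               # untrained here; the thesis trains on labelled FER images
    frame = cv2.imread("face.jpg")    # "face.jpg" is a placeholder path
    if frame is not None:
        print(detect_and_classify(frame, model))

The sketch only builds an untrained model to show how detection and classification connect; training such a classifier requires a labelled facial-expression dataset.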

Files

Original bundle

Name: AlexanderR.pdf
Size: 3.05 MB
Format: Adobe Portable Document Format
Description: Thesis

License bundle

Name: license.txt
Size: 897 B
Format: Item-specific license agreed upon to submission