Interpretable Deep Learning Architectures for Mortality Prediction Inside the Intensive Care Unit

Date
2020
Authors
Caicedo, William
Supervisor
Gutierrez, Jairo
Parry, Dave
Item type
Thesis
Degree name
Doctor of Philosophy
Publisher
Auckland University of Technology
Abstract

To improve the performance of Intensive Care Units (ICUs), the field of biostatistics has developed severity scores that estimate the likelihood of adverse outcomes. These scores help evaluate the effectiveness of treatments and clinical practice, and help identify patients with unexpected outcomes. However, several studies have shown that they offer sub-optimal predictive performance. Deep Learning, in contrast, offers state-of-the-art capabilities in many prediction tasks, and research suggests deep neural networks can outperform traditional techniques. Nevertheless, a major impediment to the adoption of Deep Learning in healthcare is its limited interpretability: in this field it is crucial to understand why a model makes its predictions, to ensure that it learns clinically relevant features rather than spurious correlations. To address this, we propose two deep convolutional architectures trained to predict mortality using physiological and free-text data from the Medical Information Mart for Intensive Care III (MIMIC-III), and we use concepts from coalitional game theory to construct visual explanations that show how important the networks deem each input. Our results show that our models attain state-of-the-art performance while remaining interpretable. Supporting code can be found at https://github.com/williamcaicedo/ISeeU and https://github.com/williamcaicedo/ISeeU2.
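In the interpretability literature, "concepts from coalitional game theory" usually refers to Shapley values, which assign each input a fair share of the model's prediction. The following is a minimal, hypothetical sketch (not the thesis's actual ISeeU code) of how such attributions could be computed for a convolutional mortality model using the open-source shap library; the architecture, data shapes, and variable names are illustrative assumptions.

    import numpy as np
    import shap
    from tensorflow import keras

    # Toy stand-in for MIMIC-III physiological time series:
    # (patients, timesteps, variables), e.g. 48 hourly readings of 10 vitals/labs.
    X_train = np.random.rand(200, 48, 10).astype("float32")
    y_train = np.random.randint(0, 2, size=(200, 1))

    # Small 1D convolutional classifier for in-ICU mortality.
    model = keras.Sequential([
        keras.layers.Conv1D(32, kernel_size=3, activation="relu",
                            input_shape=(48, 10)),
        keras.layers.GlobalMaxPooling1D(),
        keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy")
    model.fit(X_train, y_train, epochs=1, verbose=0)

    # DeepExplainer estimates Shapley values for each input; the background
    # sample defines the baseline that models "feature absence".
    background = X_train[:50]
    explainer = shap.DeepExplainer(model, background)
    shap_values = explainer.shap_values(X_train[:5])

    # Depending on the SHAP version the result is an array or a one-element
    # list; either way each patient gets one score per timestep and variable,
    # which can then be rendered as a heatmap over the ICU stay.
    print(np.asarray(shap_values).squeeze().shape)  # -> (5, 48, 10)

In practice the background set would be a representative sample of real ICU stays rather than random data, since it stands in for the "absent feature" baseline in the Shapley value computation.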

Keywords
Deep Learning, Intensive Care Unit, Mortality Prediction, Interpretability, Convolutional Neural Networks