
Potential Unfairness Associated with the Development of Predictive Risk Models in the New Zealand Child Welfare Context

Supervisor

Miranda Soberanis, Victor
Cao, Jiling

Item type

Thesis

Degree name

Doctor of Philosophy

Publisher

Auckland University of Technology

Abstract

Over the past decade, predictive risk modelling using machine learning techniques has gained attention in the public sector, especially in child welfare systems, where it has shown potential to support decision-making, particularly in identifying children at risk of maltreatment and recommending interventions. For example, Allegheny County in the United States uses the Allegheny Family Screening Tool to assist child welfare call screening. This system rapidly integrates and analyses hundreds of data elements about the individuals involved in a child maltreatment allegation and produces a Family Screening Score that supports decision-making by predicting the long-term likelihood of future child welfare involvement.

A significant concern raised by several authors, however, is that poorly designed models may produce biased outcomes that disproportionately affect specific demographic groups. In the New Zealand care and protection system, for example, the over-representation of Māori children could be unintentionally exacerbated by such models, reinforcing cycles of bias and contributing to unfair decision-making. While predictive tools in areas such as criminal recidivism and academic admissions have been widely scrutinized, the fairness of predictive models in child welfare has received far less attention. Research suggests this is partly due to the limited availability of such tools, so fewer have been critically examined through the lens of algorithmic fairness.

This thesis aims to address both of these gaps. Attending to concerns of fairness and predictive bias, particularly regarding ethnicity, it investigates predictive accuracy, fairness, and disparities in risk models within the New Zealand child welfare context. Data from the Statistics NZ Integrated Data Infrastructure are utilized, and a range of machine learning algorithms, including logistic regression, LASSO, and XGBoost, are employed for predictive modelling. Fairness metrics such as calibration, accuracy equity, statistical parity, and equalized odds are also explored. Following this initial evaluation, an in-processing fairness-aware machine learning approach is implemented to address the observed biases, focusing specifically on reducing disparities in error rates between Māori children and children from other ethnic groups. The results highlight the inherent challenges of balancing predictive accuracy with fairness, challenges shaped by data linkage strategies, modelling approaches, and variations in model performance across demographic groups. This research thereby provides critical insights into the development of fair, effective, and ethically responsible predictive models, contributing to the broader debate on how machine learning can support equitable decision-making in child welfare and beyond.
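
The three model families named above are standard and can be illustrated with scikit-learn and xgboost. The sketch below is a minimal, hypothetical example on synthetic data; the thesis itself uses restricted IDI data that cannot be reproduced here, so the data, features, and parameter settings are illustrative assumptions only.

```python
# Illustrative sketch: fits the three model families named in the abstract
# (logistic regression, LASSO-penalised logistic regression, XGBoost) on
# synthetic, class-imbalanced data. Everything here is hypothetical.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from xgboost import XGBClassifier

X, y = make_classification(n_samples=5000, n_features=20,
                           weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

models = {
    # Plain logistic regression.
    "logistic": LogisticRegression(max_iter=1000),
    # LASSO: L1-penalised logistic regression (C controls penalty strength).
    "lasso": LogisticRegression(penalty="l1", solver="liblinear", C=0.1),
    # Gradient-boosted trees.
    "xgboost": XGBClassifier(n_estimators=200, max_depth=3,
                             learning_rate=0.1, eval_metric="logloss"),
}

for name, model in models.items():
    model.fit(X_tr, y_tr)
    scores = model.predict_proba(X_te)[:, 1]
    print(f"{name}: AUC = {roc_auc_score(y_te, scores):.3f}")
```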
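The fairness metrics listed in the abstract have standard group-level definitions. The following minimal sketch, assuming binary outcomes y, thresholded decisions yhat, risk scores p, and a binary group indicator g, shows one common way to compute statistical parity, equalized-odds gaps, and group-wise calibration; the thesis's exact operationalisation may differ.

```python
# Minimal sketch of standard group-fairness metrics. Assumes numpy arrays:
# y (actual outcomes), yhat (binary decisions, e.g. score above a screening
# threshold), p (risk scores in [0, 1]), g (binary group indicator,
# 1 = group of interest).
import numpy as np

def statistical_parity_difference(yhat, g):
    """Difference in positive-decision rates between the two groups."""
    return yhat[g == 1].mean() - yhat[g == 0].mean()

def equalized_odds_gaps(y, yhat, g):
    """TPR and FPR differences between groups; both are zero under
    equalized odds."""
    def rates(mask):
        tpr = yhat[mask & (y == 1)].mean()
        fpr = yhat[mask & (y == 0)].mean()
        return tpr, fpr
    tpr1, fpr1 = rates(g == 1)
    tpr0, fpr0 = rates(g == 0)
    return tpr1 - tpr0, fpr1 - fpr0

def calibration_by_group(y, p, g, bins=10):
    """Mean observed outcome per score decile, per group: under
    calibration, equal scores imply equal observed risk in each group."""
    edges = np.quantile(p, np.linspace(0, 1, bins + 1))
    idx = np.clip(np.digitize(p, edges[1:-1]), 0, bins - 1)
    return {grp: [y[(g == grp) & (idx == b)].mean() for b in range(bins)]
            for grp in (0, 1)}
```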
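The abstract does not specify which in-processing technique is used, so the sketch below shows one generic instance of the idea rather than the thesis's method: logistic regression whose training loss is augmented with a penalty on group differences in soft error rates (mean predicted risk among actual positives and among actual negatives). The function name, penalty form, and weight lam are assumptions for illustration.

```python
# One illustrative in-processing approach (not the thesis's specific method):
# logistic regression trained by minimising the usual negative log-likelihood
# plus a penalty on squared group gaps in soft TPR and soft FPR.
import numpy as np
from scipy.optimize import minimize

def fit_fair_logistic(X, y, g, lam=1.0):
    """Return weights minimising logistic loss + lam * fairness penalty."""
    X1 = np.hstack([X, np.ones((len(X), 1))])  # add intercept column

    def objective(w):
        p = 1.0 / (1.0 + np.exp(-X1 @ w))
        eps = 1e-9
        nll = -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))
        # Soft TPR/FPR: mean predicted risk among actual positives/negatives.
        tpr_gap = (p[(g == 1) & (y == 1)].mean()
                   - p[(g == 0) & (y == 1)].mean())
        fpr_gap = (p[(g == 1) & (y == 0)].mean()
                   - p[(g == 0) & (y == 0)].mean())
        return nll + lam * (tpr_gap ** 2 + fpr_gap ** 2)

    w0 = np.zeros(X1.shape[1])
    return minimize(objective, w0, method="BFGS").x
```

Increasing lam trades predictive accuracy for smaller error-rate gaps between groups, which mirrors the accuracy-fairness tension the abstract describes.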
