
Artificial Intelligence in Automation of Community Disaster Resilience Measurement

Supervisor

Poshdar, Mani
Babaeian Jelodar, Mostafa
Tookey, John

Item type

Thesis

Degree name

Doctor of Philosophy

Publisher

Auckland University of Technology

Abstract

Over time, numerous frameworks have emerged to gauge a community's resilience, that is, its ability to return to normal function in the face of a disaster. These frameworks vary in complexity and scope, often encompassing both quantitative and qualitative metrics. The dichotomy between quantitative and qualitative measurement underscores a critical limitation in current resilience frameworks. While quantitative metrics excel at providing tangible data points and statistical analyses, they often overlook the intricate social dynamics and cultural factors that profoundly influence a community's resilience. In contrast, qualitative assessments offer a more holistic understanding by capturing the lived experiences, perceptions, and narratives of community members. This qualitative data, comprising approximately 80 percent of the information relevant to resilience, provides invaluable insight into the adaptive strategies, cultural norms, and social networks that shape a community's capacity to withstand and recover from disasters.

Capturing richer data through qualitative methods, particularly open-ended interviews, is a cornerstone of this study. By delving into the nuanced perspectives and experiences of individuals, such methods offer a depth of understanding that quantitative approaches often struggle to achieve. However, qualitative methods are not without limitations. One significant challenge inherent to open-ended interviews is the potential for inconsistency in capturing a community's resilience. Unlike structured surveys or questionnaires, which provide standardized prompts and response options, open-ended interviews allow participants to express themselves freely. While this flexibility can unearth unexpected insights and perspectives, it can also lead to variability in results, making it difficult to establish clear patterns or draw definitive conclusions.
This inconsistency can stem from variations in interviewers' probing techniques, which can steer an interview in a different direction. Another limitation of open-ended interviews is the risk of bias: human subjectivity inevitably influences every stage of data collection in an interview. Bias may manifest in many forms; this study targets the gender bias of the interviewer. Interviewers' preconceived notions or personal beliefs can inadvertently shape the direction of the interview, influencing both the topics explored and the interpretation of responses. A further consideration is the integration of automation and repeatability into the process of conducting disaster resilience interviews, particularly to mitigate variables such as inconsistency and gender bias. Automation streamlines data collection and analysis, reducing the potential for human error and improving the efficiency of the research. This standardization is crucial for mitigating inconsistencies in responses, as it promotes uniformity across interviews and facilitates the comparison of findings. Automation also strengthens validity by making the interview process accurate and repeatable. To tackle the challenge of obtaining holistic data from open-ended interviews, this research adopts a systematic approach comprising five distinct steps: identifying a practical measurement approach to quantify inconsistency and gender bias, addressing each of these issues in turn, and, in the final step, developing an automation approach that not only helps interviewers maintain consistency and impartiality but also makes the process repeatable. Building on these foundational insights, the research devises innovative solutions aimed at mitigating the impact of these variables on data collection.
Methods of measurement were developed using simulated interviews to generate numerical representations. In addition, by leveraging recent advances in Artificial Intelligence (AI), novel solutions to the key variables were identified through a structured three-phase design encompassing content analysis, exploratory analysis, and comparative analysis. These AI-driven methods also pave the way for automating open-ended interviews. Inconsistency was quantified with the Interview's Inconsistency Mark (IIM), which ranges from 0 to 13, with higher scores denoting greater inconsistency. To mitigate this issue, a solution was devised using natural language processing, specifically sentence embedding: consistent information is retrieved from a knowledge base of peer-reviewed papers. This reduced inconsistency substantially relative to the benchmark interviews, with observed IIM values of 5.13 for the benchmark and 1.35 for the proposed method. Gender bias was assessed with the Practical Measurement of Gender Bias (PMGB) index, expressed as a percentage, with higher values indicating greater bias. An approach employing natural language processing, particularly word embedding, was developed to identify gender-sensitive language, and a large language model, Claude 3 Sonnet, was used to replace gender-sensitive terms with gender-neutral equivalents. The solution eliminated gender-sensitive language, achieving a PMGB of zero, compared with values of 12% and 10% in the simulated interviews. Automation was achieved by developing a Decision Support System that integrates the inconsistency and gender bias resolution components, along with two complementary components: a speech recognition system modeled after SpeechStew for input reception and a follow-up question generator based on Claude 3 Sonnet.
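The sentence-embedding retrieval step described above can be sketched in miniature. In the sketch below, the bag-of-words `embed` function, the cosine ranking, and the sample knowledge-base entries are illustrative assumptions standing in for the thesis's trained sentence encoder and its corpus of peer-reviewed papers; only the embed-then-rank-by-similarity mechanics carry over.

```python
import math

# Toy stand-in for a sentence-embedding model: a bag-of-words count vector.
# A real system would use a trained sentence encoder producing dense vectors.
def embed(sentence: str) -> dict:
    vec = {}
    for token in sentence.lower().split():
        vec[token] = vec.get(token, 0) + 1
    return vec

def cosine(a: dict, b: dict) -> float:
    """Cosine similarity between two sparse vectors."""
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical knowledge-base entries distilled from peer-reviewed papers.
knowledge_base = [
    "community social networks support recovery after disasters",
    "infrastructure redundancy reduces disaster downtime",
]

def retrieve(query: str) -> str:
    """Return the knowledge-base entry most similar to the query."""
    q = embed(query)
    return max(knowledge_base, key=lambda s: cosine(q, embed(s)))

print(retrieve("how do social networks help communities recover"))
# → community social networks support recovery after disasters
```

Anchoring each interview prompt to the most similar knowledge-base entry is what keeps successive interviews consistent: the same question always retrieves the same grounding material, regardless of the interviewer.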
This integrated system enables the automation of open-ended interviews, promising high-quality outcomes against predefined metrics. Overall, this thesis advances knowledge in disaster resilience measurement, for both existing and future frameworks, by offering novel insights and perspectives on data collection. The study encountered several limitations: a lack of transparency in existing frameworks that use open-ended interviews for data collection, particularly within New Zealand; challenges in automating aspects of the data collection process, which underscore the indispensable role of human engagement and qualitative insight in certain research contexts; and financial barriers associated with testing and using certain AI models. Nevertheless, with ongoing advances in AI, this study provides a robust foundation for future enhancements. Integrating the data collection solution into existing and upcoming frameworks, together with longitudinal observation, will enable future studies to gain better insights. By rigorously applying the solution in additional real-world scenarios, its performance can be evaluated more comprehensively, allowing future researchers to fine-tune it to specific needs and improve its efficacy.
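The gender-bias measurement and neutralization steps described in the abstract can likewise be sketched with a hand-built lexicon. The `NEUTRAL` substitution map and the simple token-share score below are illustrative stand-ins for the thesis's word-embedding detector, the Claude 3 Sonnet rewriter, and the actual PMGB definition; they show only the shape of the flag-then-rewrite pipeline.

```python
import re

# Hypothetical lexicon of gender-sensitive terms and neutral equivalents.
# A real detector would flag terms via word embeddings rather than a fixed map.
NEUTRAL = {
    "chairman": "chairperson",
    "policeman": "police officer",
    "he": "they",
    "she": "they",
}

def pmgb(text: str) -> float:
    """Share of tokens flagged as gender-sensitive, as a percentage."""
    tokens = re.findall(r"[a-z']+", text.lower())
    flagged = sum(1 for t in tokens if t in NEUTRAL)
    return 100.0 * flagged / len(tokens) if tokens else 0.0

def neutralize(text: str) -> str:
    """Replace flagged terms with their gender-neutral equivalents."""
    def sub(m):
        return NEUTRAL.get(m.group(0).lower(), m.group(0))
    return re.sub(r"[A-Za-z']+", sub, text)

question = "Did the chairman say he contacted the policeman?"
print(pmgb(question))               # nonzero before rewriting
print(neutralize(question))         # Did the chairperson say they contacted the police officer?
print(pmgb(neutralize(question)))   # → 0.0
```

Driving the score to zero after rewriting mirrors the PMGB-of-zero result reported for the proposed solution, though in practice an LLM rewriter must also preserve grammar and meaning, which a substitution map cannot guarantee.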
