
Advancing Responsible Recommendation Systems: Enhancing Accuracy, Diversity, and Fairness

Date

Supervisor

Li, Weihua
Bai, Quan
Yu, Jian

Item type

Thesis

Degree name

Doctor of Philosophy

Publisher

Auckland University of Technology

Abstract

With the rapid growth of personalized content delivery, recommendation systems (RSs) have become integral to shaping user experiences across a wide range of platforms. Despite their widespread use, however, existing RSs face persistent challenges related to accuracy, diversity, and fairness. These issues raise ethical concerns and compromise overall system performance, eroding user trust and satisfaction. Traditional RSs, built primarily on collaborative filtering, content-based approaches, and more recent artificial intelligence-driven models, have personalized content delivery by tailoring recommendations to individual user preferences. Yet they continue to struggle with the cold start problem, data sparsity, over-specialization, and a lack of fairness. These limitations degrade recommendation quality, weaken user trust and satisfaction, and can ultimately reduce user engagement.

To address these challenges, this thesis introduces three novel responsible models that improve both the performance and the ethical standards of RSs. The Dual Observation-Based Recommendation (DOR) model is introduced first, integrating local and global observation mechanisms to enhance recommendation accuracy and diversity. This approach is particularly effective against data sparsity, cold starts, and filter bubbles: by drawing on a broader range of contextual data, the DOR model gains deeper insight into user preferences and produces more precise, personalized recommendations, while the inclusion of external information helps alleviate filter bubble effects.

Compared with the DOR model, the Responsible Graph-Based Recommendation (RGRec) model focuses on the filter bubble problem. RGRec employs strategies such as belief nudging and generative AI to expose users to a broader range of content, promoting engagement with diverse perspectives. This design reduces the risk of reinforcing existing biases and fosters belief harmony among users; by broadening content exposure, RGRec effectively mitigates the adverse effects of filter bubbles.

Third, inspired by Yin-Yang theory, the Agent-Based Adaptive Information Neutrality (AAIN) model introduces a multi-agent framework that dynamically adjusts information exposure to mitigate recommendation biases and maintain a neutral, diverse recommendation environment. The AAIN model adapts its recommendations by balancing information with different sentiments, enhancing content diversity without compromising accuracy.

Extensive experimental evaluations demonstrate that these models significantly improve RS performance in terms of accuracy, diversity, and fairness. They effectively address the cold start problem, data sparsity, filter bubbles, and lack of fairness, all of which are critical to fostering user trust. This thesis advances the technical capabilities of recommendation systems while highlighting the importance of incorporating ethical considerations into their design. By contributing to the broader discourse on responsible artificial intelligence, it underscores the need for recommendation systems that prioritize ethical outcomes alongside technical performance.
The development of the DOR, RGRec, and AAIN models lays a strong foundation for future recommendation systems, ensuring they evolve to enhance user trust and satisfaction while remaining technologically advanced and ethically responsible.
