An Investigation of False Information and Its Detection
This thesis investigates the facets of false information and its detection mechanisms. Artificial Intelligence (AI) is currently employed in diverse tasks, such as social media recommendation, object detection, and speech recognition. However, it also presents challenges related to the spread of false and misleading information. Such challenges encompass the manipulation of opinions, the emergence of rumors, scaremongering, and the erosion of trust in governments. Methods to combat false information include manual fact-checking sites, automated fact-checking, and raising user awareness. Large Language Models (LLMs) are neural networks with millions to billions of parameters that are pre-trained on large text corpora and fine-tuned for specific tasks such as question answering, language translation, and sentiment analysis. LLMs are useful because they can be fine-tuned for task-specific classification and generation without requiring large task-specific datasets. We make two kinds of contributions. On the technical level, we have improved the accuracy of misinformation detection on a Twitter dataset using LLMs and probed the explainability of such models through Explainable AI frameworks. On the knowledge level, we further investigated the societal aspects of false information. We examined the role of social media algorithms in amplifying false information and explored the potential of laws and regulations to address and mitigate the associated challenges.