BotDMM: Dual-Channel Multi-Modal Learning for LLM-Driven Bot Detection on Social Media

Item type

Journal Article

Publisher

Elsevier BV

Abstract

Social bots are a growing concern due to their ability to spread misinformation and manipulate public discourse. The emergence of powerful large language models (LLMs), such as ChatGPT, has introduced a new generation of bots capable of producing fluent, human-like text while dynamically adapting their relational patterns over time. These LLM-driven bots blend seamlessly into online communities, making them significantly harder to detect. Most existing approaches rely on static features or simple behavioral patterns, which are ineffective against bots that can evolve both their language and their network connections. To address these challenges, we propose a novel Dual-channel Multi-Modal learning (BotDMM) framework for LLM-driven bot detection. The proposed model captures discriminative information from two complementary sources: users’ content features (including their profiles and temporal posting behavior) and structural features (reflecting local network topology). Furthermore, we employ a joint training approach that combines two carefully designed self-supervised learning paradigms with the primary prediction task to enhance discrimination between human users, traditional bots, and LLM-driven bots. Extensive experiments demonstrate the effectiveness and superiority of BotDMM compared to state-of-the-art baselines.
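The dual-channel fusion and joint training objective described above can be illustrated with a minimal sketch. This is not the paper's implementation: the feature dimensions, the concatenation-based fusion, the `tanh` encoders, the weight `alpha`, and the placeholder self-supervised loss values are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper): content-channel and
# structure-channel input sizes, hidden size, and 3 output classes
# (human / traditional bot / LLM-driven bot).
D_CONTENT, D_STRUCT, D_HID, N_CLASSES = 16, 8, 12, 3

# Randomly initialised channel encoders and a fusion classifier.
W_content = rng.normal(scale=0.1, size=(D_CONTENT, D_HID))
W_struct = rng.normal(scale=0.1, size=(D_STRUCT, D_HID))
W_out = rng.normal(scale=0.1, size=(2 * D_HID, N_CLASSES))

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def forward(x_content, x_struct):
    """Encode each modality in its own channel, then fuse by concatenation."""
    h_c = np.tanh(x_content @ W_content)   # content channel (profile + posts)
    h_s = np.tanh(x_struct @ W_struct)     # structure channel (local topology)
    return softmax(np.concatenate([h_c, h_s], axis=-1) @ W_out)

def joint_loss(probs, labels, ssl_losses, alpha=0.1):
    """Primary 3-class cross-entropy plus weighted self-supervised terms."""
    ce = -np.log(probs[np.arange(len(labels)), labels] + 1e-12).mean()
    return ce + alpha * sum(ssl_losses)

# Toy batch of 4 users; ssl_losses stand in for the two auxiliary tasks.
probs = forward(rng.normal(size=(4, D_CONTENT)),
                rng.normal(size=(4, D_STRUCT)))
loss = joint_loss(probs, np.array([0, 1, 2, 0]), ssl_losses=[0.5, 0.3])
```

In a full model the encoders would be learned (e.g. a text encoder and a graph neural network) and the two self-supervised losses would be computed from their own pretext tasks rather than passed in as constants.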

Source

Information Fusion, Elsevier BV, ISSN: 1566-2535 (Print), Article 103758. doi: 10.1016/j.inffus.2025.103758
