Leveraging Large Language Models to Detect Influence Campaigns in Social
Media
- URL: http://arxiv.org/abs/2311.07816v1
- Date: Tue, 14 Nov 2023 00:25:09 GMT
- Authors: Luca Luceri, Eric Boniardi, Emilio Ferrara
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Social media influence campaigns pose significant challenges to public
discourse and democracy. Traditional detection methods fall short due to the
complexity and dynamic nature of social media. Addressing this, we propose a
novel detection method using Large Language Models (LLMs) that incorporates
both user metadata and network structures. By converting these elements into a
text format, our approach effectively processes multilingual content and adapts
to the shifting tactics of malicious campaign actors. We validate our model
through rigorous testing on multiple datasets, showcasing its superior
performance in identifying influence efforts. This research not only offers a
powerful tool for detecting campaigns, but also sets the stage for future
enhancements to keep up with the fast-paced evolution of social media-based
influence tactics.
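The core idea in the abstract, serializing user metadata and local network structure into plain text so that a text-only LLM can classify accounts, can be sketched as below. All field names, the example account, and the prompt wording are illustrative assumptions for this sketch, not the authors' implementation.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class UserProfile:
    """Minimal user record; fields are hypothetical, chosen to mirror the
    kinds of metadata and network edges the abstract mentions."""
    handle: str
    created: str            # account creation date
    followers: int
    following: int
    languages: List[str]    # languages the account posts in
    retweeted_by: List[str] = field(default_factory=list)  # incoming network edges

def profile_to_text(user: UserProfile) -> str:
    """Flatten metadata and local network structure into one string that a
    text-only LLM classifier can consume, regardless of content language."""
    parts = [
        f"Account @{user.handle}, created {user.created}.",
        f"Followers: {user.followers}; following: {user.following}.",
        f"Posts in: {', '.join(user.languages)}.",
    ]
    if user.retweeted_by:
        parts.append("Retweeted by: " + ", ".join("@" + h for h in user.retweeted_by) + ".")
    return " ".join(parts)

# Hypothetical account used only to show the serialized output.
account = UserProfile("newsdaily42", "2023-01-05", 12, 4800, ["en", "ru"],
                      ["botlike1", "botlike2"])
prompt = (profile_to_text(account)
          + " Is this account part of a coordinated influence campaign? Answer yes or no.")
```

The resulting `prompt` string would then be passed to an LLM; because the network and metadata are rendered as natural language, the same pipeline applies unchanged to multilingual content.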
Related papers
- Uncovering Agendas: A Novel French & English Dataset for Agenda Detection on Social Media [1.4999444543328293]
We present a methodology for detecting specific instances of agenda control through social media where annotated data is limited or non-existent.
By treating the task as a textual entailment problem, it is possible to overcome the requirement for a large annotated training dataset.
arXiv Detail & Related papers (2024-05-01T19:02:35Z)
- Discovering Latent Themes in Social Media Messaging: A Machine-in-the-Loop Approach Integrating LLMs [22.976609127865732]
We introduce a novel approach to uncovering latent themes in social media messaging.
Our work sheds light on the dynamic nature of social media, revealing the shifts in the thematic focus of messaging in response to real-world events.
arXiv Detail & Related papers (2024-03-15T21:54:00Z)
- SoMeLVLM: A Large Vision Language Model for Social Media Processing [78.47310657638567]
We introduce a Large Vision Language Model for Social Media Processing (SoMeLVLM)
SoMeLVLM is a cognitive framework equipped with five key capabilities including knowledge & comprehension, application, analysis, evaluation, and creation.
Our experiments demonstrate that SoMeLVLM achieves state-of-the-art performance in multiple social media tasks.
arXiv Detail & Related papers (2024-02-20T14:02:45Z)
- ManiTweet: A New Benchmark for Identifying Manipulation of News on Social Media [74.93847489218008]
We present a novel task, identifying manipulation of news on social media, which aims to detect manipulation in social media posts and identify manipulated or inserted information.
To study this task, we have proposed a data collection schema and curated a dataset called ManiTweet, consisting of 3.6K pairs of tweets and corresponding articles.
Our analysis demonstrates that this task is highly challenging, with large language models (LLMs) yielding unsatisfactory performance.
arXiv Detail & Related papers (2023-05-23T16:40:07Z)
- Countering Malicious Content Moderation Evasion in Online Social Networks: Simulation and Detection of Word Camouflage [64.78260098263489]
Twisting and camouflaging keywords are among the most used techniques to evade platform content moderation systems.
This article contributes to countering malicious information by developing multilingual tools to simulate and detect new content moderation evasion methods.
arXiv Detail & Related papers (2022-12-27T16:08:49Z)
- Exposing Influence Campaigns in the Age of LLMs: A Behavioral-Based AI Approach to Detecting State-Sponsored Trolls [8.202465737306222]
Detection of state-sponsored trolls operating in influence campaigns on social media is a critical and unsolved challenge.
We propose a new AI-based solution that identifies troll accounts solely through behavioral cues associated with their sequences of sharing activity.
arXiv Detail & Related papers (2022-10-17T07:01:17Z)
- Panning for gold: Lessons learned from the platform-agnostic automated detection of political content in textual data [48.7576911714538]
We discuss how these techniques can be used to detect political content across different platforms.
We compare the performance of three groups of detection techniques relying on dictionaries, supervised machine learning, or neural networks.
Our results show the limited impact of preprocessing on model performance, with the best results for less noisy data being achieved by neural network- and machine-learning-based models.
arXiv Detail & Related papers (2022-07-01T15:23:23Z)
- Ranking Micro-Influencers: a Novel Multi-Task Learning and Interpretable Framework [69.5850969606885]
We propose a novel multi-task learning framework to improve the state of the art in micro-influencer ranking based on multimedia content.
We show significant improvement both in terms of accuracy and model complexity.
The techniques for ranking and interpretation presented in this work can be generalised to arbitrary multimedia ranking tasks.
arXiv Detail & Related papers (2021-07-29T13:04:25Z)
- Multimodal Emergent Fake News Detection via Meta Neural Process Networks [36.52739834391597]
We propose an end-to-end fake news detection framework named MetaFEND.
Specifically, the proposed model integrates meta-learning and neural process methods together.
Extensive experiments are conducted on multimedia datasets collected from Twitter and Weibo.
arXiv Detail & Related papers (2021-06-22T21:21:29Z)
- Multimodal Categorization of Crisis Events in Social Media [81.07061295887172]
We present a new multimodal fusion method that leverages both images and texts as input.
In particular, we introduce a cross-attention module that can filter uninformative and misleading components from weak modalities.
We show that our method outperforms the unimodal approaches and strong multimodal baselines by a large margin on three crisis-related tasks.
arXiv Detail & Related papers (2020-04-10T06:31:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.