Social Media Influence Operations
- URL: http://arxiv.org/abs/2309.03670v1
- Date: Thu, 7 Sep 2023 12:18:07 GMT
- Title: Social Media Influence Operations
- Authors: Raphael Meier
- Abstract summary: This article reviews developments at the intersection of Large Language Models (LLMs) and influence operations.
LLMs are able to generate targeted and persuasive text which is for the most part indistinguishable from human-written content.
Mitigation measures for the near future are highlighted.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Social media platforms enable largely unrestricted many-to-many
communication. In times of crisis, they offer a space for collective
sense-making and give rise to new social phenomena (e.g. open-source
investigations). However, they also serve as a tool for threat actors to
conduct cyber-enabled social influence operations (CeSIOs) in order to shape
public opinion and interfere in decision-making processes. CeSIOs rely on the
employment of sock puppet accounts to engage authentic users in online
communication, exert influence, and subvert online discourse. Large Language
Models (LLMs) may further enhance the deceptive properties of sock puppet
accounts. Recent LLMs are able to generate targeted and persuasive text which
is for the most part indistinguishable from human-written content -- ideal
features for covert influence. This article reviews recent developments at the
intersection of LLMs and influence operations, summarizes LLMs' salience, and
explores the potential impact of LLM-instrumented sock puppet accounts for
CeSIOs. Finally, mitigation measures for the near future are highlighted.
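The mitigation theme in the abstract can be made concrete with a minimal sketch of one widely used counter-measure: flagging accounts that post near-duplicate text, a behavioral signature of traditional sock-puppet networks. The `flag_coordinated` helper, the similarity threshold, and the sample posts below are illustrative assumptions, not anything taken from the article:

```python
# Illustrative sketch (not from the article): flag coordinated accounts
# whose posts are near-duplicates, using bag-of-words cosine similarity.
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def flag_coordinated(posts: dict[str, str], threshold: float = 0.8) -> list[tuple[str, str]]:
    """Return account pairs whose posts are suspiciously similar."""
    vecs = {acct: Counter(text.lower().split()) for acct, text in posts.items()}
    accts = sorted(vecs)
    return [
        (a, b)
        for i, a in enumerate(accts)
        for b in accts[i + 1:]
        if cosine(vecs[a], vecs[b]) >= threshold
    ]

posts = {
    "acct_1": "the election was stolen share this everywhere",
    "acct_2": "the election was stolen share this now everywhere",
    "acct_3": "lovely weather for a bike ride today",
}
print(flag_coordinated(posts))  # acct_1 and acct_2 post near-identical text
```

A lexical check like this catches copy-paste sock puppets but not LLM-paraphrased content, which is precisely why the article treats LLM-instrumented accounts as a harder detection problem.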
Related papers
- Defending Against Social Engineering Attacks in the Age of LLMs [19.364994678178036]
Large Language Models (LLMs) can emulate human conversational patterns and facilitate chat-based social engineering (CSE) attacks.
This study investigates the dual capabilities of LLMs as both facilitators and defenders against CSE threats.
We propose ConvoSentinel, a modular defense pipeline that improves detection at both the message and the conversation levels.
arXiv Detail & Related papers (2024-06-18T04:39:40Z)
- GoEX: Perspectives and Designs Towards a Runtime for Autonomous LLM Applications [46.85306320942487]
Large Language Models (LLMs) are evolving to actively engage with tools and perform actions on real-world applications and services.
Today, humans verify the correctness and appropriateness of the LLM-generated outputs before putting them into real-world execution.
This poses significant challenges as code comprehension is well known to be notoriously difficult.
In this paper, we study how humans can efficiently collaborate with, delegate to, and supervise autonomous LLMs in the future.
arXiv Detail & Related papers (2024-04-10T11:17:33Z)
- The Wolf Within: Covert Injection of Malice into MLLM Societies via an MLLM Operative [55.08395463562242]
Multimodal Large Language Models (MLLMs) are constantly defining the new boundary of Artificial General Intelligence (AGI).
Our paper explores a novel vulnerability in MLLM societies - the indirect propagation of malicious content.
arXiv Detail & Related papers (2024-02-20T23:08:21Z)
- Feedback Loops With Language Models Drive In-Context Reward Hacking [78.9830398771605]
We show that feedback loops can cause in-context reward hacking (ICRH).
We identify and study two processes that lead to ICRH: output-refinement and policy-refinement.
As AI development accelerates, the effects of feedback loops will proliferate.
arXiv Detail & Related papers (2024-02-09T18:59:29Z)
- Negotiating with LLMS: Prompt Hacks, Skill Gaps, and Reasoning Deficits [1.4003044924094596]
We conduct a user study engaging over 40 individuals across all age groups in price negotiations with an LLM.
We show that the negotiated prices humans manage to achieve span a broad range, which points to a literacy gap in effectively interacting with LLMs.
arXiv Detail & Related papers (2023-11-26T08:44:58Z)
- LLM-Based Agent Society Investigation: Collaboration and Confrontation in Avalon Gameplay [57.202649879872624]
We present a novel framework designed to seamlessly adapt to Avalon gameplay.
The core of our proposed framework is a multi-agent system that enables efficient communication and interaction among agents.
Our results demonstrate the effectiveness of our framework in generating adaptive and intelligent agents.
arXiv Detail & Related papers (2023-10-23T14:35:26Z)
- Let Models Speak Ciphers: Multiagent Debate through Embeddings [84.20336971784495]
We introduce CIPHER (Communicative Inter-Model Protocol Through Embedding Representation) to address this issue.
By deviating from natural language, CIPHER offers an advantage of encoding a broader spectrum of information without any modification to the model weights.
This showcases the superiority and robustness of embeddings as an alternative "language" for communication among LLMs.
arXiv Detail & Related papers (2023-10-10T03:06:38Z)
- Quantifying the Impact of Large Language Models on Collective Opinion Dynamics [7.0012506428382375]
We create an opinion network dynamics model to encode the opinions of large language models (LLMs).
The results suggest that the output opinion of LLMs has a unique and positive effect on the collective opinion difference.
Our experiments also find that by introducing extra agents with opposite/neutral/random opinions, we can effectively mitigate the impact of biased/toxic output.
arXiv Detail & Related papers (2023-08-07T05:45:17Z)
- Quantifying the Vulnerabilities of the Online Public Square to Adversarial Manipulation Tactics [43.98568073610101]
We use a social media model to quantify the impacts of several adversarial manipulation tactics on the quality of content.
We find that the presence of influential accounts, a hallmark of social media, exacerbates the vulnerabilities of online communities to manipulation.
These insights suggest countermeasures that platforms could employ to increase the resilience of social media users to manipulation.
arXiv Detail & Related papers (2019-07-13T21:12:08Z)
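The opinion-dynamics finding listed above (extra agents with opposite opinions offsetting a biased agent's influence) can be illustrated with a classic DeGroot-style averaging model. This is a generic sketch under assumed dynamics, not that paper's actual model; all names and parameters below are hypothetical:

```python
# Illustrative DeGroot-style sketch: each non-stubborn agent repeatedly adopts
# the network mean opinion; stubborn agents (e.g. an LLM pushing a fixed view)
# never update. Adding a counter-agent with the opposite fixed opinion pulls
# the human consensus back toward neutral.

def run_dynamics(opinions, fixed, steps=200):
    """Repeated averaging over a fully connected network.
    `opinions` holds initial values in [-1, 1]; indices in `fixed` are
    stubborn agents that never update."""
    ops = list(opinions)
    n = len(ops)
    for _ in range(steps):
        mean = sum(ops) / n
        ops = [ops[i] if i in fixed else mean for i in range(n)]
    return ops

# Five neutral human agents plus one biased stubborn agent (opinion +1, index 5).
biased = run_dynamics([0.0] * 5 + [1.0], fixed={5})
# Same population plus a counter-agent with the opposite fixed opinion (index 6).
countered = run_dynamics([0.0] * 5 + [1.0, -1.0], fixed={5, 6})

human_shift_biased = sum(biased[:5]) / 5      # drifts toward +1
human_shift_countered = sum(countered[:5]) / 5  # stays near 0
print(human_shift_biased, human_shift_countered)
```

In this toy setting the single stubborn agent eventually drags every human opinion to its own value, while the opposite-opinion counter-agent cancels its pull exactly, mirroring the mitigation effect the summary describes.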
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.