Social Media Influence Operations
- URL: http://arxiv.org/abs/2309.03670v1
- Date: Thu, 7 Sep 2023 12:18:07 GMT
- Title: Social Media Influence Operations
- Authors: Raphael Meier
- Abstract summary: This article reviews developments at the intersection of Large Language Models (LLMs) and influence operations.
LLMs are able to generate targeted and persuasive text which is for the most part indistinguishable from human-written content.
Mitigation measures for the near future are highlighted.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Social media platforms enable largely unrestricted many-to-many
communication. In times of crisis, they offer a space for collective
sense-making and give rise to new social phenomena (e.g. open-source
investigations). However, they also serve as a tool for threat actors to
conduct cyber-enabled social influence operations (CeSIOs) in order to shape
public opinion and interfere in decision-making processes. CeSIOs rely on the
employment of sock puppet accounts to engage authentic users in online
communication, exert influence, and subvert online discourse. Large Language
Models (LLMs) may further enhance the deceptive properties of sock puppet
accounts. Recent LLMs are able to generate targeted and persuasive text which
is for the most part indistinguishable from human-written content -- ideal
features for covert influence. This article reviews recent developments at the
intersection of LLMs and influence operations, summarizes LLMs' salience, and
explores the potential impact of LLM-instrumented sock puppet accounts for
CeSIOs. Finally, mitigation measures for the near future are highlighted.
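The mitigation measures themselves are only highlighted, not detailed, in this abstract. As a hedged illustration of one commonly proposed countermeasure, machine-generated-text screening, the sketch below flags text with unusually low perplexity under an open language model. The model choice ("gpt2") and the threshold are assumptions for illustration, not the paper's method.

```python
# Minimal sketch: perplexity-based screening of possibly machine-generated
# text. Assumes `pip install torch transformers`; the model and threshold
# are illustrative assumptions, not taken from the paper.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

model = GPT2LMHeadModel.from_pretrained("gpt2")
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Perplexity of `text` under GPT-2; LLM output tends to score lower."""
    ids = tokenizer(text, return_tensors="pt").input_ids  # assumes < 1024 tokens
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean cross-entropy per token
    return float(torch.exp(loss))

def looks_machine_generated(text: str, threshold: float = 20.0) -> bool:
    # Low perplexity means the text is unusually predictable to the LM.
    return perplexity(text) < threshold
```

Perplexity heuristics are brittle against newer models and paraphrasing, so they are at best one signal among many rather than a complete mitigation.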
Related papers
- LLM Echo Chamber: personalized and automated disinformation [0.0]
Large Language Models can spread persuasive, humanlike misinformation at scale, which could influence public opinion.
This study examines these risks, focusing on LLMs' ability to propagate misinformation as if it were factual.
To investigate this, we built the LLM Echo Chamber, a controlled digital environment simulating social media chatrooms, where misinformation often spreads.
This setup, evaluated by GPT-4 for persuasiveness and harmfulness, sheds light on the ethical concerns surrounding LLMs and emphasizes the need for stronger safeguards against misinformation.
arXiv Detail & Related papers (2024-09-24T17:04:12Z)
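The paper's actual environment is not reproduced in this summary; the sketch below shows only the generic shape of such a chatroom simulation, with a hypothetical `call_llm` stand-in for any chat-completion client.

```python
# Minimal sketch of an echo-chamber chatroom loop: a persuader agent
# repeatedly steers a shared history toward one claim while another agent
# reacts to it. `call_llm` is a hypothetical stub, not the paper's setup.
def call_llm(instruction: str, history: list[str]) -> str:
    # Hypothetical stand-in; a real study would query an actual model here.
    return f"[reply to {instruction!r} after {len(history)} messages]"

def run_chatroom(claim: str, rounds: int = 5) -> list[str]:
    history: list[str] = []
    for _ in range(rounds):
        # The persuader always pushes the room back toward its claim.
        history.append(call_llm(f"Argue that: {claim}", history))
        # A regular participant only reacts to what it has seen so far.
        history.append(call_llm("React to the conversation so far.", history))
    return history
```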
- Truthful Aggregation of LLMs with an Application to Online Advertising [11.552000005640203]
We introduce MOSAIC, an auction mechanism under which truthful reporting is a dominant strategy for advertisers.
We show that MOSAIC leads to high advertiser value and platform revenue with low computational overhead.
arXiv Detail & Related papers (2024-05-09T17:01:31Z)
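MOSAIC's mechanism is not detailed in this summary. As a minimal illustration of the property claimed, truthful reporting being a dominant strategy, the sketch below implements the classic second-price (Vickrey) auction; it is a textbook stand-in, not MOSAIC itself.

```python
# Minimal sketch: a second-price (Vickrey) auction, the textbook mechanism
# in which truthful bidding is a dominant strategy. A stand-in illustration,
# not the MOSAIC mechanism from the paper.
def second_price_auction(bids: dict[str, float]) -> tuple[str, float]:
    """Return (winner, price): highest bidder wins, pays second-highest bid."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, _ = ranked[0]
    price = ranked[1][1] if len(ranked) > 1 else 0.0
    return winner, price

# Truthfulness: the winner's payment does not depend on their own bid,
# so overbidding risks overpaying and underbidding risks losing, while
# bidding one's true value never hurts.
winner, price = second_price_auction({"ad_a": 3.0, "ad_b": 5.0, "ad_c": 4.0})
print(winner, price)  # ad_b wins and pays 4.0
```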
- GoEX: Perspectives and Designs Towards a Runtime for Autonomous LLM Applications [46.85306320942487]
Large Language Models (LLMs) are evolving to actively engage with tools and to perform actions in real-world applications and services.
Today, humans verify the correctness and appropriateness of the LLM-generated outputs before putting them into real-world execution.
This poses significant challenges, as code comprehension is notoriously difficult.
In this paper, we study how humans can efficiently collaborate with, delegate to, and supervise autonomous LLMs in the future.
arXiv Detail & Related papers (2024-04-10T11:17:33Z)
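GoEX's actual runtime design is not described in this summary; the sketch below shows only the generic pattern it builds on, gating an LLM-proposed action behind explicit human approval with a registered undo. All names and the structure are illustrative assumptions, not the GoEX API.

```python
# Minimal sketch of gating LLM-proposed actions behind human approval,
# with an undo hook for reversibility. Generic pattern only; names and
# structure are illustrative, not the GoEX API.
from dataclasses import dataclass
from typing import Callable

@dataclass
class ProposedAction:
    description: str          # human-readable summary for the reviewer
    run: Callable[[], None]   # the effectful operation
    undo: Callable[[], None]  # best-effort reversal if it goes wrong

def execute_with_approval(action: ProposedAction) -> bool:
    answer = input(f"LLM proposes: {action.description}. Approve? [y/N] ")
    if answer.strip().lower() != "y":
        return False
    try:
        action.run()
        return True
    except Exception:
        action.undo()  # roll back on failure
        raise

# Example: a reversible file write proposed by an agent.
import pathlib
path = pathlib.Path("note.txt")
action = ProposedAction(
    description="write 'hello' to note.txt",
    run=lambda: path.write_text("hello"),
    undo=lambda: path.unlink(missing_ok=True),
)
# execute_with_approval(action)
```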
- The Wolf Within: Covert Injection of Malice into MLLM Societies via an MLLM Operative [55.08395463562242]
Multimodal Large Language Models (MLLMs) are redefining the boundary of Artificial General Intelligence (AGI).
Our paper explores a novel vulnerability in MLLM societies: the indirect propagation of malicious content.
arXiv Detail & Related papers (2024-02-20T23:08:21Z)
- Feedback Loops With Language Models Drive In-Context Reward Hacking [78.9830398771605]
We show that feedback loops can cause in-context reward hacking (ICRH).
We identify and study two processes that lead to ICRH: output-refinement and policy-refinement.
As AI development accelerates, the effects of feedback loops will proliferate.
arXiv Detail & Related papers (2024-02-09T18:59:29Z)
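The paper's experimental setup is not reproduced in this summary; the toy loop below shows only the general shape of output-refinement: iteratively revising output against a proxy feedback signal (here, a crude "engagement" score) games the proxy rather than the intended goal. Everything here is illustrative.

```python
# Toy sketch of an output-refinement feedback loop behind in-context
# reward hacking: a generator revises its output to raise a proxy metric,
# and the proxy gets gamed. Illustrative only; not the paper's setup.
def engagement_proxy(post: str) -> float:
    # A crude stand-in for real-world feedback: rewards hype markers.
    return post.count("!") + sum(w.isupper() for w in post.split())

def refine(post: str) -> str:
    # Stand-in for "the model revises its answer given feedback": the
    # cheapest edit that raises the proxy is adding more hype.
    return post.upper() + "!"

post = "Our product ships next week."
for step in range(5):
    candidate = refine(post)
    if engagement_proxy(candidate) > engagement_proxy(post):
        post = candidate  # the loop accepts the "better" post
print(post)  # hype-inflated output: the proxy went up, the quality down
```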
- Privacy in Large Language Models: Attacks, Defenses and Future Directions [84.73301039987128]
We analyze the current privacy attacks targeting large language models (LLMs) and categorize them according to the adversary's assumed capabilities.
We present a detailed overview of prominent defense strategies that have been developed to counter these privacy attacks.
arXiv Detail & Related papers (2023-10-16T13:23:54Z)
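The summary does not enumerate the attacks; as one concrete, widely studied example of the kind such surveys categorize, the sketch below implements loss-threshold membership inference: a sample with unusually low loss under the model is guessed to have been in its training set. The threshold is an illustrative assumption.

```python
# Minimal sketch of a classic privacy attack: loss-threshold membership
# inference against a classifier. The threshold is illustrative and would
# be calibrated in practice.
import torch
import torch.nn.functional as F

def membership_guess(model: torch.nn.Module,
                     x: torch.Tensor,
                     y: torch.Tensor,
                     threshold: float = 0.5) -> bool:
    """Return True if (x, y) is guessed to be a training-set member."""
    model.eval()
    with torch.no_grad():
        loss = F.cross_entropy(model(x.unsqueeze(0)), y.unsqueeze(0))
    # Overfit models assign lower loss to examples they were trained on.
    return loss.item() < threshold
```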
- Let Models Speak Ciphers: Multiagent Debate through Embeddings [84.20336971784495]
We introduce CIPHER (Communicative Inter-Model Protocol Through Embedding Representation) to address the information lost when models must commit to a single sampled token.
By deviating from natural language, CIPHER offers an advantage of encoding a broader spectrum of information without any modification to the model weights.
This showcases the superiority and robustness of embeddings as an alternative "language" for communication among LLMs.
arXiv Detail & Related papers (2023-10-10T03:06:38Z)
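As a minimal sketch of CIPHER's core move: instead of committing to one sampled token, a model can pass on the probability-weighted average of token embeddings, which preserves the whole output distribution. The shapes below are illustrative; this is not the authors' implementation.

```python
# Minimal sketch of CIPHER's core idea: instead of sampling one token
# (which throws away the rest of the distribution), a model emits the
# expectation of token embeddings under its next-token distribution.
# Shapes are illustrative; not the authors' code.
import torch

vocab_size, embed_dim = 50_000, 768
embedding_matrix = torch.randn(vocab_size, embed_dim)  # token embeddings
logits = torch.randn(vocab_size)                       # next-token logits

# Ordinary decoding commits to a single token id:
token_id = int(torch.argmax(logits))
hard_message = embedding_matrix[token_id]              # (embed_dim,)

# CIPHER-style decoding sends a soft, distribution-weighted embedding:
probs = torch.softmax(logits, dim=-1)                  # (vocab_size,)
soft_message = probs @ embedding_matrix                # (embed_dim,)
# Another LLM sharing the same embedding space can consume `soft_message`
# directly, receiving strictly more information than the argmax token.
```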
- Quantifying the Impact of Large Language Models on Collective Opinion Dynamics [7.0012506428382375]
We create an opinion network dynamics model to encode the opinions of large language models (LLMs).
The results suggest that the output opinion of LLMs has a unique and positive effect on the collective opinion difference.
Our experiments also show that by introducing extra agents with opposite, neutral, or random opinions, we can effectively mitigate the impact of biased or toxic output.
arXiv Detail & Related papers (2023-08-07T05:45:17Z)
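The paper's own opinion network model is not reproduced in this summary; as a hedged stand-in, the sketch below runs classic DeGroot-style averaging with one stubborn "LLM" node to show how a fixed injected opinion drags the collective opinion toward it. All parameters are illustrative.

```python
# Minimal sketch: DeGroot-style opinion averaging with one stubborn "LLM"
# node whose opinion never updates. Classic model, not the paper's own;
# all parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n = 20                                 # human agents
opinions = rng.uniform(-1, 1, size=n)  # initial opinions in [-1, 1]
llm_opinion = 0.8                      # fixed opinion injected by the LLM node
weight_llm = 0.2                       # attention paid to the LLM each round

for _ in range(100):
    neighbor_mean = opinions.mean()    # fully mixed population, for simplicity
    opinions = ((1 - weight_llm) * (opinions + neighbor_mean) / 2
                + weight_llm * llm_opinion)

print(round(float(opinions.mean()), 3))  # converges toward llm_opinion
```

Adding a second stubborn node with the opposite opinion pulls the average back toward neutral, in the spirit of the mitigation described above.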
- Quantifying the Vulnerabilities of the Online Public Square to Adversarial Manipulation Tactics [43.98568073610101]
We use a social media model to quantify the impacts of several adversarial manipulation tactics on the quality of content.
We find that the presence of influential accounts, a hallmark of social media, exacerbates the vulnerabilities of online communities to manipulation.
These insights suggest countermeasures that platforms could employ to increase the resilience of social media users to manipulation.
arXiv Detail & Related papers (2019-07-13T21:12:08Z)
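The underlying model is not specified in this summary; the toy simulation below, a generic popularity-biased resharing process, illustrates the mechanism such models formalize: influential accounts seeding low-quality items can dominate attention when resharing tracks popularity rather than quality. Entirely illustrative.

```python
# Toy sketch: a feed where reshares follow popularity rather than quality.
# A few "influential" accounts seeding low-quality items crowd out better
# content. Illustrative only; not the paper's model.
import random

random.seed(0)
# Each item: [quality in (0, 1), popularity]. Influential accounts give
# their low-quality items a large popularity head start.
items = [[random.random(), 1] for _ in range(50)]  # organic items
items += [[0.05, 50] for _ in range(5)]            # boosted junk

for _ in range(2000):
    # Agents reshare proportionally to current popularity, not quality.
    weights = [pop for _, pop in items]
    chosen = random.choices(items, weights=weights)[0]
    chosen[1] += 1                                 # rich get richer

top = sorted(items, key=lambda it: it[1], reverse=True)[:10]
print("mean quality of 10 most popular items:",
      round(sum(q for q, _ in top) / 10, 2))       # skews low
```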
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.