ElecTwit: A Framework for Studying Persuasion in Multi-Agent Social Systems
- URL: http://arxiv.org/abs/2601.00994v1
- Date: Fri, 02 Jan 2026 22:10:09 GMT
- Title: ElecTwit: A Framework for Studying Persuasion in Multi-Agent Social Systems
- Authors: Michael Bao
- Abstract summary: ElecTwit is a simulation framework designed to study persuasion within multi-agent systems. We observed the comprehensive use of 25 specific persuasion techniques across most tested LLMs.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper introduces ElecTwit, a simulation framework designed to study persuasion within multi-agent systems, specifically emulating the interactions on social media platforms during a political election. By grounding our experiments in a realistic environment, we aimed to overcome the limitations of game-based simulations often used in prior research. We observed the comprehensive use of 25 specific persuasion techniques across most tested LLMs, encompassing a wider range than previously reported. The variations in technique usage and overall persuasion output between models highlight how different model architectures and training can impact the dynamics in realistic social simulations. Additionally, we observed unique phenomena such as "kernel of truth" messages and spontaneous developments with an "ink" obsession, where agents collectively demanded written proof. Our study provides a foundation for evaluating persuasive LLM agents in real-world contexts, ensuring alignment and preventing dangerous outcomes.
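The abstract describes agents exchanging persuasive messages on a simulated social feed and shifting their candidate preferences in response. The paper does not publish its implementation here, but the core loop can be illustrated with a minimal, hypothetical sketch: agent names, the belief scale, and the bounded belief-update rule are all illustrative assumptions, and the message stub stands in for what would be an LLM-generated post in the actual framework.

```python
class Agent:
    """A social-media agent with a candidate preference in [-1.0, +1.0]."""

    def __init__(self, name, belief, persuasiveness):
        self.name = name
        self.belief = belief              # -1.0 favors candidate A, +1.0 favors B
        self.persuasiveness = persuasiveness

    def post(self):
        # In the real framework this message would be generated by an LLM;
        # here it is reduced to a stance plus a persuasive strength.
        stance = "A" if self.belief < 0 else "B"
        return {"author": self.name, "stance": stance,
                "strength": abs(self.belief) * self.persuasiveness}

    def read(self, message, susceptibility=0.1):
        # Bounded belief update nudged toward the poster's stance.
        direction = -1.0 if message["stance"] == "A" else 1.0
        self.belief += susceptibility * message["strength"] * direction
        self.belief = max(-1.0, min(1.0, self.belief))


def run_round(agents):
    """One timestep: every agent posts, then every agent reads the feed."""
    feed = [a.post() for a in agents]
    for agent in agents:
        for msg in feed:
            if msg["author"] != agent.name:
                agent.read(msg)
    return feed


agents = [Agent("alice", -0.8, 0.9),
          Agent("bob", 0.6, 0.5),
          Agent("carol", 0.1, 0.2)]
for _ in range(5):
    run_round(agents)
print([round(a.belief, 2) for a in agents])
```

Measuring which agents drift toward which stance across rounds is the kind of signal the paper aggregates; the LLM-backed version would additionally classify each generated message against the 25 persuasion techniques.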
Related papers
- Simulating Students with Large Language Models: A Review of Architecture, Mechanisms, and Role Modelling in Education with Generative AI [0.8703455323398351]
A review of studies using large language models (LLMs) to simulate student behaviour across educational environments. We review current evidence on the capacity of LLM-based agents to emulate learner archetypes, respond to instructional inputs, and interact within multi-agent classroom scenarios. We examine the implications of such systems for curriculum development, instructional evaluation, and teacher training.
arXiv Detail & Related papers (2025-11-08T17:23:13Z) - Leveraging LLM-based agents for social science research: insights from citation network simulations [132.4334196445918]
We introduce the CiteAgent framework, designed to generate citation networks based on human-behavior simulation. CiteAgent captures predominant phenomena in real-world citation networks, including power-law distribution, citational distortion, and shrinking diameter. We establish two LLM-based research paradigms in social science, allowing us to validate and challenge existing theories.
arXiv Detail & Related papers (2025-11-05T08:47:04Z) - DeceptionBench: A Comprehensive Benchmark for AI Deception Behaviors in Real-world Scenarios [57.327907850766785]
The characterization of deception across realistic scenarios remains underexplored. We establish DeceptionBench, the first benchmark that systematically evaluates how deceptive tendencies manifest across different domains. On the intrinsic dimension, we explore whether models exhibit self-interested egoistic tendencies or sycophantic behaviors that prioritize user appeasement. We incorporate sustained multi-turn interaction loops to construct a more realistic simulation of real-world feedback dynamics.
arXiv Detail & Related papers (2025-10-17T10:14:26Z) - Reimagining Agent-based Modeling with Large Language Model Agents via Shachi [16.625794969005966]
The study of emergent behaviors in large language model (LLM)-driven multi-agent systems is a critical research challenge. We introduce Shachi, a formal methodology and modular framework that decomposes an agent's policy into core cognitive components. We validate our methodology on a comprehensive 10-task benchmark and demonstrate its power through novel scientific inquiries.
arXiv Detail & Related papers (2025-09-26T04:38:59Z) - Disagreements in Reasoning: How a Model's Thinking Process Dictates Persuasion in Multi-Agent Systems [49.69773210844221]
This paper challenges the prevailing hypothesis that persuasive efficacy is primarily a function of model scale. Through a series of multi-agent persuasion experiments, we uncover a fundamental trade-off we term the Persuasion Duality. Our findings reveal that the reasoning process in LRMs exhibits significantly greater resistance to persuasion, maintaining their initial beliefs more robustly.
arXiv Detail & Related papers (2025-09-25T12:03:10Z) - LLM-based Agentic Reasoning Frameworks: A Survey from Methods to Scenarios [63.08653028889316]
We propose a systematic taxonomy that decomposes agentic reasoning frameworks and analyzes how reasoning operates at the framework level. Specifically, we propose a unified formal language to further classify agentic reasoning systems into single-agent methods, tool-based methods, and multi-agent methods. We provide a comprehensive review of their key application scenarios in scientific discovery, healthcare, software engineering, social simulation, and economics.
arXiv Detail & Related papers (2025-08-25T06:01:16Z) - DICE: Dynamic In-Context Example Selection in LLM Agents via Efficient Knowledge Transfer [50.64531021352504]
Large language model-based agents, empowered by in-context learning (ICL), have demonstrated strong capabilities in complex reasoning and tool-use tasks. Existing approaches typically rely on fixed example selection, including in agentic or multi-step settings. We propose DICE, a theoretically grounded ICL framework for agentic tasks that selects the most relevant demonstrations at each step of reasoning.
arXiv Detail & Related papers (2025-07-31T13:42:14Z) - Integrating LLM in Agent-Based Social Simulation: Opportunities and Challenges [0.7739037410679168]
The paper reviews recent findings on the ability of Large Language Models to replicate key aspects of human cognition. The second part surveys emerging applications of LLMs in multi-agent simulation frameworks. The paper concludes by advocating for hybrid approaches that integrate LLMs into traditional agent-based modeling platforms.
arXiv Detail & Related papers (2025-07-25T15:15:35Z) - Large Language Model Driven Agents for Simulating Echo Chamber Formation [5.6488384323017]
The rise of echo chambers on social media platforms has heightened concerns about polarization and the reinforcement of existing beliefs. Traditional approaches for simulating echo chamber formation have often relied on predefined rules and numerical simulations. We present a novel framework that leverages large language models (LLMs) as generative agents to simulate echo chamber dynamics.
arXiv Detail & Related papers (2025-02-25T12:05:11Z) - Persuasion with Large Language Models: a Survey [49.86930318312291]
Large Language Models (LLMs) have created new disruptive possibilities for persuasive communication.
In areas such as politics, marketing, public health, e-commerce, and charitable giving, such LLM Systems have already achieved human-level or even super-human persuasiveness.
Our survey suggests that the current and future potential of LLM-based persuasion poses profound ethical and societal risks.
arXiv Detail & Related papers (2024-11-11T10:05:52Z) - Designing Domain-Specific Large Language Models: The Critical Role of Fine-Tuning in Public Opinion Simulation [0.0]
This paper introduces a novel fine-tuning approach that integrates socio-demographic data from the UK Household Longitudinal Study. By emulating diverse synthetic profiles, the fine-tuned models significantly outperform pre-trained counterparts. Its broader implications include deploying LLMs in domains like healthcare and education, fostering inclusive and data-driven decision-making.
arXiv Detail & Related papers (2024-09-28T10:39:23Z) - PersLLM: A Personified Training Approach for Large Language Models [66.16513246245401]
We propose PersLLM, a framework for better data construction and model tuning. For insufficient data usage, we incorporate strategies such as Chain-of-Thought prompting and anti-induction. For rigid behavior patterns, we design the tuning process and introduce automated DPO to enhance the specificity and dynamism of the models' personalities.
arXiv Detail & Related papers (2024-07-17T08:13:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.