A Hybrid Theory and Data-driven Approach to Persuasion Detection with Large Language Models
- URL: http://arxiv.org/abs/2511.22109v1
- Date: Thu, 27 Nov 2025 04:59:52 GMT
- Title: A Hybrid Theory and Data-driven Approach to Persuasion Detection with Large Language Models
- Authors: Gia Bao Hoang, Keith J Ransom, Rachel Stephens, Carolyn Semmler, Nicolas Fay, Lewis Mitchell
- Abstract summary: We develop a model that predicts successful persuasion using features derived from psychological experiments. Our findings provide insights into the characteristics of persuasive messages. This work has broader applications in fields such as online influence detection and misinformation mitigation.
- Score: 0.9236074230806578
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Traditional psychological models of belief revision focus on face-to-face interactions, but with the rise of social media, more effective models are needed to capture belief revision at scale in rich, text-based online discourse. Here, we use a hybrid approach, utilizing large language models (LLMs) to develop a model that predicts successful persuasion using features derived from psychological experiments. Our approach leverages LLM-generated ratings of features previously examined in the literature to build a random forest classification model that predicts whether a message will result in belief change. Of the eight features tested, *epistemic emotion* and *willingness to share* were the top-ranking predictors of belief change in the model. Our findings provide insights into the characteristics of persuasive messages and demonstrate how LLMs can enhance models of successful persuasion based on psychological theory. Given these insights, this work has broader applications in fields such as online influence detection and misinformation mitigation, as well as measuring the effectiveness of online narratives.
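As a concrete illustration of the two-stage pipeline the abstract describes, here is a minimal, hedged sketch in Python: an LLM rating stage (stubbed out with random scores, since the paper's prompts and rating scale are not reproduced here) feeding a scikit-learn random forest that predicts belief change and ranks feature importances. Only *epistemic emotion* and *willingness to share* come from the abstract; the other six feature names and all of the data below are hypothetical placeholders, not the authors' actual setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Only the first two feature names come from the abstract; the remaining six
# are hypothetical stand-ins for the eight features the paper tests.
FEATURES = [
    "epistemic_emotion", "willingness_to_share",
    "argument_strength", "source_credibility",
    "emotional_valence", "novelty",
    "clarity", "personal_relevance",
]

def rate_message(message: str) -> np.ndarray:
    """Stage 1 (stub): in the paper, an LLM is prompted to rate each
    psychological feature of a message. Here we return random ratings on a
    hypothetical 1-7 scale so the sketch runs end to end."""
    rng = np.random.default_rng(abs(hash(message)) % (2**32))
    return rng.uniform(1.0, 7.0, size=len(FEATURES))

# Synthetic corpus standing in for experimentally labelled messages.
messages = [f"message {i}" for i in range(400)]
X = np.stack([rate_message(m) for m in messages])
# Synthetic labels that, by construction, depend on the first two features.
noise = np.random.default_rng(0).normal(0.0, 1.0, len(X))
y = (X[:, 0] + X[:, 1] + noise > 8.0).astype(int)

# Stage 2: random forest classifier predicting belief change from the ratings.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"held-out accuracy: {accuracy_score(y_te, clf.predict(X_te)):.2f}")

# Feature ranking, analogous to how the paper identifies its top predictors.
for name, imp in sorted(zip(FEATURES, clf.feature_importances_), key=lambda t: -t[1]):
    print(f"{name:22s} {imp:.3f}")
```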
Related papers
- ElecTwit: A Framework for Studying Persuasion in Multi-Agent Social Systems [0.0]
ElecTwit is a simulation framework designed to study persuasion within multi-agent systems. We observed the comprehensive use of 25 specific persuasion techniques across most tested LLMs.
arXiv Detail & Related papers (2026-01-02T22:10:09Z) - MMPersuade: A Dataset and Evaluation Framework for Multimodal Persuasion [73.99171322670772]
Large Vision-Language Models (LVLMs) are increasingly deployed in domains such as shopping, health, and news. MMPersuade provides a unified framework for systematically studying multimodal persuasion dynamics in LVLMs.
arXiv Detail & Related papers (2025-10-26T17:39:21Z) - Disagreements in Reasoning: How a Model's Thinking Process Dictates Persuasion in Multi-Agent Systems [49.69773210844221]
This paper challenges the prevailing hypothesis that persuasive efficacy is primarily a function of model scale. Through a series of multi-agent persuasion experiments, we uncover a fundamental trade-off we term the Persuasion Duality. Our findings reveal that the reasoning process in LRMs exhibits significantly greater resistance to persuasion, leading them to maintain their initial beliefs more robustly.
arXiv Detail & Related papers (2025-09-25T12:03:10Z) - Conceptual Contrastive Edits in Textual and Vision-Language Retrieval [1.8591405259852054]
We employ post-hoc conceptual contrastive edits to expose noteworthy patterns and biases imprinted in representations of retrieval models. We apply these edits to explain both linguistic and visiolinguistic pre-trained models in a black-box manner. We also introduce a novel metric to assess the per-word impact of contrastive interventions on model outcomes.
arXiv Detail & Related papers (2025-03-01T10:14:28Z) - Among Them: A game-based framework for assessing persuasion capabilities of LLMs [0.8763629723457529]
Large language models (LLMs) and autonomous AI agents have raised concerns about their potential for automated persuasion and social influence. We present an Among Us-inspired game framework for assessing LLM deception skills in a controlled environment.
arXiv Detail & Related papers (2025-02-27T12:26:21Z) - Synthetic Social Media Influence Experimentation via an Agentic Reinforcement Learning Large Language Model Bot [7.242974711907219]
This study provides a novel simulated environment that combines agentic intelligence with Large Language Models (LLMs) to test topic-specific influence mechanisms. Our framework contains agents that generate posts, form opinions on specific topics, and socially follow/unfollow each other based on the outcome of discussions.
arXiv Detail & Related papers (2024-11-29T11:37:12Z) - Trustworthy Alignment of Retrieval-Augmented Large Language Models via Reinforcement Learning [84.94709351266557]
We focus on the trustworthiness of language models with respect to retrieval augmentation.
We argue that retrieval-augmented language models have the inherent capability of supplying responses according to both contextual and parametric knowledge.
Inspired by aligning language models with human preference, we take the first step towards aligning retrieval-augmented language models to a state where they respond relying solely on external evidence.
arXiv Detail & Related papers (2024-10-22T09:25:21Z) - PersLLM: A Personified Training Approach for Large Language Models [66.16513246245401]
We propose PersLLM, a framework for better data construction and model tuning. To address insufficient data usage, we incorporate strategies such as Chain-of-Thought prompting and anti-induction. To address rigid behavior patterns, we design the tuning process and introduce automated DPO to enhance the specificity and dynamism of the models' personalities.
arXiv Detail & Related papers (2024-07-17T08:13:22Z) - What if...?: Thinking Counterfactual Keywords Helps to Mitigate Hallucination in Large Multi-modal Models [50.97705264224828]
We propose Counterfactual Inception, a novel method that implants counterfactual thinking into Large Multi-modal Models.
We aim for the models to engage with a wider contextual scene understanding and to generate responses that reflect it.
Comprehensive analyses across various LMMs, including both open-source and proprietary models, corroborate that counterfactual thinking significantly reduces hallucination.
arXiv Detail & Related papers (2024-03-20T11:27:20Z) - Decoding the Silent Majority: Inducing Belief Augmented Social Graph with Large Language Model for Response Forecasting [74.68371461260946]
SocialSense is a framework that induces a belief-centered graph on top of an existent social network, along with graph-based propagation to capture social dynamics.
Our method surpasses the existing state of the art in experimental evaluations for both zero-shot and supervised settings.
arXiv Detail & Related papers (2023-10-20T06:17:02Z) - Interpretable Fake News Detection with Topic and Deep Variational Models [2.15242029196761]
We focus on fake news detection using interpretable features and methods.
We have developed a deep probabilistic model that integrates a dense representation of textual news.
Our model achieves comparable performance to state-of-the-art competing models.
arXiv Detail & Related papers (2022-09-04T05:31:00Z) - The drivers of online polarization: fitting models to data [0.0]
The echo chamber effect and opinion polarization may be driven by several factors, including human biases in information consumption and personalized recommendations produced by feed algorithms.
Until now, studies have mainly used opinion dynamics models to explore the mechanisms behind the emergence of polarization and echo chambers.
We provide a method to numerically compare the opinion distributions obtained from simulations with those measured on social media (for one concrete way such a comparison can be made, see the sketch after this list).
arXiv Detail & Related papers (2022-05-31T17:00:41Z) - Explain, Edit, and Understand: Rethinking User Study Design for Evaluating Model Explanations [97.91630330328815]
We conduct a crowdsourcing study, where participants interact with deception detection models that have been trained to distinguish between genuine and fake hotel reviews.
We observe that for a linear bag-of-words model, participants with access to the feature coefficients during training are able to cause a larger reduction in model confidence in the testing phase when compared to the no-explanation control.
arXiv Detail & Related papers (2021-12-17T18:29:56Z) - Learning Opinion Dynamics From Social Traces [25.161493874783584]
We propose an inference mechanism for fitting a generative, agent-like model of opinion dynamics to real-world social traces.
We showcase our proposal by translating a classical agent-based model of opinion dynamics into its generative counterpart.
We apply our model to real-world data from Reddit to explore the long-standing question about the impact of the backfire effect.
arXiv Detail & Related papers (2020-06-02T14:48:17Z)
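The polarization-fitting entry above ("The drivers of online polarization: fitting models to data") mentions numerically comparing simulated opinion distributions with those measured on social media. Below is a minimal sketch of one standard way to do this, assuming opinions live on a continuous scale and using the Wasserstein (earth mover's) distance as the comparison metric; the synthetic samples are illustrative, and the paper's actual metric may differ.

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(42)

# Hypothetical model output: a unimodal distribution of simulated opinions.
simulated = rng.normal(loc=0.0, scale=0.5, size=5000)

# Hypothetical empirical sample: a bimodal, "polarized" opinion distribution
# of the kind often measured on social media.
observed = np.concatenate([
    rng.normal(-0.6, 0.2, 2500),
    rng.normal(0.6, 0.2, 2500),
])

# A smaller distance means the simulation better matches the measured data;
# model parameters can be fit by minimizing this quantity.
print(f"Wasserstein distance: {wasserstein_distance(simulated, observed):.3f}")
```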