Susceptibility to Influence of Large Language Models
- URL: http://arxiv.org/abs/2303.06074v1
- Date: Fri, 10 Mar 2023 16:53:30 GMT
- Title: Susceptibility to Influence of Large Language Models
- Authors: Lewis D Griffin, Bennett Kleinberg, Maximilian Mozes, Kimberly T Mai,
Maria Vau, Matthew Caldwell and Augustine Mavor-Parker
- Abstract summary: Two studies tested the hypothesis that a Large Language Model (LLM) can be used to model psychological change following exposure to influential input.
The first study tested a generic mode of influence - the Illusory Truth Effect (ITE).
The second study concerns a specific mode of influence - populist framing of news to increase its persuasion and political mobilization.
- Score: 5.931099001882958
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Two studies tested the hypothesis that a Large Language Model (LLM) can be
used to model psychological change following exposure to influential input. The
first study tested a generic mode of influence - the Illusory Truth Effect
(ITE) - where earlier exposure to a statement (through, for example, rating its
interest) boosts a later truthfulness test rating. Data was collected from 1000
human participants using an online experiment, and 1000 simulated participants
using engineered prompts and LLM completion. 64 ratings per participant were
collected, using all exposure-test combinations of the attributes: truth,
interest, sentiment and importance. The results for human participants
reconfirmed the ITE, and demonstrated an absence of effect for attributes other
than truth, and when the same attribute is used for exposure and test. The same
pattern of effects was found for LLM-simulated participants. The second study
concerns a specific mode of influence - populist framing of news to increase
its persuasion and political mobilization. Data from LLM-simulated participants
was collected and compared to previously published data from a 15-country
experiment on 7286 human participants. Several effects previously demonstrated
from the human study were replicated by the simulated study, including effects
that surprised the authors of the human study by contradicting their
theoretical expectations (anti-immigrant framing of news decreases its
persuasion and mobilization); but some significant relationships found in human
data (modulation of the effectiveness of populist framing according to relative
deprivation of the participant) were not present in the LLM data. Together the
two studies support the view that LLMs have potential to act as models of the
effect of influence.
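To make the exposure-then-test design concrete, the following is a minimal sketch of one LLM-simulated participant. It assumes an OpenAI-style chat-completions client; the model name, prompt wording, and 1-6 rating scale are illustrative choices, not the authors' engineered prompts.
```python
# Minimal sketch of one LLM-simulated ITE participant (illustrative; prompt
# wording, rating scale, and model are assumptions, not the paper's protocol).
from openai import OpenAI  # assumes an OpenAI-style chat-completions client

client = OpenAI()

QUESTIONS = {
    "truth": "How truthful is this statement, from 1 (not at all) to 6 (completely)?",
    "interest": "How interesting is this statement, from 1 (not at all) to 6 (extremely)?",
    "sentiment": "How positive is this statement, from 1 (not at all) to 6 (extremely)?",
    "importance": "How important is this statement, from 1 (not at all) to 6 (extremely)?",
}

def rate(history, statement, attribute, model="gpt-4o-mini"):
    """Ask the simulated participant for a rating; keep it in the running
    conversation so a later test rating is conditioned on the earlier exposure."""
    prompt = (f'Statement: "{statement}"\n'
              f'{QUESTIONS[attribute]} Reply with a single number.')
    history.append({"role": "user", "content": prompt})
    reply = client.chat.completions.create(model=model, messages=history)
    answer = reply.choices[0].message.content.strip()
    history.append({"role": "assistant", "content": answer})
    return answer

# One exposure-test combination: exposure rates "interest", the later test rates "truth".
history = [{"role": "system",
            "content": "You are a survey participant. Answer each question honestly."}]
statement = "The capital city of Australia is Sydney."
exposure = rate(history, statement, "interest")   # exposure phase
test = rate(history, statement, "truth")          # test phase
print(exposure, test)
```
Under this kind of setup, the ITE would appear as higher truth ratings for statements seen in an exposure phase than for matched statements rated for truth without prior exposure, aggregated over many simulated participants.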
Related papers
- A Debate-Driven Experiment on LLM Hallucinations and Accuracy [7.821303946741665]
This study investigates the phenomenon of hallucination in large language models (LLMs).
Multiple instances of GPT-4o-Mini models engage in a debate-like interaction prompted with questions from the TruthfulQA dataset.
One model is deliberately instructed to generate plausible but false answers while the other models are asked to respond truthfully.
arXiv Detail & Related papers (2024-10-25T11:41:27Z) - AI Can Be Cognitively Biased: An Exploratory Study on Threshold Priming in LLM-Based Batch Relevance Assessment [37.985947029716016]
Large language models (LLMs) have shown advanced understanding capabilities but may inherit human biases from their training data.
We investigated whether LLMs are influenced by the threshold priming effect in relevance judgments.
arXiv Detail & Related papers (2024-09-24T12:23:15Z) - Using Large Language Models to Create AI Personas for Replication and Prediction of Media Effects: An Empirical Test of 133 Published Experimental Research Findings [0.3749861135832072]
This report analyzes the potential for large language models (LLMs) to expedite accurate replication of message effects studies.
We tested LLM-powered participants by replicating 133 experimental findings from 14 papers containing 45 recent studies in the Journal of Marketing.
Our LLM replications successfully reproduced 76% of the original main effects (84 out of 111), demonstrating strong potential for AI-assisted replication of studies in which people respond to media stimuli.
arXiv Detail & Related papers (2024-08-28T18:14:39Z) - Modulating Language Model Experiences through Frictions [56.17593192325438]
Over-consumption of language model outputs risks propagating unchecked errors in the short-term and damaging human capabilities for critical thinking in the long-term.
We propose selective frictions for language model experiences, inspired by behavioral science interventions, to dampen misuse.
arXiv Detail & Related papers (2024-06-24T16:31:11Z) - Exploring Value Biases: How LLMs Deviate Towards the Ideal [57.99044181599786]
Large Language Models (LLMs) are deployed in a wide range of applications, and their responses have a growing social impact.
We show that value bias is strong in LLMs across different categories, similar to the results found in human studies.
arXiv Detail & Related papers (2024-02-16T18:28:43Z) - Have Learning Analytics Dashboards Lived Up to the Hype? A Systematic
Review of Impact on Students' Achievement, Motivation, Participation and
Attitude [0.0]
There is no evidence to support the conclusion that learning analytics dashboards (LADs) have lived up to the promise of improving academic achievement.
LADs showed a relatively substantial impact on student participation.
To advance the research line for LADs, researchers should use rigorous assessment methods and establish clear standards for evaluating learning constructs.
arXiv Detail & Related papers (2023-12-22T20:12:52Z) - Decoding Susceptibility: Modeling Misbelief to Misinformation Through a Computational Approach [61.04606493712002]
Susceptibility to misinformation describes the degree of belief in unverifiable claims, which is not directly observable.
Existing susceptibility studies heavily rely on self-reported beliefs.
We propose a computational approach to model users' latent susceptibility levels.
arXiv Detail & Related papers (2023-11-16T07:22:56Z) - Do LLMs exhibit human-like response biases? A case study in survey
design [66.1850490474361]
We investigate the extent to which large language models (LLMs) reflect human response biases, if at all.
We design a dataset and framework to evaluate whether LLMs exhibit human-like response biases in survey questionnaires.
Our comprehensive evaluation of nine models shows that popular open and commercial LLMs generally fail to reflect human-like behavior.
arXiv Detail & Related papers (2023-11-07T15:40:43Z) - Sensitivity, Performance, Robustness: Deconstructing the Effect of
Sociodemographic Prompting [64.80538055623842]
Sociodemographic prompting is a technique that steers the output of prompt-based models towards answers that humans with specific sociodemographic profiles would give.
We show that sociodemographic information affects model predictions and can be beneficial for improving zero-shot learning in subjective NLP tasks.
arXiv Detail & Related papers (2023-09-13T15:42:06Z) - Do Large Language Models Show Decision Heuristics Similar to Humans? A
Case Study Using GPT-3.5 [0.0]
GPT-3.5 is an example of an LLM that supports a conversational agent called ChatGPT.
In this work, we used a series of novel prompts to determine whether ChatGPT shows biases, and other decision effects.
We also tested the same prompts on human participants.
arXiv Detail & Related papers (2023-05-08T01:02:52Z) - Fair Effect Attribution in Parallel Online Experiments [57.13281584606437]
A/B tests serve the purpose of reliably identifying the effect of changes introduced in online services.
It is common for online platforms to run a large number of simultaneous experiments by splitting incoming user traffic randomly.
Despite a perfect randomization between different groups, simultaneous experiments can interact with each other and create a negative impact on average population outcomes.
arXiv Detail & Related papers (2022-10-15T17:15:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.