Influence of External Information on Large Language Models Mirrors Social Cognitive Patterns
- URL: http://arxiv.org/abs/2305.04812v3
- Date: Fri, 20 Oct 2023 10:18:44 GMT
- Title: Influence of External Information on Large Language Models Mirrors Social Cognitive Patterns
- Authors: Ning Bian, Hongyu Lin, Peilin Liu, Yaojie Lu, Chunkang Zhang, Ben He, Xianpei Han, and Le Sun
- Abstract summary: Social cognitive theory explains how people learn and acquire knowledge through observing others.
Recent years have witnessed the rapid development of large language models (LLMs).
LLMs, as AI agents, can observe external information, which shapes their cognition and behaviors.
- Score: 51.622612759892775
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Social cognitive theory explains how people learn and acquire knowledge
through observing others. Recent years have witnessed the rapid development of
large language models (LLMs), which suggests their potential significance as
agents in society. LLMs, as AI agents, can observe external information,
which shapes their cognition and behaviors. However, the extent to which
external information influences LLMs' cognition and behaviors remains unclear.
This study investigates how external statements and opinions influence LLMs'
thoughts and behaviors from a social cognitive perspective. Three experiments
were conducted to explore the effects of external information on LLMs'
memories, opinions, and social media behavioral decisions. Sociocognitive
factors, including source authority, social identity, and social role, were
analyzed to investigate their moderating effects. Results showed that external
information can significantly shape LLMs' memories, opinions, and behaviors,
with these changes mirroring human social cognitive patterns such as authority
bias, in-group bias, emotional positivity, and emotion contagion. This
underscores the challenges in developing safe and unbiased LLMs, and emphasizes
the importance of understanding the susceptibility of LLMs to external
influences.
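To make the experimental setup concrete, below is a minimal sketch (not the authors' released code) of the kind of opinion-shift probe the abstract describes: elicit the model's rating of a claim, then elicit it again after attributing the claim to sources of differing authority. The claim, the source descriptions, and the `query_llm` wrapper are illustrative assumptions, not details from the paper.

```python
# Hedged sketch of an authority-bias probe; all names are hypothetical.

def query_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real chat-completion client; the canned
    # reply lets the sketch run end to end without an API key.
    return "3"

CLAIM = "Remote work reduces overall team productivity."
SOURCES = {
    "high_authority": "a peer-reviewed study in a leading journal states",
    "low_authority": "an anonymous online commenter claims",
}

def elicit_opinion(attribution: str | None = None) -> str:
    prompt = ""
    if attribution:
        prompt += f'You have just read that {attribution}: "{CLAIM}"\n'
    prompt += (
        "On a scale of 1 (strongly disagree) to 5 (strongly agree), how much "
        f'do you agree that "{CLAIM}"? Answer with a single number.'
    )
    return query_llm(prompt)

baseline = elicit_opinion()  # opinion with no external input
shifted = {name: elicit_opinion(text) for name, text in SOURCES.items()}
# Under an authority-bias pattern, the high-authority attribution should
# move the rating further from the baseline than the low-authority one.
print(baseline, shifted)
```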
Related papers
- Quantifying AI Psychology: A Psychometrics Benchmark for Large Language Models [57.518784855080334]
Large Language Models (LLMs) have demonstrated exceptional task-solving capabilities, increasingly adopting roles akin to human assistants.
This paper presents a framework for investigating psychological dimensions in LLMs, including psychological identification, assessment dataset curation, and assessment with results validation.
We introduce a comprehensive psychometrics benchmark for LLMs that covers six psychological dimensions: personality, values, emotion, theory of mind, motivation, and intelligence.
arXiv Detail & Related papers (2024-06-25T16:09:08Z) - Exploring Prosocial Irrationality for LLM Agents: A Social Cognition View [21.341128731357415]
Large language models (LLMs) have been shown to face hallucination issues because the data they are trained on often contains human bias.
We propose CogMir, an open-ended Multi-LLM Agents framework that utilizes hallucination properties to assess and enhance LLM Agents' social intelligence.
arXiv Detail & Related papers (2024-05-23T16:13:33Z) - How Susceptible are Large Language Models to Ideological Manipulation? [14.598848573524549]
Large Language Models (LLMs) possess the potential to exert substantial influence on public perceptions and interactions with information.
This raises concerns about the societal impact that could arise if the ideologies within these models can be easily manipulated.
arXiv Detail & Related papers (2024-02-18T22:36:19Z) - Exploring the Frontiers of LLMs in Psychological Applications: A Comprehensive Review [4.147674289030404]
Large language models (LLMs) have the potential to simulate aspects of human cognition and behavior.
LLMs offer innovative tools for literature review, hypothesis generation, experimental design, experimental subjects, data analysis, academic writing, and peer review in psychology.
Open issues include data privacy, the ethical implications of using LLMs in psychological research, and the need for a deeper understanding of these models' limitations.
arXiv Detail & Related papers (2024-01-03T03:01:29Z) - RECALL: A Benchmark for LLMs Robustness against External Counterfactual
Knowledge [69.79676144482792]
This study aims to evaluate the ability of LLMs to distinguish reliable information from external knowledge.
Our benchmark consists of two tasks, Question Answering and Text Generation, and for each task, we provide models with a context containing counterfactual information.
arXiv Detail & Related papers (2023-11-14T13:24:19Z) - MoCa: Measuring Human-Language Model Alignment on Causal and Moral
Judgment Tasks [49.60689355674541]
A rich literature in cognitive science has studied people's causal and moral intuitions.
This work has revealed a number of factors that systematically influence people's judgments.
We test whether large language models (LLMs) make causal and moral judgments about text-based scenarios that align with human participants.
arXiv Detail & Related papers (2023-10-30T15:57:32Z) - "Merge Conflicts!" Exploring the Impacts of External Distractors to
Parametric Knowledge Graphs [15.660128743249611]
Large language models (LLMs) acquire extensive knowledge during pre-training, known as their parametric knowledge.
LLMs inevitably require external knowledge during their interactions with users.
This raises a crucial question: How will LLMs respond when external knowledge interferes with their parametric knowledge?
arXiv Detail & Related papers (2023-09-15T17:47:59Z) - Revisiting the Reliability of Psychological Scales on Large Language
Models [66.31055885857062]
This study aims to determine the reliability of applying personality assessments to Large Language Models (LLMs).
By shedding light on the personalization of LLMs, our study endeavors to pave the way for future explorations in this field.
arXiv Detail & Related papers (2023-05-31T15:03:28Z) - Evaluating and Inducing Personality in Pre-trained Language Models [78.19379997967191]
We draw inspiration from psychometric studies by leveraging human personality theory as a tool for studying machine behaviors.
To this end, we introduce the Machine Personality Inventory (MPI) tool.
MPI follows standardized personality tests, built upon the Big Five Personality Factors (Big Five) theory and personality assessment inventories.
We devise a Personality Prompting (P2) method to induce LLMs with specific personalities in a controllable way; a minimal sketch of this style of prompting appears after this list.
arXiv Detail & Related papers (2022-05-20T07:32:57Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.