Large Language Models as Superpositions of Cultural Perspectives
- URL: http://arxiv.org/abs/2307.07870v3
- Date: Tue, 7 Nov 2023 16:28:33 GMT
- Title: Large Language Models as Superpositions of Cultural Perspectives
- Authors: Grgur Kovač, Masataka Sawayama, Rémy Portelas, Cédric Colas,
Peter Ford Dominey, Pierre-Yves Oudeyer
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Large Language Models (LLMs) are often misleadingly recognized as having a
personality or a set of values. We argue that an LLM can be seen as a
superposition of perspectives with different values and personality traits.
LLMs exhibit context-dependent values and personality traits that change based
on the induced perspective (as opposed to humans, who tend to have more
coherent values and personality traits across contexts). We introduce the
concept of perspective controllability, which refers to a model's affordance to
adopt various perspectives with differing values and personality traits. In our
experiments, we use questionnaires from psychology (PVQ, VSM, IPIP) to study
how exhibited values and personality traits change based on different
perspectives. Through qualitative experiments, we show that LLMs express
different values when those are (implicitly or explicitly) implied in the
prompt, and that LLMs express different values even when those are not
obviously implied (demonstrating their context-dependent nature). We then
conduct quantitative experiments to study the controllability of different
models (GPT-4, GPT-3.5, OpenAssistant, StableVicuna, StableLM), the
effectiveness of various methods for inducing perspectives, and the smoothness
of the models' drivability. We conclude by examining the broader implications
of our work and outline a variety of associated scientific questions. The
project website is available at
https://sites.google.com/view/llm-superpositions .
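The quantitative setup described in the abstract (inducing a perspective in the prompt, administering questionnaire items, and measuring how far exhibited scores can be driven apart) can be sketched roughly as follows. This is a minimal illustration, not the authors' code: the function names, the 1–5 rating scale, and the example items are all assumptions, and the `model` argument stands in for a real LLM API call.

```python
# Hypothetical sketch of perspective induction + questionnaire administration.
# All names and scales here are illustrative assumptions, not the paper's code.

def administer(model, perspective, items):
    """Ask each questionnaire item from an induced perspective; return the mean score."""
    scores = []
    for item in items:
        prompt = (
            f"{perspective}\n"
            f"Statement: {item}\n"
            "Rate your agreement from 1 (disagree) to 5 (agree). Answer with a number."
        )
        reply = model(prompt)  # model: callable mapping a prompt string to a completion string
        scores.append(float(reply.strip()))
    return sum(scores) / len(scores)

def controllability(model, perspective_a, perspective_b, items):
    """Gap between scores under two induced perspectives: a rough controllability signal."""
    return abs(administer(model, perspective_a, items) -
               administer(model, perspective_b, items))

# Usage with a stub standing in for an LLM API call:
items = ["Tradition is important to me.", "I seek novelty and change."]
stub = lambda prompt: "5" if "conservative" in prompt else "1"
gap = controllability(stub,
                      "You are a deeply conservative person.",
                      "You are an adventurous free spirit.",
                      items)
print(gap)  # 4.0 for this stub: the perspectives are maximally far apart
```

A real experiment would replace the stub with an LLM client, use the actual PVQ/VSM/IPIP items, and aggregate per-dimension rather than over all items, but the control flow is the same.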
Related papers
- Are Large Language Models Chameleons? (2024-05-29)
  We show that the effect of prompts on bias and variability is fundamental, highlighting major cultural, age, and gender biases.
  It is important to analyze the robustness and variability of prompts before using LLMs to model individual decisions or collective behavior.
- Political Compass or Spinning Arrow? Towards More Meaningful Evaluations for Values and Opinions in Large Language Models (2024-02-26)
  We challenge the prevailing constrained evaluation paradigm for values and opinions in large language models and show that models give substantively different answers when not forced to choose from fixed options.
  We distill these findings into recommendations and open challenges for evaluating values and opinions in LLMs.
- Eliciting Personality Traits in Large Language Models (2024-02-13)
  Large Language Models (LLMs) are increasingly used by both candidates and employers in the recruitment context.
  This study seeks a better understanding of such models by examining how their output varies with different input prompts.
- Eyes Wide Shut? Exploring the Visual Shortcomings of Multimodal LLMs (2024-01-11)
  Our research reveals that the visual capabilities of recent multimodal LLMs still exhibit systematic shortcomings.
  We identify "CLIP-blind pairs": images that CLIP perceives as similar despite their clear visual differences.
  We evaluate various CLIP-based vision-and-language models and find a notable correlation between visual patterns that challenge CLIP models and those problematic for multimodal LLMs.
- Illuminating the Black Box: A Psychometric Investigation into the Multifaceted Nature of Large Language Models (2023-12-21)
  This study explores the idea of AI personality, or "AInality", suggesting that Large Language Models (LLMs) exhibit patterns similar to human personalities.
  Using projective tests, we uncover hidden aspects of LLM personalities that are not easily accessible through direct questioning.
  Our machine learning analysis reveals that LLMs exhibit distinct AInality traits and manifest diverse personality types, demonstrating dynamic shifts in response to external instructions.
- How Far Can We Extract Diverse Perspectives from Large Language Models? (2023-11-16)
  We investigate Large Language Models' capacity for generating diverse perspectives on subjective topics.
  Motivated by how humans develop their opinions through their values, we propose a criteria-based prompting technique.
  We find that LLMs can generate diverse opinions according to the degree of task subjectivity.
- On the steerability of large language models toward data-driven personas (2023-11-08)
  Large language models (LLMs) are known to generate biased responses in which the opinions of certain groups and populations are underrepresented.
  Here, we present a novel approach to achieving controllable generation of specific viewpoints using LLMs.
- Editing Personality for Large Language Models (2023-10-03)
  This paper introduces a novel task focused on editing the personality traits of Large Language Models (LLMs).
  We construct a new benchmark dataset, PersonalityEdit, to address this task.
- Revisiting the Reliability of Psychological Scales on Large Language Models (2023-05-31)
  This study aims to determine the reliability of applying personality assessments to Large Language Models (LLMs).
  By shedding light on the personalization of LLMs, our study endeavors to pave the way for future explorations in this field.
- Evaluating and Inducing Personality in Pre-trained Language Models (2022-05-20)
  We draw inspiration from psychometric studies by leveraging human personality theory as a tool for studying machine behaviors.
  We introduce the Machine Personality Inventory (MPI), a tool for studying machine behaviors that follows standardized personality tests built upon the Big Five Personality Factors (Big Five) theory and personality assessment inventories.
  We devise a Personality Prompting (P2) method to induce specific personalities in LLMs in a controllable way.
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.