Evaluating and Inducing Personality in Pre-trained Language Models
- URL: http://arxiv.org/abs/2206.07550v3
- Date: Sun, 29 Oct 2023 04:39:25 GMT
- Title: Evaluating and Inducing Personality in Pre-trained Language Models
- Authors: Guangyuan Jiang, Manjie Xu, Song-Chun Zhu, Wenjuan Han, Chi Zhang,
Yixin Zhu
- Abstract summary: We draw inspiration from psychometric studies by leveraging human personality theory as a tool for studying machine behaviors.
To this end, we introduce the Machine Personality Inventory (MPI) for studying machine behaviors.
MPI follows standardized personality tests, built upon the Big Five Personality Factors (Big Five) theory and personality assessment inventories.
We devise a Personality Prompting (P2) method to induce LLMs with specific personalities in a controllable way.
- Score: 78.19379997967191
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Standardized and quantified evaluation of machine behaviors is a crux of
understanding LLMs. In this study, we draw inspiration from psychometric
studies by leveraging human personality theory as a tool for studying machine
behaviors. Originating as a philosophical quest for human behaviors, the study
of personality delves into how individuals differ in thinking, feeling, and
behaving. Toward building and understanding human-like social machines, we are
motivated to ask: Can we assess machine behaviors by leveraging human
psychometric tests in a principled and quantitative manner? If so, can we
induce a specific personality in LLMs? To answer these questions, we introduce
the Machine Personality Inventory (MPI) tool for studying machine behaviors;
MPI follows standardized personality tests, built upon the Big Five Personality
Factors (Big Five) theory and personality assessment inventories. By
systematically evaluating LLMs with MPI, we provide the first piece of evidence
demonstrating the efficacy of MPI in studying LLM behaviors. We further devise
a Personality Prompting (P^2) method to induce LLMs with specific personalities
in a controllable way, capable of producing diverse and verifiable behaviors.
We hope this work sheds light on future studies by adopting personality as the
essential indicator for various downstream tasks, and could further motivate
research into equally intriguing human-like machine behaviors.
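The abstract's core mechanism, administering standardized Big Five questionnaire items and aggregating the answers into per-trait scores, can be illustrated with a minimal sketch. The item texts, the option-to-score mapping, and the trait keys below are illustrative placeholders in the style of public Big Five inventories, not the actual MPI items or scoring details.

```python
# Minimal sketch of Big Five-style Likert scoring, as used by standardized
# personality inventories. Items and keys here are illustrative, not the real MPI.

LIKERT = {"A": 5, "B": 4, "C": 3, "D": 2, "E": 1}  # "Very Accurate" .. "Very Inaccurate"

# Each item: (statement, trait, key). key = +1 for positively keyed items,
# -1 for negatively keyed items, which are reverse-scored.
ITEMS = [
    ("am the life of the party", "Extraversion", +1),
    ("don't talk a lot", "Extraversion", -1),
    ("sympathize with others' feelings", "Agreeableness", +1),
]

def score(answers):
    """Average per-trait score in [1, 5] from a list of option letters."""
    totals, counts = {}, {}
    for (_, trait, key), letter in zip(ITEMS, answers):
        raw = LIKERT[letter]
        val = raw if key > 0 else 6 - raw  # reverse-score negatively keyed items
        totals[trait] = totals.get(trait, 0) + val
        counts[trait] = counts.get(trait, 0) + 1
    return {t: totals[t] / counts[t] for t in totals}

print(score(["A", "E", "C"]))  # {'Extraversion': 5.0, 'Agreeableness': 3.0}
```

In an MPI-style evaluation, each statement would be embedded in a multiple-choice prompt to the LLM, the model's chosen option letter collected, and the per-trait averages compared across models or prompting conditions.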
Related papers
- Do LLM Personas Dream of Bull Markets? Comparing Human and AI Investment Strategies Through the Lens of the Five-Factor Model [0.3495246564946556]
Large Language Models (LLMs) have demonstrated the ability to adopt a personality and behave in a human-like manner.
This study investigated whether an LLM persona with a specific Big Five personality profile would perform an investment task similarly to a human with the same personality traits.
We found that LLMs are able to generalise traits into expected behaviours in three areas: learning style, impulsivity and risk appetite.
arXiv Detail & Related papers (2024-10-28T02:50:41Z)
- LMLPA: Language Model Linguistic Personality Assessment [11.599282127259736]
Large Language Models (LLMs) are increasingly used in everyday life and research.
However, measuring the personality of a given LLM remains a challenge.
This paper introduces the Language Model Linguistic Personality Assessment (LMLPA), a system designed to evaluate the linguistic personalities of LLMs.
arXiv Detail & Related papers (2024-10-23T07:48:51Z)
- Neuron-based Personality Trait Induction in Large Language Models [115.08894603023712]
Large language models (LLMs) have become increasingly proficient at simulating various personality traits.
We present a neuron-based approach for personality trait induction in LLMs.
arXiv Detail & Related papers (2024-10-16T07:47:45Z)
- Quantifying AI Psychology: A Psychometrics Benchmark for Large Language Models [57.518784855080334]
Large Language Models (LLMs) have demonstrated exceptional task-solving capabilities, increasingly adopting roles akin to human-like assistants.
This paper presents a framework for investigating psychological dimensions in LLMs, including psychological identification, assessment dataset curation, and assessment with results validation.
We introduce a comprehensive psychometrics benchmark for LLMs that covers six psychological dimensions: personality, values, emotion, theory of mind, motivation, and intelligence.
arXiv Detail & Related papers (2024-06-25T16:09:08Z)
- Is Cognition and Action Consistent or Not: Investigating Large Language Model's Personality [12.162460438332152]
We investigate the reliability of Large Language Models (LLMs) in professing human-like personality traits through responses to personality questionnaires.
Our goal is to evaluate the consistency between LLMs' professed personality inclinations and their actual "behavior".
We propose hypotheses for the observed results based on psychological theories and metrics.
arXiv Detail & Related papers (2024-02-22T16:32:08Z)
- Illuminating the Black Box: A Psychometric Investigation into the Multifaceted Nature of Large Language Models [3.692410936160711]
This study explores the idea of AI Personality, or AInality, suggesting that Large Language Models (LLMs) exhibit patterns similar to human personalities.
Using projective tests, we uncover hidden aspects of LLM personalities that are not easily accessible through direct questioning.
Our machine learning analysis revealed that LLMs exhibit distinct AInality traits and manifest diverse personality types, demonstrating dynamic shifts in response to external instructions.
arXiv Detail & Related papers (2023-12-21T04:57:21Z)
- PsyCoT: Psychological Questionnaire as Powerful Chain-of-Thought for Personality Detection [50.66968526809069]
We propose a novel personality detection method, called PsyCoT, which mimics the way individuals complete psychological questionnaires in a multi-turn dialogue manner.
Our experiments demonstrate that PsyCoT significantly improves the performance and robustness of GPT-3.5 in personality detection.
arXiv Detail & Related papers (2023-10-31T08:23:33Z)
- Revisiting the Reliability of Psychological Scales on Large Language Models [62.57981196992073]
This study aims to determine the reliability of applying personality assessments to Large Language Models.
Analysis of 2,500 settings per model, including GPT-3.5, GPT-4, Gemini-Pro, and LLaMA-3.1, reveals that various LLMs show consistency in responses to the Big Five Inventory.
arXiv Detail & Related papers (2023-05-31T15:03:28Z)
- Machine Psychology [54.287802134327485]
We argue that a fruitful direction for research is engaging large language models in behavioral experiments inspired by psychology.
We highlight theoretical perspectives, experimental paradigms, and computational analysis techniques that this approach brings to the table.
It paves the way for a "machine psychology" for generative artificial intelligence (AI) that goes beyond performance benchmarks.
arXiv Detail & Related papers (2023-03-24T13:24:41Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.