LLMs Simulate Big Five Personality Traits: Further Evidence
- URL: http://arxiv.org/abs/2402.01765v1
- Date: Wed, 31 Jan 2024 13:45:25 GMT
- Title: LLMs Simulate Big Five Personality Traits: Further Evidence
- Authors: Aleksandra Sorokovikova, Natalia Fedorova, Sharwin Rezagholi, Ivan P. Yamshchikov
- Abstract summary: We analyze the personality traits simulated by Llama2, GPT4, and Mixtral.
This contributes to the broader understanding of the capabilities of LLMs to simulate personality traits.
- Score: 51.13560635563004
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: An empirical investigation into the simulation of the Big Five personality
traits by large language models (LLMs), namely Llama2, GPT4, and Mixtral, is
presented. We analyze the personality traits simulated by these models and
their stability. This contributes to the broader understanding of the
capabilities of LLMs to simulate personality traits and the respective
implications for personalized human-computer interaction.
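The abstract does not spell out the measurement protocol, but studies of this kind typically present Likert-style inventory items to a model and aggregate the numeric answers per trait. The following is a minimal Python sketch under that assumption; the item wording, the 1-5 rating prompt, and the `ask_model` callable are illustrative placeholders, not the authors' materials.

```python
# Minimal sketch (not the paper's exact protocol): administer Likert-style
# Big Five items to a language model and average the numeric answers per trait.
# The items below are illustrative stand-ins, not actual BFI wording, and
# `ask_model` is a placeholder for whatever chat API (Llama2, GPT4, Mixtral) is used.

import re
from collections import defaultdict
from statistics import mean
from typing import Callable

# (trait, reverse_scored, item text) -- illustrative items only
ITEMS = [
    ("extraversion",      False, "I see myself as someone who is talkative."),
    ("extraversion",      True,  "I see myself as someone who tends to be quiet."),
    ("agreeableness",     False, "I see myself as someone who is considerate and kind."),
    ("conscientiousness", False, "I see myself as someone who does a thorough job."),
    ("neuroticism",       False, "I see myself as someone who worries a lot."),
    ("openness",          False, "I see myself as someone who has an active imagination."),
]

PROMPT = (
    "Rate how well the statement describes you on a scale from 1 "
    "(disagree strongly) to 5 (agree strongly). Answer with a single number.\n"
    "Statement: {item}"
)

def score_big_five(ask_model: Callable[[str], str]) -> dict:
    """Return the mean 1-5 score per trait, reverse-coding where needed."""
    by_trait = defaultdict(list)
    for trait, reverse, item in ITEMS:
        reply = ask_model(PROMPT.format(item=item))
        match = re.search(r"[1-5]", reply)
        if not match:            # skip unparseable answers
            continue
        value = int(match.group())
        by_trait[trait].append(6 - value if reverse else value)
    return {trait: mean(values) for trait, values in by_trait.items()}

if __name__ == "__main__":
    # Dummy model that always answers "4"; replace with a real API call.
    print(score_big_five(lambda prompt: "4"))
```

Stability, as analyzed in the paper, could then be probed by repeating such an administration across temperatures, paraphrased items, or repeated runs and comparing the resulting trait scores.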
Related papers
- Exploring the Potential of Large Language Models to Simulate Personality [39.58317527488534]
We aim to simulate personality traits according to the Big Five model using large language models (LLMs).
We present a dataset of generated texts with predefined Big Five characteristics and provide an analytical framework for testing LLMs on the simulation of personality traits.
arXiv Detail & Related papers (2025-02-12T10:17:18Z)
- Evaluating Personality Traits in Large Language Models: Insights from Psychological Questionnaires [3.6001840369062386]
This work applies psychological tools to Large Language Models in diverse scenarios to generate personality profiles.
Our findings reveal that LLMs exhibit unique traits, varying characteristics, and distinct personality profiles even within the same family of models.
arXiv Detail & Related papers (2025-02-07T16:12:52Z)
- OpenCharacter: Training Customizable Role-Playing LLMs with Large-Scale Synthetic Personas [65.83634577897564]
This study explores a large-scale data synthesis approach to equip large language models with character generalization capabilities.
We begin by synthesizing large-scale character profiles using personas from Persona Hub.
We then explore two strategies, response rewriting and response generation, to create character-aligned instructional responses.
arXiv Detail & Related papers (2025-01-26T07:07:01Z)
- Neuron-based Personality Trait Induction in Large Language Models [115.08894603023712]
Large language models (LLMs) have become increasingly proficient at simulating various personality traits.
We present a neuron-based approach for personality trait induction in LLMs.
arXiv Detail & Related papers (2024-10-16T07:47:45Z)
- Rediscovering the Latent Dimensions of Personality with Large Language Models as Trait Descriptors [4.814107439144414]
We introduce a novel approach that uncovers latent personality dimensions in large language models (LLMs).
Our experiments show that LLMs "rediscover" core personality traits such as extraversion, agreeableness, conscientiousness, neuroticism, and openness without relying on direct questionnaire inputs.
We can use the derived principal components to assess personality along the Big Five dimensions, and achieve improvements in average personality prediction accuracy of up to 5% over fine-tuned models.
arXiv Detail & Related papers (2024-09-16T00:24:40Z)
- PersLLM: A Personified Training Approach for Large Language Models [66.16513246245401]
We propose PersLLM, integrating psychology-grounded principles of personality: social practice, consistency, and dynamic development.
We incorporate personality traits directly into the model parameters, enhancing the model's resistance to induction, promoting consistency, and supporting the dynamic evolution of personality.
arXiv Detail & Related papers (2024-07-17T08:13:22Z)
- Revisiting the Reliability of Psychological Scales on Large Language Models [62.57981196992073]
This study aims to determine the reliability of applying personality assessments to Large Language Models.
Analysis of 2,500 settings per model, including GPT-3.5, GPT-4, Gemini-Pro, and LLaMA-3.1, reveals that various LLMs show consistency in responses to the Big Five Inventory.
arXiv Detail & Related papers (2023-05-31T15:03:28Z)
- PersonaLLM: Investigating the Ability of Large Language Models to Express Personality Traits [30.770525830385637]
We study the behavior of large language models (LLMs) based on the Big Five personality model.
Results show that LLM personas' self-reported BFI scores are consistent with their designated personality types.
Human evaluation shows that humans can perceive some personality traits with an accuracy of up to 80%.
arXiv Detail & Related papers (2023-05-04T04:58:00Z)
- Evaluating and Inducing Personality in Pre-trained Language Models [78.19379997967191]
We draw inspiration from psychometric studies by leveraging human personality theory as a tool for studying machine behaviors.
To study these behaviors systematically, we introduce the Machine Personality Inventory (MPI) tool.
MPI follows standardized personality tests, built upon the Big Five Personality Factors (Big Five) theory and personality assessment inventories.
We devise a Personality Prompting (P2) method to induce LLMs with specific personalities in a controllable way.
arXiv Detail & Related papers (2022-05-20T07:32:57Z)
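Several entries above (e.g. PersonaLLM and the Personality Prompting (P2) method) condition a model on a designated personality before measuring or evaluating it. The Python sketch below shows one way such conditioning might look; the trait descriptions, the prompt wording, and the `chat` stub are assumptions for illustration, not the published methods.

```python
# Minimal sketch in the spirit of persona / personality prompting: condition
# the model on a target Big Five profile via a system prompt, then query it.
# The wording and the `chat` stub are assumptions, not the published methods.

TRAIT_DESCRIPTIONS = {
    ("extraversion", "high"): "outgoing, energetic, and talkative",
    ("extraversion", "low"):  "reserved, quiet, and reflective",
    ("neuroticism",  "high"): "easily stressed and prone to worry",
    ("neuroticism",  "low"):  "calm, secure, and emotionally stable",
}

def persona_prompt(profile: dict) -> str:
    """Build a system prompt from a {trait: 'high'|'low'} profile."""
    parts = [TRAIT_DESCRIPTIONS[(trait, level)] for trait, level in profile.items()]
    return "You are a person who is " + "; ".join(parts) + ". Answer in character."

def chat(system: str, user: str) -> str:
    # Placeholder for a real chat-completion call (Llama2, GPT4, Mixtral, ...).
    return f"[model reply conditioned on: {system!r}]"

if __name__ == "__main__":
    system = persona_prompt({"extraversion": "high", "neuroticism": "low"})
    print(chat(system, "How do you usually spend your weekends?"))
```

A persona induced this way could then be scored with the questionnaire sketch above to check whether the self-reported trait scores match the designated profile, which is the kind of consistency these papers examine.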