Can AI Understand Human Personality? -- Comparing Human Experts and AI Systems at Predicting Personality Correlations
- URL: http://arxiv.org/abs/2406.08170v1
- Date: Wed, 12 Jun 2024 13:03:38 GMT
- Title: Can AI Understand Human Personality? -- Comparing Human Experts and AI Systems at Predicting Personality Correlations
- Authors: Philipp Schoenegger, Spencer Greenberg, Alexander Grishin, Joshua Lewis, Lucius Caviola
- Abstract summary: We test the abilities of specialised deep neural networks like PersonalityMap as well as general LLMs like GPT-4o and Claude 3 Opus.
We find that when compared with individual humans, all AI models make better predictions than the vast majority of lay people and academic experts.
- Score: 41.07853967415879
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We test the abilities of specialised deep neural networks like PersonalityMap as well as general LLMs like GPT-4o and Claude 3 Opus in understanding human personality. Specifically, we compare their ability to predict correlations between personality items to the abilities of lay people and academic experts. We find that when compared with individual humans, all AI models make better predictions than the vast majority of lay people and academic experts. However, when selecting the median prediction for each item, we find a different pattern: Experts and PersonalityMap outperform LLMs and lay people on most measures. Our results suggest that while frontier LLMs are better than most individual humans at predicting correlations between personality items, specialised models like PersonalityMap continue to match or exceed expert human performance even on some outcome measures where LLMs underperform. This provides evidence both in favour of the general capabilities of large language models and in favour of the continued place for specialised models trained and deployed for specific domains.
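The abstract contrasts two evaluation modes: scoring each predictor individually versus scoring the median ("wisdom of the crowd") prediction per personality-item pair. A minimal sketch of that aggregation, using entirely hypothetical item pairs, correlations, and predictor names (none of these values come from the paper):

```python
# Sketch of the aggregation described in the abstract: compare each
# predictor's individual accuracy against the accuracy of the median
# prediction per personality-item pair.
# All values below are hypothetical illustrations, not the paper's data.
import statistics

# "True" correlations for three hypothetical personality-item pairs.
true_r = {"pair_a": 0.41, "pair_b": -0.12, "pair_c": 0.25}

# Each predictor's estimated correlation for every item pair.
predictions = {
    "human_1": {"pair_a": 0.30, "pair_b": 0.05, "pair_c": 0.10},
    "human_2": {"pair_a": 0.55, "pair_b": -0.20, "pair_c": 0.40},
    "human_3": {"pair_a": 0.45, "pair_b": -0.10, "pair_c": 0.20},
}

def mean_abs_error(preds: dict) -> float:
    """Mean absolute error of one set of predictions against true_r."""
    return statistics.mean(abs(preds[k] - true_r[k]) for k in true_r)

# Mode 1: score each predictor individually.
individual_mae = {name: mean_abs_error(p) for name, p in predictions.items()}

# Mode 2: take the median prediction per item pair, then score that.
median_preds = {
    k: statistics.median(p[k] for p in predictions.values()) for k in true_r
}
median_mae = mean_abs_error(median_preds)

print(individual_mae)
print(median_mae)
```

The point of the two modes is that they can rank the same systems differently, which is exactly the pattern the abstract reports: AI models beat most *individual* humans, yet the *median* expert prediction remains competitive.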
Related papers
- Evaluating Personality Traits in Large Language Models: Insights from Psychological Questionnaires [3.6001840369062386]
This work applies psychological tools to Large Language Models in diverse scenarios to generate personality profiles.
Our findings reveal that LLMs exhibit unique traits, varying characteristics, and distinct personality profiles even within the same family of models.
arXiv Detail & Related papers (2025-02-07T16:12:52Z) - Judgment of Learning: A Human Ability Beyond Generative Artificial Intelligence [0.0]
Large language models (LLMs) increasingly mimic human cognition in various language-based tasks.
We introduce a cross-agent prediction model to assess whether ChatGPT-based LLMs align with human judgments of learning (JOL)
Our results revealed that while human JOL reliably predicted actual memory performance, none of the tested LLMs demonstrated comparable predictive accuracy.
arXiv Detail & Related papers (2024-10-17T09:42:30Z) - Personality Alignment of Large Language Models [26.071445846818914]
Current methods for aligning large language models (LLMs) typically aim to reflect general human values and behaviors.
We introduce the concept of Personality Alignment.
This approach tailors LLMs' responses and decisions to match the specific preferences of individual users or closely related groups.
arXiv Detail & Related papers (2024-08-21T17:09:00Z) - Quantifying AI Psychology: A Psychometrics Benchmark for Large Language Models [57.518784855080334]
Large Language Models (LLMs) have demonstrated exceptional task-solving capabilities, increasingly adopting roles akin to human-like assistants.
This paper presents a framework for investigating psychology dimension in LLMs, including psychological identification, assessment dataset curation, and assessment with results validation.
We introduce a comprehensive psychometrics benchmark for LLMs that covers six psychological dimensions: personality, values, emotion, theory of mind, motivation, and intelligence.
arXiv Detail & Related papers (2024-06-25T16:09:08Z) - P-Tailor: Customizing Personality Traits for Language Models via Mixture of Specialized LoRA Experts [34.374681921626205]
We learn specialized LoRA experts to represent various traits, such as openness, conscientiousness, extraversion, agreeableness and neuroticism.
We integrate P-Tailor with a personality specialization loss, encouraging experts to specialize in distinct personality traits.
arXiv Detail & Related papers (2024-06-18T12:25:13Z) - LLM vs Small Model? Large Language Model Based Text Augmentation Enhanced Personality Detection Model [58.887561071010985]
Personality detection aims to detect the personality traits underlying a person's social media posts.
Most existing methods learn post features directly by fine-tuning the pre-trained language models.
We propose a large language model (LLM) based text augmentation enhanced personality detection model.
arXiv Detail & Related papers (2024-03-12T12:10:18Z) - PsyCoT: Psychological Questionnaire as Powerful Chain-of-Thought for
Personality Detection [50.66968526809069]
We propose a novel personality detection method, called PsyCoT, which mimics the way individuals complete psychological questionnaires in a multi-turn dialogue manner.
Our experiments demonstrate that PsyCoT significantly improves the performance and robustness of GPT-3.5 in personality detection.
arXiv Detail & Related papers (2023-10-31T08:23:33Z) - Large Language Models Can Infer Psychological Dispositions of Social Media Users [1.0923877073891446]
We test whether GPT-3.5 and GPT-4 can derive the Big Five personality traits from users' Facebook status updates in a zero-shot learning scenario.
Our results show an average correlation of r = .29 (range = [.22, .33]) between LLM-inferred and self-reported trait scores.
Predictions were found to be more accurate for women and younger individuals on several traits, suggesting a potential bias stemming from the underlying training data or from differences in online self-expression.
arXiv Detail & Related papers (2023-09-13T01:27:48Z) - Can ChatGPT Assess Human Personalities? A General Evaluation Framework [70.90142717649785]
Large Language Models (LLMs) have produced impressive results in various areas, but their potential human-like psychology is still largely unexplored.
This paper presents a generic evaluation framework for LLMs to assess human personalities based on Myers-Briggs Type Indicator (MBTI) tests.
arXiv Detail & Related papers (2023-03-01T06:16:14Z) - Evaluating and Inducing Personality in Pre-trained Language Models [78.19379997967191]
We draw inspiration from psychometric studies by leveraging human personality theory as a tool for studying machine behaviors.
We introduce the Machine Personality Inventory (MPI), a tool for studying machine behaviors.
MPI follows standardized personality tests, built upon the Big Five Personality Factors (Big Five) theory and personality assessment inventories.
We devise a Personality Prompting (P2) method to induce LLMs with specific personalities in a controllable way.
arXiv Detail & Related papers (2022-05-20T07:32:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.