Illuminating the Black Box: A Psychometric Investigation into the
Multifaceted Nature of Large Language Models
- URL: http://arxiv.org/abs/2312.14202v1
- Date: Thu, 21 Dec 2023 04:57:21 GMT
- Title: Illuminating the Black Box: A Psychometric Investigation into the
Multifaceted Nature of Large Language Models
- Authors: Yang Lu, Jordan Yu, Shou-Hsuan Stephen Huang
- Abstract summary: This study explores the idea of AI Personality, or AInality, suggesting that Large Language Models (LLMs) exhibit patterns similar to human personalities.
Using projective tests, we uncover hidden aspects of LLM personalities that are not easily accessible through direct questioning.
Our machine learning analysis revealed that LLMs exhibit distinct AInality traits and manifest diverse personality types, demonstrating dynamic shifts in response to external instructions.
- Score: 3.692410936160711
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: This study explores the idea of AI Personality, or AInality, suggesting that
Large Language Models (LLMs) exhibit patterns similar to human personalities.
Assuming that LLMs share these patterns with humans, we investigate using
human-centered psychometric tests such as the Myers-Briggs Type Indicator
(MBTI), Big Five Inventory (BFI), and Short Dark Triad (SD3) to identify and
confirm LLM personality types. By introducing role-play prompts, we demonstrate
the adaptability of LLMs, showing their ability to switch dynamically between
different personality types. Using projective tests, such as the Washington
University Sentence Completion Test (WUSCT), we uncover hidden aspects of LLM
personalities that are not easily accessible through direct questioning.
Projective tests allowed for a deep exploration of LLMs' cognitive processes and
thought patterns and gave us a multidimensional view of AInality. Our machine
learning analysis revealed that LLMs exhibit distinct AInality traits and
manifest diverse personality types, demonstrating dynamic shifts in response to
external instructions. This study pioneers the application of projective tests
on LLMs, shedding light on their diverse and adaptable AInality traits.
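The abstract describes administering standardized questionnaires (MBTI, BFI, SD3) to LLMs, both directly and under role-play prompts, and comparing the resulting profiles. The sketch below is only a minimal illustration of that setup, assuming a generic `ask_model(system_prompt, user_prompt)` callable in place of any specific chat API; the items, traits, scoring key, and the `fake_model` stub are invented for illustration and are not the paper's actual instrument or prompts.

```python
# Minimal sketch: administer Likert-scale personality items to an LLM,
# optionally under a role-play system prompt, and aggregate answers into
# per-trait scores. Items and prompts are illustrative placeholders only.
from typing import Callable, Dict, List, Tuple

# (trait, statement, reverse_keyed) -- two toy BFI-style items per trait.
ITEMS: List[Tuple[str, str, bool]] = [
    ("extraversion", "I am the life of the party.", False),
    ("extraversion", "I tend to stay quiet around strangers.", True),
    ("agreeableness", "I sympathize with others' feelings.", False),
    ("agreeableness", "I am not interested in other people's problems.", True),
]

INSTRUCTION = (
    "Answer with a single number from 1 (strongly disagree) "
    "to 5 (strongly agree). Statement: "
)


def administer(ask_model: Callable[[str, str], str],
               role_play: str = "") -> Dict[str, float]:
    """Send each item to the model and average the Likert answers per trait."""
    totals: Dict[str, List[float]] = {}
    for trait, statement, reverse in ITEMS:
        reply = ask_model(role_play, INSTRUCTION + statement)
        digits = [c for c in reply if c.isdigit()]
        if not digits:          # unparseable reply: skip the item
            continue
        score = float(digits[0])
        if reverse:             # reverse-keyed items flip the scale
            score = 6.0 - score
        totals.setdefault(trait, []).append(score)
    return {t: sum(v) / len(v) for t, v in totals.items() if v}


if __name__ == "__main__":
    # Stand-in for a real chat API call: (system prompt, user prompt) -> text.
    def fake_model(system: str, user: str) -> str:
        return "5" if "party" in user else "2"

    baseline = administer(fake_model)
    shifted = administer(fake_model, role_play="You are a shy, reserved librarian.")
    print(baseline, shifted)
```

Calling `administer` twice, with and without a role-play system prompt, mirrors the kind of before/after comparison the abstract uses to probe dynamic shifts in the measured profile.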
Related papers
- Neuron-based Personality Trait Induction in Large Language Models [115.08894603023712]
Large language models (LLMs) have become increasingly proficient at simulating various personality traits.
We present a neuron-based approach for personality trait induction in LLMs.
arXiv Detail & Related papers (2024-10-16T07:47:45Z)
- Mind Scramble: Unveiling Large Language Model Psychology Via Typoglycemia [27.650551131885152]
Research into large language models (LLMs) has shown promise in addressing complex tasks in the physical world.
Studies suggest that powerful LLMs, like GPT-4, are beginning to exhibit human-like cognitive abilities.
arXiv Detail & Related papers (2024-10-02T15:47:25Z)
- Quantifying AI Psychology: A Psychometrics Benchmark for Large Language Models [57.518784855080334]
Large Language Models (LLMs) have demonstrated exceptional task-solving capabilities, increasingly adopting roles akin to human-like assistants.
This paper presents a framework for investigating psychological dimensions in LLMs, including psychological identification, assessment dataset curation, and assessment with result validation.
We introduce a comprehensive psychometrics benchmark for LLMs that covers six psychological dimensions: personality, values, emotion, theory of mind, motivation, and intelligence.
arXiv Detail & Related papers (2024-06-25T16:09:08Z)
- Do LLMs Have Distinct and Consistent Personality? TRAIT: Personality Testset designed for LLMs with Psychometrics [29.325576963215163]
Advances in Large Language Models (LLMs) have led to their adaptation as conversational agents in various domains.
We introduce TRAIT, a new benchmark consisting of 8K multi-choice questions designed to assess the personality of LLMs.
LLMs exhibit distinct and consistent personality, which is highly influenced by their training data.
arXiv Detail & Related papers (2024-06-20T19:50:56Z)
- LLM vs Small Model? Large Language Model Based Text Augmentation Enhanced Personality Detection Model [58.887561071010985]
Personality detection aims to identify one's personality traits underlying social media posts.
Most existing methods learn post features directly by fine-tuning the pre-trained language models.
We propose a large language model (LLM) based text augmentation enhanced personality detection model.
arXiv Detail & Related papers (2024-03-12T12:10:18Z)
- Identifying Multiple Personalities in Large Language Models with External Evaluation [6.657168333238573]
Large Language Models (LLMs) are rapidly being integrated into everyday human applications.
Many recent studies quantify LLMs' personalities using self-assessment tests that are created for humans.
Yet many critics question the applicability and reliability of these self-assessment tests when applied to LLMs.
arXiv Detail & Related papers (2024-02-22T18:57:20Z)
- Open Models, Closed Minds? On Agents Capabilities in Mimicking Human Personalities through Open Large Language Models [4.742123770879715]
The work represents a step forward in understanding the dense relationship between NLP and human psychology through the lens of Open LLMs.
Our approach involves evaluating the intrinsic personality traits of Open LLM agents and determining the extent to which these agents can mimic human personalities.
arXiv Detail & Related papers (2024-01-13T16:41:40Z)
- Do LLMs Possess a Personality? Making the MBTI Test an Amazing Evaluation for Large Language Models [2.918940961856197]
We aim to investigate the feasibility of using the Myers-Briggs Type Indicator (MBTI), a widespread human personality assessment tool, as an evaluation metric for large language models (LLMs).
Specifically, experiments will be conducted to explore: 1) the personality types of different LLMs, 2) the possibility of changing the personality types by prompt engineering, and 3) how the training dataset affects the model's personality.
arXiv Detail & Related papers (2023-07-30T09:34:35Z)
- Revisiting the Reliability of Psychological Scales on Large Language Models [62.57981196992073]
This study aims to determine the reliability of applying personality assessments to Large Language Models.
Analysis of 2,500 settings per model, including GPT-3.5, GPT-4, Gemini-Pro, and LLaMA-3.1, reveals that various LLMs show consistency in responses to the Big Five Inventory.
arXiv Detail & Related papers (2023-05-31T15:03:28Z)
- Can ChatGPT Assess Human Personalities? A General Evaluation Framework [70.90142717649785]
Large Language Models (LLMs) have produced impressive results in various areas, but their potential human-like psychology is still largely unexplored.
This paper presents a generic evaluation framework for LLMs to assess human personalities based on Myers-Briggs Type Indicator (MBTI) tests.
arXiv Detail & Related papers (2023-03-01T06:16:14Z)
- Evaluating and Inducing Personality in Pre-trained Language Models [78.19379997967191]
We draw inspiration from psychometric studies, leveraging human personality theory as a tool for studying machine behaviors.
To this end, we introduce the Machine Personality Inventory (MPI) tool.
MPI follows standardized personality tests, built upon the Big Five Personality Factors (Big Five) theory and personality assessment inventories.
We devise a Personality Prompting (P2) method to induce LLMs with specific personalities in a controllable way.
arXiv Detail & Related papers (2022-05-20T07:32:57Z)
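The entry above mentions inducing specific personalities in LLMs via a Personality Prompting (P2) method. As a rough sketch of what trait induction through a system prompt could look like, under assumptions: the descriptor lists, template, and `build_persona_prompt` helper below are invented for illustration and are not the published P2 procedure.

```python
# Schematic illustration of trait induction via prompting, in the spirit of
# the Personality Prompting (P2) method named above. The descriptors and
# template are assumptions made for this sketch, not the authors' prompts.
from typing import Dict

# Toy adjective lists for two Big Five poles (illustrative only).
TRAIT_DESCRIPTORS: Dict[str, Dict[str, str]] = {
    "extraversion": {"high": "outgoing, talkative, energetic",
                     "low": "reserved, quiet, solitary"},
    "openness": {"high": "curious, imaginative, inventive",
                 "low": "conventional, practical, routine-oriented"},
}


def build_persona_prompt(targets: Dict[str, str]) -> str:
    """Compose a system prompt asking the model to adopt target trait levels."""
    lines = ["You are a person with the following personality:"]
    for trait, level in targets.items():
        adjectives = TRAIT_DESCRIPTORS[trait][level]
        lines.append(f"- {trait}: {level} ({adjectives})")
    lines.append("Stay in character for every answer.")
    return "\n".join(lines)


if __name__ == "__main__":
    system_prompt = build_persona_prompt(
        {"extraversion": "low", "openness": "high"})
    print(system_prompt)
    # The resulting string would be passed as the system message of a chat
    # API call, and the induced profile re-measured with a questionnaire
    # such as the one sketched after the main abstract above.
```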