PsyCoT: Psychological Questionnaire as Powerful Chain-of-Thought for
Personality Detection
- URL: http://arxiv.org/abs/2310.20256v2
- Date: Sun, 5 Nov 2023 03:19:18 GMT
- Title: PsyCoT: Psychological Questionnaire as Powerful Chain-of-Thought for
Personality Detection
- Authors: Tao Yang, Tianyuan Shi, Fanqi Wan, Xiaojun Quan, Qifan Wang, Bingzhe
Wu, Jiaxiang Wu
- Abstract summary: We propose a novel personality detection method, called PsyCoT, which mimics the way individuals complete psychological questionnaires in a multi-turn dialogue manner.
Our experiments demonstrate that PsyCoT significantly improves the performance and robustness of GPT-3.5 in personality detection.
- Score: 50.66968526809069
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent advances in large language models (LLMs), such as ChatGPT, have
showcased remarkable zero-shot performance across various NLP tasks. However,
the potential of LLMs in personality detection, which involves identifying an
individual's personality from their written texts, remains largely unexplored.
Drawing inspiration from Psychological Questionnaires, which are carefully
designed by psychologists to evaluate individual personality traits through a
series of targeted items, we argue that these items can be regarded as a
collection of well-structured chain-of-thought (CoT) processes. By
incorporating these processes, LLMs can enhance their capabilities to make more
reasonable inferences on personality from textual input. In light of this, we
propose a novel personality detection method, called PsyCoT, which mimics the
way individuals complete psychological questionnaires in a multi-turn dialogue
manner. In particular, we employ an LLM as an AI assistant with a specialization
in text analysis. We prompt the assistant to rate individual items at each turn
and leverage the historical rating results to derive a conclusive personality
preference. Our experiments demonstrate that PsyCoT significantly improves the
performance and robustness of GPT-3.5 in personality detection, achieving an
average F1 score improvement of 4.23/10.63 points on two benchmark datasets
compared to the standard prompting method. Our code is available at
https://github.com/TaoYang225/PsyCoT.
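The rate-each-item-then-aggregate procedure described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' released code: the sample items, the `rate_item` stand-in, and the averaging rule are all assumptions; a real system would send each questionnaire item together with the dialogue history to an LLM such as GPT-3.5.

```python
# PsyCoT-style multi-turn loop (hypothetical sketch, not the released code).
# `rate_item` is a deterministic stand-in for one LLM dialogue turn that
# rates a single questionnaire item on a 1-5 scale given the input text.

QUESTIONNAIRE = [
    "The author enjoys social gatherings.",
    "The author prefers working alone.",
    "The author seeks out new experiences.",
]

def rate_item(text, item, history):
    """Stand-in for one dialogue turn: rate one item from 1 to 5."""
    key = item.rstrip(".").split()[-1].lower()
    return 5 if key in text.lower() else 1

def detect_trait(text, threshold=3.0):
    """Rate each item in turn, then derive a preference from the history."""
    history = []
    for item in QUESTIONNAIRE:
        rating = rate_item(text, item, history)  # one dialogue turn per item
        history.append((item, rating))
    # Final turn: the accumulated ratings yield a conclusive preference.
    avg = sum(r for _, r in history) / len(history)
    return "positive" if avg >= threshold else "negative"
```

The key design point carried over from the abstract is that each item is rated in its own turn with access to the prior rating history, and only the final step maps the aggregated ratings to a personality preference.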
Related papers
- Humanity in AI: Detecting the Personality of Large Language Models [0.0]
Questionnaires are a common method for detecting the personality of Large Language Models (LLMs).
We propose combining text mining with the questionnaire method.
We find that the personalities of LLMs are derived from their pre-trained data.
arXiv Detail & Related papers (2024-10-11T05:53:11Z)
- Revealing Personality Traits: A New Benchmark Dataset for Explainable Personality Recognition on Dialogues [63.936654900356004]
Personality recognition aims to identify the personality traits implied in user data such as dialogues and social media posts.
We propose a novel task named Explainable Personality Recognition, aiming to reveal the reasoning process as supporting evidence of the personality trait.
arXiv Detail & Related papers (2024-09-29T14:41:43Z)
- Quantifying AI Psychology: A Psychometrics Benchmark for Large Language Models [57.518784855080334]
Large Language Models (LLMs) have demonstrated exceptional task-solving capabilities, increasingly adopting roles akin to human-like assistants.
This paper presents a framework for investigating psychological dimensions in LLMs, including psychological identification, assessment dataset curation, and assessment with results validation.
We introduce a comprehensive psychometrics benchmark for LLMs that covers six psychological dimensions: personality, values, emotion, theory of mind, motivation, and intelligence.
arXiv Detail & Related papers (2024-06-25T16:09:08Z)
- EERPD: Leveraging Emotion and Emotion Regulation for Improving Personality Detection [19.98674724777821]
We propose a new personality detection method called EERPD.
This method introduces the use of emotion regulation, a psychological concept highly correlated with personality, for personality prediction.
Experimental results demonstrate that EERPD significantly enhances the accuracy and robustness of personality detection.
arXiv Detail & Related papers (2024-06-23T11:18:55Z)
- Dynamic Generation of Personalities with Large Language Models [20.07145733116127]
We introduce Dynamic Personality Generation (DPG), a dynamic personality generation method based on Hypernetworks.
We embed the Big Five personality theory into GPT-4 to form a personality assessment machine.
We then use this personality assessment machine to evaluate dialogues in script data, resulting in a personality-dialogue dataset.
arXiv Detail & Related papers (2024-04-10T15:17:17Z)
- LLM vs Small Model? Large Language Model Based Text Augmentation Enhanced Personality Detection Model [58.887561071010985]
Personality detection aims to detect one's personality traits underlying social media posts.
Most existing methods learn post features directly by fine-tuning the pre-trained language models.
We propose a large language model (LLM) based text augmentation enhanced personality detection model.
arXiv Detail & Related papers (2024-03-12T12:10:18Z)
- Can ChatGPT Read Who You Are? [10.577227353680994]
We report the results of a comprehensive user study featuring texts written in Czech by a representative population sample of 155 participants.
We compare the personality trait estimations made by ChatGPT against those by human raters and report ChatGPT's competitive performance in inferring personality traits from text.
arXiv Detail & Related papers (2023-12-26T14:43:04Z)
- Editing Personality for Large Language Models [73.59001811199823]
This paper introduces an innovative task focused on editing the personality traits of Large Language Models (LLMs).
We construct PersonalityEdit, a new benchmark dataset to address this task.
arXiv Detail & Related papers (2023-10-03T16:02:36Z)
- Can ChatGPT Assess Human Personalities? A General Evaluation Framework [70.90142717649785]
Large Language Models (LLMs) have produced impressive results in various areas, but their potential human-like psychology is still largely unexplored.
This paper presents a generic evaluation framework for LLMs to assess human personalities based on Myers-Briggs Type Indicator (MBTI) tests.
arXiv Detail & Related papers (2023-03-01T06:16:14Z)
- Identifying and Manipulating the Personality Traits of Language Models [9.213700601337383]
We investigate whether perceived personality in language models is exhibited consistently in their language generation.
We show that language models such as BERT and GPT2 can consistently identify and reflect personality markers in different contexts.
This behavior illustrates an ability to be manipulated in a highly predictable way, and frames them as tools for identifying personality traits and controlling personas in applications such as dialog systems.
arXiv Detail & Related papers (2022-12-20T14:24:11Z)