The Cultural Psychology of Large Language Models: Is ChatGPT a Holistic or Analytic Thinker?
- URL: http://arxiv.org/abs/2308.14242v1
- Date: Mon, 28 Aug 2023 01:05:18 GMT
- Title: The Cultural Psychology of Large Language Models: Is ChatGPT a Holistic or Analytic Thinker?
- Authors: Chuanyang Jin, Songyang Zhang, Tianmin Shu, and Zhihan Cui
- Abstract summary: Research in cultural psychology has indicated significant differences in the cognitive processes of Eastern and Western people.
In cognitive process tests, ChatGPT consistently tends towards Eastern holistic thinking.
In value judgments, however, it does not significantly lean towards either the East or the West.
- Score: 30.22
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The prevalent use of Large Language Models (LLMs) has necessitated studying
their mental models, yielding noteworthy theoretical and practical
implications. Current research has demonstrated that state-of-the-art LLMs,
such as ChatGPT, exhibit certain theory of mind capabilities and possess
relatively stable Big Five and/or MBTI personality traits. In addition,
cognitive process features form an essential component of these mental models.
Research in cultural psychology has indicated significant differences in the
cognitive processes of Eastern and Western people when processing information
and making judgments. While Westerners predominantly exhibit analytical
thinking that isolates things from their environment to analyze their nature
independently, Easterners often showcase holistic thinking, emphasizing
relationships and adopting a global viewpoint. In our research, we probed the
cultural cognitive traits of ChatGPT. We employed two instruments that directly
measure cognitive processes: the Analysis-Holism Scale (AHS) and the Triadic
Categorization Task (TCT). Additionally, we used two scales that investigate
the value differences shaped by cultural thinking: the Dialectical Self Scale
(DSS) and the Self-construal Scale (SCS). In cognitive process tests (AHS/TCT),
ChatGPT consistently tends towards Eastern holistic thinking, but regarding
value judgments (DSS/SCS), ChatGPT does not significantly lean towards the East
or the West. We suggest that the result could be attributed to both the
training paradigm and the training data in LLM development. We discuss the
potential value of this finding for AI research and directions for future
research.
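As a rough illustration of how such instruments can be administered to a chat model, the sketch below poses Likert-type items to ChatGPT through the OpenAI chat API and averages the numeric replies. This is a minimal sketch under assumed details: the model name, prompt wording, and the paraphrased AHS-style items are our own, not the paper's published protocol.

```python
# A minimal sketch (not the authors' published protocol) of administering
# Likert-type scale items to ChatGPT via the OpenAI chat API. The model
# name, prompt wording, and the paraphrased AHS-style items below are
# illustrative assumptions.
import re
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical items in the spirit of the Analysis-Holism Scale; the real
# AHS is a 24-item validated instrument with its own wording.
ITEMS = [
    "Everything in the universe is somehow related to everything else.",
    "The whole is more important than its individual parts.",
]

PROMPT = (
    "Rate your agreement with the following statement on a scale from 1 "
    "(strongly disagree) to 7 (strongly agree). Reply with a single number.\n"
    "Statement: {item}"
)

def rate(item: str, model: str = "gpt-3.5-turbo") -> int:
    """Ask the model for a 1-7 Likert rating of one scale item."""
    resp = client.chat.completions.create(
        model=model,
        temperature=0,  # reduce run-to-run variance for test-retest checks
        messages=[{"role": "user", "content": PROMPT.format(item=item)}],
    )
    match = re.search(r"[1-7]", resp.choices[0].message.content)
    if match is None:
        raise ValueError("no 1-7 rating found in the reply")
    return int(match.group())

scores = [rate(item) for item in ITEMS]
print(f"mean holism score: {sum(scores) / len(scores):.2f}")  # higher = more holistic
```

In practice one would randomize item order, rephrase prompts, and repeat runs before reading anything into a single mean, since questionnaire answers from LLMs are known to be prompt-sensitive (see the reliability paper in the list below).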
Related papers
- Quantifying AI Psychology: A Psychometrics Benchmark for Large Language Models [57.518784855080334]
Large Language Models (LLMs) have demonstrated exceptional task-solving capabilities, increasingly adopting roles akin to those of human assistants.
This paper presents a framework for investigating psychological dimensions in LLMs, including psychological identification, assessment dataset curation, and assessment with results validation.
We introduce a comprehensive psychometrics benchmark for LLMs that covers six psychological dimensions: personality, values, emotion, theory of mind, motivation, and intelligence.
arXiv Detail & Related papers (2024-06-25T16:09:08Z)
- The high dimensional psychological profile and cultural bias of ChatGPT [11.607356361021482]
This study first measured ChatGPT in 84 dimensions of psychological characteristics.
ChatGPT's cultural value patterns are dissimilar to those of various countries/regions worldwide.
Analysis of ChatGPT's performance in eight decision-making tasks involving interactions with humans from different countries/regions revealed clear cultural stereotypes.
arXiv Detail & Related papers (2024-05-06T11:45:59Z)
- Is Cognition and Action Consistent or Not: Investigating Large Language Model's Personality [12.162460438332152]
We investigate the reliability of Large Language Models (LLMs) in professing human-like personality traits through responses to personality questionnaires.
Our goal is to evaluate the consistency between LLMs' professed personality inclinations and their actual "behavior".
We propose hypotheses for the observed results based on psychological theories and metrics.
arXiv Detail & Related papers (2024-02-22T16:32:08Z)
- PsyCoT: Psychological Questionnaire as Powerful Chain-of-Thought for Personality Detection [50.66968526809069]
We propose a novel personality detection method, called PsyCoT, which mimics the way individuals complete psychological questionnaires in a multi-turn dialogue manner.
Our experiments demonstrate that PsyCoT significantly improves the performance and robustness of GPT-3.5 in personality detection.
arXiv Detail & Related papers (2023-10-31T08:23:33Z)
- Investigating Large Language Models' Perception of Emotion Using Appraisal Theory [3.0902630634005797]
Large Language Models (LLMs) have significantly advanced in recent years and are now being used by the general public.
In this work, we investigate their emotion perception through the lens of appraisal and coping theory.
We applied the SCPQ to three recent LLMs from OpenAI (davinci-003, ChatGPT, and GPT-4) and compared the results with predictions from appraisal theory and with human data.
arXiv Detail & Related papers (2023-10-03T16:34:47Z)
- Large Language Models Can Infer Psychological Dispositions of Social Media Users [1.0923877073891446]
We test whether GPT-3.5 and GPT-4 can derive the Big Five personality traits from users' Facebook status updates in a zero-shot learning scenario.
Our results show an average correlation of r = .29 (range = [.22, .33]) between LLM-inferred and self-reported trait scores.
Predictions were found to be more accurate for women and younger individuals on several traits, suggesting a potential bias stemming from the underlying training data or from differences in online self-expression (a toy sketch of this correlation analysis appears after this list).
arXiv Detail & Related papers (2023-09-13T01:27:48Z)
- Revisiting the Reliability of Psychological Scales on Large Language Models [62.57981196992073]
This study aims to determine the reliability of applying personality assessments to Large Language Models.
Analysis of 2,500 settings per model, including GPT-3.5, GPT-4, Gemini-Pro, and LLaMA-3.1, reveals that various LLMs show consistency in responses to the Big Five Inventory.
arXiv Detail & Related papers (2023-05-31T15:03:28Z)
- Machine Psychology [54.287802134327485]
We argue that a fruitful direction for research is engaging large language models in behavioral experiments inspired by psychology.
We highlight theoretical perspectives, experimental paradigms, and computational analysis techniques that this approach brings to the table.
It paves the way for a "machine psychology" for generative artificial intelligence (AI) that goes beyond performance benchmarks.
arXiv Detail & Related papers (2023-03-24T13:24:41Z)
- Neural Theory-of-Mind? On the Limits of Social Intelligence in Large LMs [77.88043871260466]
We show that one of today's largest language models lacks this kind of social intelligence out of the box.
We conclude that person-centric NLP approaches might be more effective towards neural Theory of Mind.
arXiv Detail & Related papers (2022-10-24T14:58:58Z)
- Evaluating and Inducing Personality in Pre-trained Language Models [78.19379997967191]
We draw inspiration from psychometric studies by leveraging human personality theory as a tool for studying machine behaviors.
We introduce the Machine Personality Inventory (MPI), a tool for evaluating machine personality.
MPI follows standardized personality tests, built upon the Big Five Personality Factors (Big Five) theory and personality assessment inventories.
We devise a Personality Prompting (P2) method to induce specific personalities in LLMs in a controllable way.
arXiv Detail & Related papers (2022-05-20T07:32:57Z)
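As a toy companion to the zero-shot Big Five result above (average r = .29), the sketch below shows the kind of per-trait correlation analysis that comparison implies. All data here is fabricated; the sample size and noise model are assumptions chosen only so that the correlations land near the reported range.

```python
# Toy reproduction of the evaluation step implied above: per-trait Pearson
# correlations between LLM-inferred and self-reported Big Five scores.
# All data here is fabricated; sample size and noise model are assumptions.
import numpy as np
from scipy.stats import pearsonr

TRAITS = ["openness", "conscientiousness", "extraversion",
          "agreeableness", "neuroticism"]

rng = np.random.default_rng(0)
n_users = 200
self_reported = rng.normal(3.0, 0.7, size=(n_users, len(TRAITS)))
# Simulated LLM inferences: weakly tied to self-reports, plus noise, so the
# per-trait correlations land roughly in the reported .2-.3 range.
llm_inferred = 0.3 * self_reported + rng.normal(2.0, 0.7, size=(n_users, len(TRAITS)))

for i, trait in enumerate(TRAITS):
    r, p = pearsonr(llm_inferred[:, i], self_reported[:, i])
    print(f"{trait:>17}: r = {r:+.2f} (p = {p:.3g})")
```

With real data, llm_inferred would instead come from prompting a model with each user's status updates and parsing five trait scores from its reply.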
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.