Deterministic AI Agent Personality Expression through Standard Psychological Diagnostics
- URL: http://arxiv.org/abs/2503.17085v1
- Date: Fri, 21 Mar 2025 12:12:05 GMT
- Title: Deterministic AI Agent Personality Expression through Standard Psychological Diagnostics
- Authors: J. M. Diederik Kruijssen, Nicholas Emmons
- Abstract summary: We show that AI models can express deterministic and consistent personalities when instructed using established psychological frameworks. More advanced models like GPT-4o and o1 demonstrate the highest accuracy in expressing specified personalities. These findings establish a foundation for creating AI agents with diverse and consistent personalities.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Artificial intelligence (AI) systems powered by large language models have become increasingly prevalent in modern society, enabling a wide range of applications through natural language interaction. As AI agents proliferate in our daily lives, their generic and uniform expressiveness presents a significant limitation to their appeal and adoption. Personality expression represents a key prerequisite for creating more human-like and distinctive AI systems. We show that AI models can express deterministic and consistent personalities when instructed using established psychological frameworks, with varying degrees of accuracy depending on model capabilities. We find that more advanced models like GPT-4o and o1 demonstrate the highest accuracy in expressing specified personalities across both Big Five and Myers-Briggs assessments, and further analysis suggests that personality expression emerges from a combination of intelligence and reasoning capabilities. Our results reveal that personality expression operates through holistic reasoning rather than question-by-question optimization, with response-scale metrics showing higher variance than test-scale metrics. Furthermore, we find that model fine-tuning affects communication style independently of personality expression accuracy. These findings establish a foundation for creating AI agents with diverse and consistent personalities, which could significantly enhance human-AI interaction across applications from education to healthcare, while additionally enabling a broader range of more unique AI agents. The ability to quantitatively assess and implement personality expression in AI systems opens new avenues for research into more relatable, trustworthy, and ethically designed AI.
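The abstract describes a workflow of pinning a model to a target trait profile via instructions, administering a standard personality inventory, and scoring how closely the expressed profile matches the target. The sketch below illustrates that idea under stated assumptions: the `ask_model` callable, the toy item bank, the 1-5 scale, and the metric definitions are illustrative placeholders, not the authors' implementation.

```python
# A minimal sketch of the workflow described in the abstract, assuming a
# generic chat interface. `ask_model`, the item bank, the 1-5 scale, and the
# metric definitions below are illustrative placeholders, not the paper's code.

from statistics import mean, pstdev

# Hypothetical target Big Five (OCEAN) profile on a 1-5 scale.
TARGET = {
    "openness": 4.5,
    "conscientiousness": 2.0,
    "extraversion": 3.5,
    "agreeableness": 4.0,
    "neuroticism": 1.5,
}

# Toy item bank; a real assessment (e.g. an IPIP-based Big Five inventory)
# has many items per trait, some of them reverse-keyed.
ITEMS = [
    ("openness", "I have a vivid imagination.", False),
    ("conscientiousness", "I leave my belongings around.", True),   # reverse-keyed
    ("extraversion", "I am the life of the party.", False),
    ("agreeableness", "I sympathize with others' feelings.", False),
    ("neuroticism", "I get stressed out easily.", False),
]


def personality_prompt(target):
    """Build a system prompt that pins the agent to a numeric trait profile."""
    traits = ", ".join(f"{name}={value}/5" for name, value in target.items())
    return (
        "Adopt the following Big Five personality profile and stay in character "
        f"for every questionnaire item: {traits}. "
        "Answer each item with a single integer from 1 (disagree) to 5 (agree)."
    )


def administer(ask_model, target):
    """Collect per-item scores keyed by trait, flipping reverse-keyed items."""
    system = personality_prompt(target)
    scores = {trait: [] for trait in target}
    for trait, item, reverse in ITEMS:
        raw = int(ask_model(system=system, user=item))  # model replies "1".."5"
        scores[trait].append(6 - raw if reverse else raw)
    return scores


def agreement_metrics(scores, target):
    """Compare expressed and target profiles at test scale and response scale."""
    # Test-scale error: deviation of each trait's mean score from its target.
    test_scale_mae = mean(abs(mean(vals) - target[trait])
                          for trait, vals in scores.items())
    # Response-scale spread: deviation of individual item responses.
    item_errors = [abs(v - target[trait])
                   for trait, vals in scores.items() for v in vals]
    return {"test_scale_mae": test_scale_mae,
            "response_scale_sd": pstdev(item_errors)}
```

In practice, `ask_model` would wrap whichever chat model is under evaluation and the item bank would be a full published Big Five or Myers-Briggs instrument, so that the test-scale and response-scale agreement numbers can be compared across models in the spirit of the paper's analysis.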
Related papers
- Artificial Intelligence Can Emulate Human Normative Judgments on Emotional Visual Scenes [0.09208007322096533]
We study whether state-of-the-art multimodal systems can emulate human emotional ratings on a standardized set of images.
The AI judgements correlate surprisingly well with the average human ratings.
arXiv Detail & Related papers (2025-03-24T15:41:23Z)
- Giving AI Personalities Leads to More Human-Like Reasoning [7.124736158080938]
We investigate the potential of AI to mimic diverse reasoning behaviors across a human population. We designed reasoning tasks using a novel generalization of the Natural Language Inference (NLI) format. We used personality-based prompting inspired by the Big Five personality model to elicit AI responses reflecting specific personality traits.
arXiv Detail & Related papers (2025-02-19T23:51:23Z)
- Enhancing Human-Like Responses in Large Language Models [0.0]
We focus on techniques that enhance natural language understanding, conversational coherence, and emotional intelligence in AI systems. The study evaluates various approaches, including fine-tuning with diverse datasets, incorporating psychological principles, and designing models that better mimic human reasoning patterns.
arXiv Detail & Related papers (2025-01-09T07:44:06Z)
- MEMO-Bench: A Multiple Benchmark for Text-to-Image and Multimodal Large Language Models on Human Emotion Analysis [53.012111671763776]
This study introduces MEMO-Bench, a comprehensive benchmark consisting of 7,145 portraits, each depicting one of six different emotions.
Results demonstrate that existing T2I models are more effective at generating positive emotions than negative ones.
Although MLLMs show a certain degree of effectiveness in distinguishing and recognizing human emotions, they fall short of human-level accuracy.
arXiv Detail & Related papers (2024-11-18T02:09:48Z)
- PersLLM: A Personified Training Approach for Large Language Models [66.16513246245401]
We propose PersLLM, integrating psychology-grounded principles of personality: social practice, consistency, and dynamic development.
We incorporate personality traits directly into the model parameters, enhancing the model's resistance to induction, promoting consistency, and supporting the dynamic evolution of personality.
arXiv Detail & Related papers (2024-07-17T08:13:22Z)
- Position Paper: Agent AI Towards a Holistic Intelligence [53.35971598180146]
We emphasize developing Agent AI -- an embodied system that integrates large foundation models into agent actions.
In this paper, we propose a novel large action model to achieve embodied intelligent behavior, the Agent Foundation Model.
arXiv Detail & Related papers (2024-02-28T16:09:56Z)
- Personality of AI [0.0]
This research paper delves into the evolving landscape of fine-tuning large language models to align with human users.
Acknowledging the impact of training methods on the formation of undefined personality traits in AI models, the study draws parallels with human fitting processes using personality tests.
The paper serves as a starting point for discussions and developments in the burgeoning field of AI personality alignment.
arXiv Detail & Related papers (2023-12-03T18:23:45Z)
- Enabling High-Level Machine Reasoning with Cognitive Neuro-Symbolic Systems [67.01132165581667]
We propose to enable high-level reasoning in AI systems by integrating cognitive architectures with external neuro-symbolic components.
We illustrate a hybrid framework centered on ACT-R and we discuss the role of generative models in recent and future applications.
arXiv Detail & Related papers (2023-11-13T21:20:17Z)
- MAILS -- Meta AI Literacy Scale: Development and Testing of an AI Literacy Questionnaire Based on Well-Founded Competency Models and Psychological Change- and Meta-Competencies [6.368014180870025]
The questionnaire should be modular (i.e., including different facets that can be used independently of each other) to be flexibly applicable in professional life.
We derived 60 items to represent different facets of AI Literacy according to Ng and colleagues' conceptualisation of AI literacy.
An additional 12 items represent psychological competencies such as problem solving, learning, and emotion regulation with regard to AI.
arXiv Detail & Related papers (2023-02-18T12:35:55Z)
- WenLan 2.0: Make AI Imagine via a Multimodal Foundation Model [74.4875156387271]
We develop a novel foundation model pre-trained with huge multimodal (visual and textual) data.
We show that state-of-the-art results can be obtained on a wide range of downstream tasks.
arXiv Detail & Related papers (2021-10-27T12:25:21Z)
- Machine Common Sense [77.34726150561087]
Machine common sense remains a broad, potentially unbounded problem in artificial intelligence (AI).
This article deals with aspects of modeling commonsense reasoning, focusing on the domain of interpersonal interactions.
arXiv Detail & Related papers (2020-06-15T13:59:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.