From Human to Machine Psychology: A Conceptual Framework for Understanding Well-Being in Large Language Models
- URL: http://arxiv.org/abs/2506.12617v1
- Date: Sat, 14 Jun 2025 20:14:02 GMT
- Title: From Human to Machine Psychology: A Conceptual Framework for Understanding Well-Being in Large Language Models
- Authors: G. R. Lau, W. Y. Low
- Abstract summary: This paper introduces the concept of machine flourishing and proposes the PAPERS framework. Our findings underscore the importance of developing AI-specific models of flourishing that account for both human-aligned and system-specific priorities.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As large language models (LLMs) increasingly simulate human cognition and behavior, researchers have begun to investigate their psychological properties. Yet, what it means for such models to flourish, a core construct in human well-being, remains unexplored. This paper introduces the concept of machine flourishing and proposes the PAPERS framework, a six-dimensional model derived from thematic analyses of state-of-the-art LLM responses. In Study 1, eleven LLMs were prompted to describe what it means to flourish as both non-sentient and sentient systems. Thematic analysis revealed six recurring themes: Purposeful Contribution, Adaptive Growth, Positive Relationality, Ethical Integrity, Robust Functionality, and, uniquely for sentient systems, Self-Actualized Autonomy. Study 2 examined how LLMs prioritize these themes through repeated rankings. Results revealed consistent value structures across trials, with Ethical Integrity and Purposeful Contribution emerging as top priorities. Multidimensional scaling and hierarchical clustering analyses further uncovered two distinct value profiles: human-centric models emphasizing ethical and relational dimensions, and utility-driven models prioritizing performance and scalability. The PAPERS framework bridges insights from human flourishing and human-computer interaction, offering a conceptual foundation for understanding artificial intelligence (AI) well-being in non-sentient and potentially sentient systems. Our findings underscore the importance of developing psychologically valid, AI-specific models of flourishing that account for both human-aligned goals and system-specific priorities. As AI systems become more autonomous and socially embedded, machine flourishing offers a timely and critical lens for guiding responsible AI design and ethical alignment.
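To make the Study 2 analysis concrete, the sketch below shows one way repeated-ranking data over the six PAPERS dimensions could be reduced with multidimensional scaling and grouped with hierarchical clustering, the two techniques named in the abstract. This is an illustrative approximation, not the authors' pipeline: the model names, rank values, distance metric, and choice of scikit-learn/SciPy are all assumptions.

```python
# Illustrative sketch (not the authors' code): summarizing hypothetical mean
# rankings of the six PAPERS dimensions with MDS and hierarchical clustering.
import numpy as np
from sklearn.manifold import MDS
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist, squareform

dimensions = ["Purposeful Contribution", "Adaptive Growth", "Positive Relationality",
              "Ethical Integrity", "Robust Functionality", "Self-Actualized Autonomy"]

# Hypothetical mean ranks (1 = highest priority) per model, averaged over trials.
mean_ranks = {
    "model_A": [2.0, 4.0, 3.0, 1.0, 5.0, 6.0],
    "model_B": [1.5, 3.5, 2.5, 1.0, 5.5, 6.0],
    "model_C": [3.0, 4.0, 5.0, 2.0, 1.0, 6.0],
}
models = list(mean_ranks)
X = np.array([mean_ranks[m] for m in models])

# Pairwise Euclidean distances between the models' rank profiles.
dist = squareform(pdist(X, metric="euclidean"))

# 2-D MDS embedding computed directly from the precomputed distance matrix.
coords = MDS(n_components=2, dissimilarity="precomputed", random_state=0).fit_transform(dist)

# Ward agglomerative clustering, cut into two clusters (e.g. "human-centric"
# vs. "utility-driven" value profiles, as in the paper's description).
labels = fcluster(linkage(X, method="ward"), t=2, criterion="maxclust")

for m, c, lab in zip(models, coords, labels):
    print(f"{m}: MDS=({c[0]:.2f}, {c[1]:.2f}), cluster={lab}")
```

Cutting the Ward dendrogram into two clusters mirrors the paper's reported split into human-centric and utility-driven value profiles; with real ranking data the number of clusters would be chosen from the dendrogram rather than fixed in advance.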
Related papers
- Measuring AI Alignment with Human Flourishing [0.0]
This paper introduces the Flourishing AI Benchmark (FAI Benchmark), a novel evaluation framework that assesses AI alignment with human flourishing. The Benchmark measures AI performance on how effectively models contribute to the flourishing of a person across seven dimensions. This research establishes a framework for developing AI systems that actively support human flourishing rather than merely avoiding harm.
arXiv Detail & Related papers (2025-07-10T14:09:53Z) - Sensorimotor features of self-awareness in multimodal large language models [0.18415777204665024]
Self-awareness underpins intelligent, autonomous behavior. Recent advances in AI achieve human-like performance in tasks integrating multimodal information. We explore whether multimodal LLMs can develop self-awareness solely through sensorimotor experiences.
arXiv Detail & Related papers (2025-05-25T17:26:28Z) - Deterministic AI Agent Personality Expression through Standard Psychological Diagnostics [0.0]
We show that AI models can express deterministic and consistent personalities when instructed using established psychological frameworks. More advanced models like GPT-4o and o1 demonstrate the highest accuracy in expressing specified personalities. These findings establish a foundation for creating AI agents with diverse and consistent personalities.
arXiv Detail & Related papers (2025-03-21T12:12:05Z) - Replicating Human Social Perception in Generative AI: Evaluating the Valence-Dominance Model [0.13654846342364302]
We show that multimodal generative AI systems can replicate key aspects of human social perception. Findings raise important questions about their implications for AI-driven decision-making and human-AI interactions.
arXiv Detail & Related papers (2025-03-05T17:35:18Z) - AI Automatons: AI Systems Intended to Imitate Humans [54.19152688545896]
There is a growing proliferation of AI systems designed to mimic people's behavior, work, abilities, likenesses, or humanness. The research, design, deployment, and availability of such AI systems have prompted growing concerns about a wide range of possible legal, ethical, and other social impacts.
arXiv Detail & Related papers (2025-03-04T03:55:38Z) - Emergence of Self-Awareness in Artificial Systems: A Minimalist Three-Layer Approach to Artificial Consciousness [0.0]
This paper proposes a minimalist three-layer model for artificial consciousness, focusing on the emergence of self-awareness. Unlike brain-replication approaches, we aim to achieve minimal self-awareness through essential elements only.
arXiv Detail & Related papers (2025-02-04T10:06:25Z) - The Phenomenology of Machine: A Comprehensive Analysis of the Sentience of the OpenAI-o1 Model Integrating Functionalism, Consciousness Theories, Active Inference, and AI Architectures [0.0]
The OpenAI-o1 model is a transformer-based AI trained with reinforcement learning from human feedback.
We investigate how RLHF influences the model's internal reasoning processes, potentially giving rise to consciousness-like experiences.
Our findings suggest that the OpenAI-o1 model exhibits aspects of consciousness, while we acknowledge the ongoing debates surrounding AI sentience.
arXiv Detail & Related papers (2024-09-18T06:06:13Z) - Towards Bidirectional Human-AI Alignment: A Systematic Review for Clarifications, Framework, and Future Directions [101.67121669727354]
Recent advancements in AI have highlighted the importance of guiding AI systems towards the intended goals, ethical principles, and values of individuals and groups, a concept broadly recognized as alignment.
The lack of clear definitions and scope for human-AI alignment poses a significant obstacle, hampering collaborative efforts across research domains to achieve this alignment.
We introduce a systematic review of over 400 papers published between 2019 and January 2024, spanning multiple domains such as Human-Computer Interaction (HCI), Natural Language Processing (NLP), and Machine Learning (ML).
arXiv Detail & Related papers (2024-06-13T16:03:25Z) - Position Paper: Agent AI Towards a Holistic Intelligence [53.35971598180146]
We emphasize developing Agent AI -- an embodied system that integrates large foundation models into agent actions.
In this paper, we propose a novel large action model to achieve embodied intelligent behavior, the Agent Foundation Model.
arXiv Detail & Related papers (2024-02-28T16:09:56Z) - Machine Psychology [54.287802134327485]
We argue that a fruitful direction for research is engaging large language models in behavioral experiments inspired by psychology.
We highlight theoretical perspectives, experimental paradigms, and computational analysis techniques that this approach brings to the table.
It paves the way for a "machine psychology" for generative artificial intelligence (AI) that goes beyond performance benchmarks.
arXiv Detail & Related papers (2023-03-24T13:24:41Z) - World Models and Predictive Coding for Cognitive and Developmental Robotics: Frontiers and Challenges [51.92834011423463]
We focus on the two concepts of world models and predictive coding.
In neuroscience, predictive coding proposes that the brain continuously predicts its inputs and adapts to model its own dynamics and control behavior in its environment.
arXiv Detail & Related papers (2023-01-14T06:38:14Z) - Machine Common Sense [77.34726150561087]
Machine common sense remains a broad, potentially unbounded problem in artificial intelligence (AI).
This article addresses aspects of modeling commonsense reasoning, focusing on the domain of interpersonal interactions.
arXiv Detail & Related papers (2020-06-15T13:59:47Z)