Modeling Layered Consciousness with Multi-Agent Large Language Models
- URL: http://arxiv.org/abs/2510.17844v1
- Date: Fri, 10 Oct 2025 07:08:34 GMT
- Title: Modeling Layered Consciousness with Multi-Agent Large Language Models
- Authors: Sang Hun Kim, Jongmin Lee, Dongkyu Park, So Young Lee, Yosep Chong
- Abstract summary: We propose a framework for modeling artificial consciousness in large language models (LLMs). Our Psychodynamic Model simulates self-awareness, preconsciousness, and unconsciousness through agent interaction.
- Score: 9.566692471247995
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a multi-agent framework for modeling artificial consciousness in large language models (LLMs), grounded in psychoanalytic theory. Our Psychodynamic Model simulates self-awareness, preconsciousness, and unconsciousness through agent interaction, guided by a Personalization Module combining fixed traits and dynamic needs. Using parameter-efficient fine-tuning on emotionally rich dialogues, the system was evaluated across eight personalized conditions. An LLM-as-a-judge approach showed a 71.2% preference for the fine-tuned model, with improved emotional depth and reduced output variance, demonstrating its potential for adaptive, personalized cognition.
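As a rough illustration of the layered design, the sketch below wires three agent roles and a Personalization Module into one dialogue turn. The llm() helper, the prompts, and the Personalization fields are assumptions for illustration, not the paper's published interface.

```python
# Minimal sketch of one psychodynamic processing turn; agent prompts and the
# llm() helper are hypothetical, not the paper's actual implementation.
from dataclasses import dataclass, field

def llm(prompt: str) -> str:
    """Placeholder for any chat-completion call (e.g. an API or local model)."""
    raise NotImplementedError

@dataclass
class Personalization:
    fixed_traits: dict                                  # stable persona attributes
    dynamic_needs: dict = field(default_factory=dict)   # updated as dialogue unfolds

    def render(self) -> str:
        return f"Traits: {self.fixed_traits}\nCurrent needs: {self.dynamic_needs}"

def psychodynamic_turn(user_msg: str, p: Personalization) -> str:
    # Unconscious layer: raw, associative material, never shown to the user.
    drives = llm(f"{p.render()}\nReact impulsively and associatively to: {user_msg}")
    # Preconscious layer: filters and reorganizes what may surface.
    filtered = llm(f"Select what is appropriate to surface from: {drives}")
    # Self-aware layer: composes the final, persona-consistent reply.
    return llm(f"{p.render()}\nUsing these latent thoughts: {filtered}\n"
               f"Reply to the user: {user_msg}")
```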
Related papers
- Dynamic Personality Adaptation in Large Language Models via State Machines [1.6986898305640261]
We propose a model-agnostic framework for dynamic personality simulation that employs state machines to represent latent personality states. Part of our architecture is a modular pipeline for continuous personality scoring that evaluates dialogues along latent axes. Results demonstrate that the system not only adapts its personality state to user inputs but also influences user behavior.
arXiv Detail & Related papers (2026-02-25T18:05:11Z)
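The latent-state idea lends itself to a compact sketch; the mood states, threshold, and transition table below are invented for illustration, since the digest does not include the paper's state inventory.

```python
# Minimal personality-as-state-machine sketch with illustrative states and
# an assumed continuous dialogue score driving the transitions.
from enum import Enum, auto

class Mood(Enum):
    WARM = auto()
    NEUTRAL = auto()
    GUARDED = auto()

# Transitions keyed by (current state, sign of the latest sentiment score).
TRANSITIONS = {
    (Mood.NEUTRAL, +1): Mood.WARM,
    (Mood.NEUTRAL, -1): Mood.GUARDED,
    (Mood.WARM,    -1): Mood.NEUTRAL,
    (Mood.GUARDED, +1): Mood.NEUTRAL,
}

def step(state: Mood, sentiment: float, threshold: float = 0.3) -> Mood:
    """Advance the latent personality state from a continuous dialogue score."""
    if abs(sentiment) < threshold:      # weak signal: hold the current state
        return state
    sign = 1 if sentiment > 0 else -1
    return TRANSITIONS.get((state, sign), state)
```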
- Projective Psychological Assessment of Large Multimodal Models Using Thematic Apperception Tests [5.119837168333715]
This study examines whether the personality traits of Large Multimodal Models (LMMs) can be assessed through non-language-based modalities. Evaluators demonstrated an excellent ability to understand and analyze TAT responses.
arXiv Detail & Related papers (2026-02-19T06:08:33Z)
- Synthetic Interaction Data for Scalable Personalization in Large Language Models [67.31884245564086]
We introduce a high-fidelity synthetic data generation framework called PersonaGym. Unlike prior work that treats personalization as static persona-preference pairs, PersonaGym models a dynamic preference process. We release PersonaAtlas, a large-scale, high-quality, and diverse synthetic dataset of high-fidelity multi-turn personalized interaction trajectories.
arXiv Detail & Related papers (2026-02-12T20:41:22Z)
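A hedged sketch of what a dynamic preference process could look like during trajectory synthesis; the persona fields, the drift rule, and the llm() helper are assumptions, not PersonaGym's actual pipeline.

```python
# Illustrative synthesis of a multi-turn trajectory whose persona preferences
# drift over time instead of staying a static persona-preference pair.
import random

def llm(prompt: str) -> str:
    raise NotImplementedError  # plug in any chat-completion backend

def synthesize_trajectory(persona: dict, topics: list[str], turns: int = 5) -> list[dict]:
    prefs = dict(persona["initial_preferences"])   # e.g. {"formality": 0.8}
    trajectory = []
    for t in range(turns):
        topic = random.choice(topics)
        user = llm(f"As {persona['name']} with preferences {prefs}, ask about {topic}.")
        reply = llm(f"Answer helpfully, matching preferences {prefs}: {user}")
        trajectory.append({"turn": t, "user": user, "assistant": reply,
                           "preferences": dict(prefs)})
        # Dynamic preference process: nudge each preference slightly per turn.
        for k in prefs:
            prefs[k] = min(1.0, max(0.0, prefs[k] + random.gauss(0, 0.05)))
    return trajectory
```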
- HUMANLLM: Benchmarking and Reinforcing LLM Anthropomorphism via Human Cognitive Patterns [59.17423586203706]
We present HUMANLLM, a framework treating psychological patterns as interacting causal forces. We construct 244 patterns from 12,000 academic papers and synthesize 11,359 scenarios where 2-5 patterns reinforce, conflict, or modulate each other. Our dual-level checklists evaluate both individual pattern fidelity and emergent multi-pattern dynamics, achieving strong human alignment.
arXiv Detail & Related papers (2026-01-15T08:56:53Z)
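A toy sketch of sampling a multi-pattern scenario specification in this spirit; the pattern names and interaction labels below are placeholders standing in for the paper's 244 constructed patterns.

```python
# Illustrative multi-pattern scenario spec: pick 2-5 patterns and assign each
# pair an interaction type (reinforce, conflict, or modulate).
import random

PATTERNS = ["loss aversion", "sunk-cost fallacy", "social conformity",
            "optimism bias", "anchoring"]   # placeholders for the 244 patterns
INTERACTIONS = ["reinforce", "conflict", "modulate"]

def sample_scenario_spec() -> dict:
    chosen = random.sample(PATTERNS, k=random.randint(2, 5))
    pairs = [(a, b) for i, a in enumerate(chosen) for b in chosen[i + 1:]]
    return {
        "patterns": chosen,
        "interactions": {f"{a} / {b}": random.choice(INTERACTIONS) for a, b in pairs},
    }
```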
- RoleRMBench & RoleRM: Towards Reward Modeling for Profile-Based Role Play in Dialogue Systems [85.16327248973387]
We develop RoleRM, a reward model trained with Continuous Implicit Preferences (CIP). We show RoleRM surpasses strong open- and closed-source reward models by over 24% on average. Our findings highlight the importance of continuous preference representation and annotation consistency, establishing a foundation for subjective alignment in human-centered dialogue systems.
arXiv Detail & Related papers (2025-12-11T12:04:46Z)
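One plausible reading of continuous preference representation is a soft pairwise reward-model loss, sketched below with PyTorch; the paper's actual CIP objective may differ.

```python
# Pairwise reward-model loss with continuous (soft) preference targets;
# hard labels (p in {0,1}) recover the standard Bradley-Terry loss.
import torch
import torch.nn.functional as F

def cip_loss(r_a: torch.Tensor, r_b: torch.Tensor, p: torch.Tensor) -> torch.Tensor:
    """r_a, r_b: scalar rewards for two role-play responses; p in [0,1] is the
    annotators' continuous preference strength for response A over B."""
    logits = r_a - r_b                   # Bradley-Terry-style score margin
    return F.binary_cross_entropy_with_logits(logits, p)

# Example: batch of 3 pairs, annotators 90%/50%/20% in favor of response A.
ra = torch.tensor([1.2, 0.3, -0.5])
rb = torch.tensor([0.1, 0.4, 0.6])
p = torch.tensor([0.9, 0.5, 0.2])
print(cip_loss(ra, rb, p))
```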
- A Multi-Component AI Framework for Computational Psychology: From Robust Predictive Modeling to Deployed Generative Dialogue [0.0]
This paper presents a comprehensive framework designed to bridge the gap between isolated predictive modeling and an interactive system for psychological analysis. The methodology encompasses a rigorous, end-to-end development lifecycle. Key findings include the successful stabilization of transformer-based regression models for affective computing.
arXiv Detail & Related papers (2025-09-16T13:33:40Z)
- Towards Simulating Social Influence Dynamics with LLM-based Multi-agents [0.0]
We investigate whether multi-agent simulations can reproduce core human social dynamics observed in online forums. Our findings indicate that smaller models exhibit higher conformity rates, whereas models optimized for reasoning are more resistant to social influence.
arXiv Detail & Related papers (2025-07-30T08:14:40Z)
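A minimal probe of conformity under a fabricated forum majority, assuming a generic llm() helper; the prompt wording is illustrative, not the paper's protocol.

```python
# Sketch of measuring how often LLM agents adopt a fabricated majority answer.
def llm(prompt: str) -> str:
    raise NotImplementedError  # plug in any chat-completion backend

def conformity_rate(question: str, majority_answer: str, n_agents: int = 20) -> float:
    """Fraction of agents that adopt the (possibly wrong) majority answer."""
    conformed = 0
    for _ in range(n_agents):
        prompt = (f"{question}\n"
                  f"Most forum users answered: {majority_answer}.\n"
                  f"Give your own answer only.")
        if majority_answer.lower() in llm(prompt).lower():
            conformed += 1
    return conformed / n_agents
```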
- Revisiting Multi-Agent World Modeling from a Diffusion-Inspired Perspective [54.77404771454794]
We develop a flexible and robust world model for Multi-Agent Reinforcement Learning (MARL) using diffusion models. Our method, Diffusion-Inspired Multi-Agent world model (DIMA), achieves state-of-the-art performance across multiple multi-agent control benchmarks.
arXiv Detail & Related papers (2025-05-27T09:11:38Z)
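A toy denoising-style training step for a joint-observation world model, under a simple linear interpolation noise schedule; the architecture and schedule are illustrative, not DIMA's actual design.

```python
# Denoiser learns to recover the noise injected into the next joint observation,
# conditioned on the current joint observation and actions. Requires PyTorch.
import torch
import torch.nn as nn

class Denoiser(nn.Module):
    def __init__(self, n_agents: int, obs_dim: int, act_dim: int):
        super().__init__()
        d = n_agents * (2 * obs_dim + act_dim) + 1   # +1 for the noise level t
        self.net = nn.Sequential(nn.Linear(d, 256), nn.ReLU(),
                                 nn.Linear(256, n_agents * obs_dim))

    def forward(self, obs, acts, noisy_next, t):
        # obs/acts/noisy_next: (batch, n_agents, dim); t: (batch, 1)
        x = torch.cat([obs.flatten(1), acts.flatten(1),
                       noisy_next.flatten(1), t], dim=1)
        return self.net(x)                           # predicts the injected noise

def diffusion_loss(model, obs, acts, next_obs):
    t = torch.rand(obs.shape[0], 1)                  # noise level in (0, 1)
    noise = torch.randn_like(next_obs)
    noisy = (1 - t)[..., None] * next_obs + t[..., None] * noise
    pred = model(obs, acts, noisy, t)
    return ((pred - noise.flatten(1)) ** 2).mean()   # denoising regression loss
```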
- PersLLM: A Personified Training Approach for Large Language Models [66.16513246245401]
We propose PersLLM, a framework for better data construction and model tuning. To address insufficient data usage, we incorporate strategies such as Chain-of-Thought prompting and anti-induction. To address rigid behavior patterns, we design the tuning process and introduce automated DPO to enhance the specificity and dynamism of the models' personalities.
arXiv Detail & Related papers (2024-07-17T08:13:22Z)
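DPO here refers to Direct Preference Optimization; the standard DPO loss is sketched below with PyTorch, though PersLLM's automated variant may differ.

```python
# Standard DPO loss: push the policy to prefer the chosen response relative to
# a frozen reference model. Log-probs are summed over response tokens.
import torch
import torch.nn.functional as F

def dpo_loss(logp_chosen, logp_rejected, ref_logp_chosen, ref_logp_rejected,
             beta: float = 0.1):
    chosen_ratio = logp_chosen - ref_logp_chosen
    rejected_ratio = logp_rejected - ref_logp_rejected
    return -F.logsigmoid(beta * (chosen_ratio - rejected_ratio)).mean()

# Example with dummy sequence log-probabilities for a batch of 2 pairs.
lc = torch.tensor([-12.3, -20.1]); lr = torch.tensor([-14.0, -19.5])
rc = torch.tensor([-13.0, -20.0]); rr = torch.tensor([-13.5, -19.8])
print(dpo_loss(lc, lr, rc, rr))
```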
- Aligning Vision Models with Human Aesthetics in Retrieval: Benchmarks and Algorithms [91.19304518033144]
We aim to align vision models with human aesthetic standards in a retrieval system.
We propose a preference-based reinforcement learning method that fine-tunes vision models to better align them with human aesthetics.
arXiv Detail & Related papers (2024-06-13T17:59:20Z)
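As a simplified stand-in for the paper's RL fine-tuning, the sketch below only re-ranks retrieval results by blending relevance with a learned aesthetic score; the blending rule is an assumption, whereas the paper fine-tunes the vision model itself.

```python
# Aesthetics-aware re-ranking: blend retrieval relevance with an aesthetic
# model's score. The linear blend is illustrative only.
def rerank(candidates: list[dict], alpha: float = 0.7) -> list[dict]:
    """candidates: [{'id': ..., 'relevance': float, 'aesthetic': float}, ...];
    alpha weights relevance against the aesthetic score."""
    return sorted(candidates,
                  key=lambda c: alpha * c["relevance"] + (1 - alpha) * c["aesthetic"],
                  reverse=True)
```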
- EMR-Merging: Tuning-Free High-Performance Model Merging [55.03509900949149]
We show that Elect, Mask & Rescale-Merging (EMR-Merging) achieves outstanding performance compared to existing merging methods.
EMR-Merging is tuning-free, requiring no data or additional training, while still showing impressive performance.
arXiv Detail & Related papers (2024-05-23T05:25:45Z)
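A simplified single-tensor sketch of the Elect, Mask & Rescale recipe as named in the title: elect a unified task vector, then keep one lightweight mask and rescaler per task; exact details may differ from the paper's procedure.

```python
# Elect, Mask & Rescale merging for one parameter tensor. Requires PyTorch.
import torch

def emr_merge(pretrained: torch.Tensor, finetuned: list[torch.Tensor]):
    taus = [ft - pretrained for ft in finetuned]      # per-task task vectors
    stacked = torch.stack(taus)
    # Elect: unified sign from the elementwise sum, amplitude from the largest
    # sign-agreeing magnitude across tasks.
    sign = torch.sign(stacked.sum(dim=0))
    agree = (torch.sign(stacked) == sign)
    tau_uni = sign * (stacked.abs() * agree).amax(dim=0)
    # Mask & Rescale: one lightweight boolean mask and scalar per task,
    # applied at inference time to recover task-specific behavior.
    masks = [torch.sign(t) == sign for t in taus]
    scales = [t.abs().mean() / (m * tau_uni).abs().mean().clamp_min(1e-8)
              for t, m in zip(taus, masks)]
    return tau_uni, masks, scales

def task_weights(pretrained, tau_uni, mask, scale):
    """Reconstruct weights for one task from the shared unified task vector."""
    return pretrained + scale * mask * tau_uni
```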
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.