AFSPP: Agent Framework for Shaping Preference and Personality with Large Language Models
- URL: http://arxiv.org/abs/2401.02870v1
- Date: Fri, 5 Jan 2024 15:52:59 GMT
- Title: AFSPP: Agent Framework for Shaping Preference and Personality with Large Language Models
- Authors: Zihong He, Changwang Zhang
- Abstract summary: We propose the Agent Framework for Shaping Preference and Personality (AFSPP).
AFSPP explores the multifaceted impact of social networks and subjective consciousness on Agents' preference and personality formation.
It can significantly enhance the efficiency and scope of psychological experiments.
- Score: 4.6251098692410855
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The evolution of Large Language Models (LLMs) has introduced a new paradigm
for investigating human behavior emulation. Recent research has employed
LLM-based Agents to create a sociological research environment, in which agents
exhibit behavior based on the unfiltered characteristics of large language
models. However, these studies overlook the iterative development within a
human-like setting: human preferences and personalities are complex, shaped by
various factors and subject to ongoing change as a result of environmental and
subjective influences. In light of this observation, we propose Agent Framework
for Shaping Preference and Personality (AFSPP), exploring the multifaceted
impact of social networks and subjective consciousness on LLM-based Agents'
preference and personality formation. With AFSPP, we have, for the first time,
successfully replicated several key findings from human personality
experiments. Further AFSPP-based experiments indicate that plan making, sensory
perception, and social networking with subjective information wield the most
pronounced influence on preference shaping. AFSPP can
significantly enhance the efficiency and scope of psychological experiments,
while yielding valuable insights for Trustworthy Artificial Intelligence
research, in particular strategies to prevent undesirable preference and
personality development.
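The abstract names plan making, sensory perception, and social networking with subjective information as the strongest drivers of preference shaping. The paper is summarized here without code, so the sketch below is only an illustration of how such an agent loop might be wired together: the module names, the `query_llm` stub, and the preference-update rule are assumptions, not the authors' implementation.

```python
# Minimal sketch of an AFSPP-style agent loop (illustrative only).
# The module names, the query_llm stub, and the preference-update rule
# are assumptions; the paper does not publish a reference implementation here.
from dataclasses import dataclass, field


def query_llm(prompt: str) -> str:
    """Stub for an LLM call; replace with a real client in an actual experiment."""
    return f"[LLM response to: {prompt[:60]}...]"


@dataclass
class Agent:
    name: str
    preferences: dict[str, float] = field(default_factory=dict)  # topic -> affinity in [-1, 1]
    memory: list[str] = field(default_factory=list)

    def perceive(self, observation: str) -> None:
        """Sensory perception: record what the agent observes in its environment."""
        self.memory.append(f"observed: {observation}")

    def plan(self, goal: str) -> str:
        """Plan making: ask the LLM for a plan conditioned on recent memory and a goal."""
        prompt = f"You are {self.name}. Memory: {self.memory[-5:]}. Goal: {goal}. Make a plan."
        plan = query_llm(prompt)
        self.memory.append(f"planned: {plan}")
        return plan

    def socialize(self, peer: "Agent", topic: str) -> None:
        """Social networking with subjective information: exchange opinions with a peer."""
        opinion = query_llm(f"As {peer.name}, state your subjective view on {topic}.")
        self.memory.append(f"{peer.name} said: {opinion}")
        # Illustrative update rule: drift toward the peer's stance on the topic.
        peer_stance = peer.preferences.get(topic, 0.0)
        own = self.preferences.get(topic, 0.0)
        self.preferences[topic] = 0.8 * own + 0.2 * peer_stance


if __name__ == "__main__":
    alice = Agent("Alice", preferences={"music": 0.1})
    bob = Agent("Bob", preferences={"music": 0.9})
    alice.perceive("A concert is happening nearby.")
    alice.plan("decide whether to attend the concert")
    alice.socialize(bob, "music")
    print(alice.preferences)  # Alice's preference drifts toward Bob's
```

In a real replication, `query_llm` would call an actual LLM, and the fixed drift rule would be replaced by whatever preference-elicitation procedure the paper specifies.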
Related papers
- Persuasion with Large Language Models: a Survey [49.86930318312291]
Large Language Models (LLMs) have created new disruptive possibilities for persuasive communication.
In areas such as politics, marketing, public health, e-commerce, and charitable giving, such LLM systems have already achieved human-level or even super-human persuasiveness.
Our survey suggests that the current and future potential of LLM-based persuasion poses profound ethical and societal risks.
arXiv Detail & Related papers (2024-11-11T10:05:52Z) - Designing LLM-Agents with Personalities: A Psychometric Approach [0.47498241053872914]
This research introduces a novel methodology for assigning quantifiable, controllable and psychometrically validated personalities to Agents.
It seeks to overcome the constraints of human subject studies, proposing Agents as an accessible tool for social science inquiry.
arXiv Detail & Related papers (2024-10-25T01:05:04Z) - PersLLM: A Personified Training Approach for Large Language Models [66.16513246245401]
We propose PersLLM, integrating psychology-grounded principles of personality: social practice, consistency, and dynamic development.
We incorporate personality traits directly into the model parameters, enhancing the model's resistance to induction, promoting consistency, and supporting the dynamic evolution of personality.
arXiv Detail & Related papers (2024-07-17T08:13:22Z) - Is persona enough for personality? Using ChatGPT to reconstruct an agent's latent personality from simple descriptions [2.6080756513915824]
Personality, a fundamental aspect of human cognition, encompasses a range of traits that influence behaviors, thoughts, and emotions.
This paper explores the capabilities of large language models (LLMs) in reconstructing these complex cognitive attributes based only on simple descriptions containing socio-demographic and personality type information.
arXiv Detail & Related papers (2024-06-18T02:32:57Z) - ResearchAgent: Iterative Research Idea Generation over Scientific Literature with Large Language Models [56.08917291606421]
ResearchAgent is a large language model-powered research idea writing agent.
It generates problems, methods, and experiment designs while iteratively refining them based on scientific literature.
We experimentally validate our ResearchAgent on scientific publications across multiple disciplines.
arXiv Detail & Related papers (2024-04-11T13:36:29Z) - Is Cognition and Action Consistent or Not: Investigating Large Language
Model's Personality [12.162460438332152]
We investigate the reliability of Large Language Models (LLMs) in professing human-like personality traits through responses to personality questionnaires.
Our goal is to evaluate the consistency between LLMs' professed personality inclinations and their actual "behavior".
We propose hypotheses for the observed results based on psychological theories and metrics.
arXiv Detail & Related papers (2024-02-22T16:32:08Z) - Sensitivity, Performance, Robustness: Deconstructing the Effect of
Sociodemographic Prompting [64.80538055623842]
Sociodemographic prompting is a technique that steers the output of prompt-based models towards answers that humans with specific sociodemographic profiles would give (a minimal prompt sketch follows after this list).
We show that sociodemographic information affects model predictions and can be beneficial for improving zero-shot learning in subjective NLP tasks.
arXiv Detail & Related papers (2023-09-13T15:42:06Z) - Machine Psychology [54.287802134327485]
We argue that a fruitful direction for research is engaging large language models in behavioral experiments inspired by psychology.
We highlight theoretical perspectives, experimental paradigms, and computational analysis techniques that this approach brings to the table.
It paves the way for a "machine psychology" for generative artificial intelligence (AI) that goes beyond performance benchmarks.
arXiv Detail & Related papers (2023-03-24T13:24:41Z) - Two-Faced Humans on Twitter and Facebook: Harvesting Social Multimedia
for Human Personality Profiling [74.83957286553924]
We infer Myers-Briggs Personality Type indicators by applying a novel multi-view fusion framework called "PERS".
Our experimental results demonstrate PERS's ability to learn from multi-view data for personality profiling by efficiently leveraging the significantly different data arriving from diverse social multimedia sources.
arXiv Detail & Related papers (2021-06-20T10:48:49Z) - Expertise and confidence explain how social influence evolves along
intellective tasks [10.525352489242396]
We study interpersonal influence in small groups of individuals who collectively execute a sequence of intellective tasks.
We report empirical evidence on theories of transactive memory systems, social comparison, and confidence concerning the origins of social influence.
We propose a cognitive dynamical model inspired by these theories to describe the process by which individuals adjust interpersonal influences over time.
arXiv Detail & Related papers (2020-11-13T23:48:25Z)
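As referenced in the sociodemographic-prompting entry above, the technique prepends a sociodemographic profile to the task prompt so the model answers as a person with that profile would. Below is a minimal sketch; the profile fields and template wording are illustrative assumptions, not the exact format used in the surveyed paper.

```python
# Minimal sketch of sociodemographic prompting (illustrative only).
# The profile fields and template wording are assumptions, not the
# exact format used in the surveyed paper.

def build_sociodemographic_prompt(profile: dict[str, str], task: str, text: str) -> str:
    """Prepend a sociodemographic profile so the model answers 'in persona'."""
    persona = ", ".join(f"{k}: {v}" for k, v in profile.items())
    return (
        f"Imagine you are a person with the following profile: {persona}.\n"
        f"Task: {task}\n"
        f"Text: {text}\n"
        f"Answer as that person would."
    )


if __name__ == "__main__":
    prompt = build_sociodemographic_prompt(
        profile={"age": "34", "gender": "female", "occupation": "nurse"},
        task="Rate the toxicity of the text on a scale from 1 (not toxic) to 5 (very toxic).",
        text="That was a ridiculous decision.",
    )
    print(prompt)  # send to any instruction-tuned LLM
```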