Personas Evolved: Designing Ethical LLM-Based Conversational Agent Personalities
- URL: http://arxiv.org/abs/2502.20513v1
- Date: Thu, 27 Feb 2025 20:46:54 GMT
- Title: Personas Evolved: Designing Ethical LLM-Based Conversational Agent Personalities
- Authors: Smit Desai, Mateusz Dubiel, Nima Zargham, Thomas Mildner, Laura Spillner,
- Abstract summary: Large Language Models (LLMs) have revolutionized Conversational User Interfaces (CUIs). LLMs generate responses dynamically from vast datasets, making their behavior less predictable and harder to govern. This workshop aims to bridge the gap between CUI and broader AI communities by fostering a cross-disciplinary dialogue.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The emergence of Large Language Models (LLMs) has revolutionized Conversational User Interfaces (CUIs), enabling more dynamic, context-aware, and human-like interactions across diverse domains, from social sciences to healthcare. However, the rapid adoption of LLM-based personas raises critical ethical and practical concerns, including bias, manipulation, and unforeseen social consequences. Unlike traditional CUIs, where personas are carefully designed with clear intent, LLM-based personas generate responses dynamically from vast datasets, making their behavior less predictable and harder to govern. This workshop aims to bridge the gap between CUI and broader AI communities by fostering a cross-disciplinary dialogue on the responsible design and evaluation of LLM-based personas. Bringing together researchers, designers, and practitioners, we will explore best practices, develop ethical guidelines, and promote frameworks that ensure transparency, inclusivity, and user-centered interactions. By addressing these challenges collaboratively, we seek to shape the future of LLM-driven CUIs in ways that align with societal values and expectations.
Related papers
- Persuasion with Large Language Models: a Survey
Large Language Models (LLMs) have created new disruptive possibilities for persuasive communication.
In areas such as politics, marketing, public health, e-commerce, and charitable giving, such LLM systems have already achieved human-level or even super-human persuasiveness.
Our survey suggests that the current and future potential of LLM-based persuasion poses profound ethical and societal risks.
arXiv Detail & Related papers (2024-11-11T10:05:52Z)
- Evaluating Cultural and Social Awareness of LLM Web Agents
We introduce CASA, a benchmark designed to assess large language models' sensitivity to cultural and social norms.
Our approach evaluates LLM agents' ability to detect and appropriately respond to norm-violating user queries and observations.
Experiments show that current LLMs perform significantly better in non-agent environments.
arXiv Detail & Related papers (2024-10-30T17:35:44Z)
- Constraining Participation: Affordances of Feedback Features in Interfaces to Large Language Models
Large language models (LLMs) are now accessible to anyone with a computer, a web browser, and an internet connection via browser-based interfaces.
This paper examines the affordances of interactive feedback features in ChatGPT's interface, analysing how they shape user input and participation in iteration.
arXiv Detail & Related papers (2024-08-27T13:50:37Z)
- PersLLM: A Personified Training Approach for Large Language Models
We propose PersLLM, integrating psychology-grounded principles of personality: social practice, consistency, and dynamic development.
We incorporate personality traits directly into the model parameters, enhancing the model's resistance to induction, promoting consistency, and supporting the dynamic evolution of personality.
arXiv Detail & Related papers (2024-07-17T08:13:22Z)
- CogErgLLM: Exploring Large Language Model Systems Design Perspective Using Cognitive Ergonomics
Integrating cognitive ergonomics with LLMs is crucial for improving safety, reliability, and user satisfaction in human-AI interactions.
Current LLM designs often lack this integration, resulting in systems that may not fully align with human cognitive capabilities and limitations.
arXiv Detail & Related papers (2024-07-03T07:59:52Z)
- Navigating LLM Ethics: Advancements, Challenges, and Future Directions
This study addresses ethical issues surrounding Large Language Models (LLMs) within the field of artificial intelligence.
It explores the common ethical challenges posed by both LLMs and other AI systems.
It highlights challenges such as hallucination, verifiable accountability, and decoding censorship complexity.
arXiv Detail & Related papers (2024-05-14T15:03:05Z)
- Integrating Emotional and Linguistic Models for Ethical Compliance in Large Language Models
This research develops advanced methodologies for Large Language Models (LLMs) to better manage linguistic behaviors related to emotions and ethics.
We introduce DIKE, an adversarial framework that enhances the LLMs' ability to internalize and reflect global human values.
arXiv Detail & Related papers (2024-05-11T19:26:00Z)
- Addressing Social Misattributions of Large Language Models: An HCXAI-based Approach
We suggest extending the Social Transparency (ST) framework to address the risks of social misattributions in Large Language Models (LLMs).
LLMs may lead to mismatches between designers' intentions and users' perceptions of social attributes, risking the promotion of emotional manipulation and dangerous behaviors.
We propose enhancing the ST framework with a fifth 'W-question' to clarify the specific social attributions assigned to LLMs by their designers and users.
arXiv Detail & Related papers (2024-03-26T17:02:42Z)
- AntEval: Evaluation of Social Interaction Competencies in LLM-Driven Agents
Large Language Models (LLMs) have demonstrated their ability to replicate human behaviors across a wide range of scenarios.
However, their capability in handling complex, multi-character social interactions has yet to be fully explored.
We introduce the Multi-Agent Interaction Evaluation Framework (AntEval), encompassing a novel interaction framework and evaluation methods.
arXiv Detail & Related papers (2024-01-12T11:18:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.