Addressing Social Misattributions of Large Language Models: An HCXAI-based Approach
- URL: http://arxiv.org/abs/2403.17873v1
- Date: Tue, 26 Mar 2024 17:02:42 GMT
- Title: Addressing Social Misattributions of Large Language Models: An HCXAI-based Approach
- Authors: Andrea Ferrario, Alberto Termine, Alessandro Facchini
- Abstract summary: We suggest extending the Social Transparency (ST) framework to address the risks of social misattributions in Large Language Models (LLMs).
LLMs may lead to mismatches between designers' intentions and users' perceptions of social attributes, risking the promotion of emotional manipulation and dangerous behaviors.
We propose enhancing the ST framework with a fifth 'W-question' to clarify the specific social attributions assigned to LLMs by their designers and users.
- Score: 45.74830585715129
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Human-centered explainable AI (HCXAI) advocates for the integration of social aspects into AI explanations. Central to the HCXAI discourse is the Social Transparency (ST) framework, which aims to make the socio-organizational context of AI systems accessible to their users. In this work, we suggest extending the ST framework to address the risks of social misattributions in Large Language Models (LLMs), particularly in sensitive areas like mental health. In fact, LLMs, which are remarkably capable of simulating roles and personas, may lead to mismatches between designers' intentions and users' perceptions of social attributes, risking the promotion of emotional manipulation and dangerous behaviors, cases of epistemic injustice, and unwarranted trust. To address these issues, we propose enhancing the ST framework with a fifth 'W-question' to clarify the specific social attributions assigned to LLMs by their designers and users. This addition aims to bridge the gap between LLM capabilities and user perceptions, promoting the ethically responsible development and use of LLM-based technology.
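The sketch below is an illustrative reading of the proposal, not the authors' implementation: the ST framework's existing W-questions (who, what, when, why) and the proposed fifth question about social attributions are represented as a minimal Python record. The class name, field names, and the designer-intended vs. user-perceived split are assumptions introduced here for illustration.

```python
# Illustrative sketch only: the paper describes the ST framework and its proposed
# fifth W-question conceptually; this dataclass and its field names are assumptions
# made here to show how the extended question set might be recorded alongside an
# LLM-mediated interaction.
from dataclasses import dataclass, field


@dataclass
class SocialTransparencyRecord:
    """One LLM-mediated interaction annotated with the ST 'W-questions'."""
    who: str          # who interacted with (or configured) the system
    what: str         # what output or decision was produced
    when: str         # when the interaction took place
    why: str          # why the output was produced or accepted
    # Proposed fifth question (hypothetical field name): which social
    # attributions (roles, personas, expertise) designers intend and
    # users actually ascribe to the LLM.
    social_attributions: dict = field(default_factory=dict)


record = SocialTransparencyRecord(
    who="on-call counsellor using the chat assistant",
    what="assistant suggested coping strategies",
    when="2024-03-26T09:15Z",
    why="user asked for support with acute anxiety",
    social_attributions={
        "designer_intended": "informational assistant, not a therapist",
        "user_perceived": "empathetic confidant with clinical authority",
    },
)

# A mismatch between intended and perceived attributions is the kind of
# social misattribution the proposed fifth W-question is meant to surface.
print(record.social_attributions)
```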
Related papers
- Persuasion with Large Language Models: a Survey [49.86930318312291]
Large Language Models (LLMs) have created new disruptive possibilities for persuasive communication.
In areas such as politics, marketing, public health, e-commerce, and charitable giving, such LLM systems have already achieved human-level or even super-human persuasiveness.
Our survey suggests that the current and future potential of LLM-based persuasion poses profound ethical and societal risks.
arXiv Detail & Related papers (2024-11-11T10:05:52Z) - CogErgLLM: Exploring Large Language Model Systems Design Perspective Using Cognitive Ergonomics [0.0]
Integrating cognitive ergonomics with LLMs is crucial for improving safety, reliability, and user satisfaction in human-AI interactions.
Current LLM designs often lack this integration, resulting in systems that may not fully align with human cognitive capabilities and limitations.
arXiv Detail & Related papers (2024-07-03T07:59:52Z) - An Empirical Design Justice Approach to Identifying Ethical Considerations in the Intersection of Large Language Models and Social Robotics [0.31378963995109616]
The integration of Large Language Models (LLMs) in social robotics presents a unique set of ethical challenges and social impacts.
This research sets out to identify ethical considerations that arise in the design and development of these two technologies in combination.
arXiv Detail & Related papers (2024-06-10T15:53:50Z) - The Impossibility of Fair LLMs [59.424918263776284]
The need for fair AI is increasingly clear in the era of large language models (LLMs).
We review the technical frameworks that machine learning researchers have used to evaluate fairness.
We develop guidelines for the more realistic goal of achieving fairness in particular use cases.
arXiv Detail & Related papers (2024-05-28T04:36:15Z) - Socialized Learning: A Survey of the Paradigm Shift for Edge Intelligence in Networked Systems [62.252355444948904]
This paper presents the findings of a literature review on the integration of edge intelligence (EI) and socialized learning (SL).
SL is a learning paradigm predicated on social principles and behaviors, aimed at amplifying the collaborative capacity and collective intelligence of agents.
We elaborate on three integrated components: socialized architecture, socialized training, and socialized inference, analyzing their strengths and weaknesses.
arXiv Detail & Related papers (2024-04-20T11:07:29Z) - SOTOPIA-$π$: Interactive Learning of Socially Intelligent Language Agents [73.35393511272791]
We propose an interactive learning method, SOTOPIA-$\pi$, that improves the social intelligence of language agents.
This method leverages behavior cloning and self-reinforcement training on social interaction data filtered according to large language model (LLM) ratings.
arXiv Detail & Related papers (2024-03-13T17:17:48Z) - Do LLM Agents Exhibit Social Behavior? [5.094340963261968]
State-Understanding-Value-Action (SUVA) is a framework for systematically analyzing LLM responses in social contexts.
It assesses social behavior through both the models' final decisions and the response generation processes leading to those decisions.
We demonstrate that utterance-based reasoning reliably predicts LLMs' final actions.
arXiv Detail & Related papers (2023-12-23T08:46:53Z) - Voluminous yet Vacuous? Semantic Capital in an Age of Large Language Models [0.0]
Large Language Models (LLMs) have emerged as transformative forces in the realm of natural language processing, wielding the power to generate human-like text.
This paper explores the evolution, capabilities, and limitations of these models, while highlighting ethical concerns they raise.
arXiv Detail & Related papers (2023-05-29T09:26:28Z) - Training Socially Aligned Language Models on Simulated Social Interactions [99.39979111807388]
Social alignment in AI systems aims to ensure that these models behave according to established societal values.
Current language models (LMs) are trained to rigidly replicate their training corpus in isolation.
This work presents a novel training paradigm that permits LMs to learn from simulated social interactions.
arXiv Detail & Related papers (2023-05-26T14:17:36Z) - Expanding Explainability: Towards Social Transparency in AI systems [20.41177660318785]
Social Transparency (ST) is a socio-technically informed perspective that incorporates the socio-organizational context into explaining AI-mediated decision-making.
Our work contributes to the discourse of Human-Centered XAI by expanding the design space of XAI.
arXiv Detail & Related papers (2021-01-12T19:44:27Z)