Why human-AI relationships need socioaffective alignment
- URL: http://arxiv.org/abs/2502.02528v1
- Date: Tue, 04 Feb 2025 17:50:08 GMT
- Title: Why human-AI relationships need socioaffective alignment
- Authors: Hannah Rose Kirk, Iason Gabriel, Chris Summerfield, Bertie Vidgen, Scott A. Hale
- Abstract summary: Humans strive to design safe AI systems that align with our goals and remain under our control.
As AI capabilities advance, we face a new challenge: the emergence of deeper, more persistent relationships between humans and AI systems.
- Abstract: Humans strive to design safe AI systems that align with our goals and remain under our control. However, as AI capabilities advance, we face a new challenge: the emergence of deeper, more persistent relationships between humans and AI systems. We explore how increasingly capable AI agents may generate the perception of deeper relationships with users, especially as AI becomes more personalised and agentic. This shift, from transactional interaction to ongoing sustained social engagement with AI, necessitates a new focus on socioaffective alignment: how an AI system behaves within the social and psychological ecosystem co-created with its user, where preferences and perceptions evolve through mutual influence. Addressing these dynamics involves resolving key intrapersonal dilemmas, including balancing immediate versus long-term well-being, protecting autonomy, and managing AI companionship alongside the desire to preserve human social bonds. By framing these challenges through a notion of basic psychological needs, we seek AI systems that support, rather than exploit, our fundamental nature as social and emotional beings.
Related papers
- Aligning Generalisation Between Humans and Machines
Recent advances in AI have resulted in technology that can support humans in scientific discovery and decision support but may also disrupt democracies and target individuals.
The responsible use of AI increasingly shows the need for human-AI teaming.
A crucial yet often overlooked aspect of these interactions is the different ways in which humans and machines generalise.
arXiv Detail & Related papers (2024-11-23T18:36:07Z)
- Artificial Theory of Mind and Self-Guided Social Organisation
One of the challenges artificial intelligence (AI) faces is how a collection of agents coordinate their behaviour to achieve goals that are not reachable by any single agent.
We make the case for collective intelligence in a general setting, drawing on recent work on single-neuron complexity in neural networks.
We show how our social structures are influenced by our neuro-physiology, our psychology, and our language.
arXiv Detail & Related papers (2024-11-14T04:06:26Z)
- Imagining and building wise machines: The centrality of AI metacognition
We argue that these shortcomings stem from one overarching failure: AI systems lack wisdom.
While AI research has focused on task-level strategies, metacognition is underdeveloped in AI systems.
We propose that integrating metacognitive capabilities into AI systems is crucial for enhancing their robustness, explainability, cooperation, and safety.
arXiv Detail & Related papers (2024-11-04T18:10:10Z)
- Combining Theory of Mind and Kindness for Self-Supervised Human-AI Alignment
Current AI models prioritize task optimization over safety, leading to risks of unintended harm.
We propose a novel human-inspired approach that aims to address these concerns and align competing objectives.
arXiv Detail & Related papers (2024-10-21T22:04:44Z)
- Shifting the Human-AI Relationship: Toward a Dynamic Relational Learning-Partner Model
We advocate for a shift toward viewing AI as a learning partner, akin to a student who learns from interactions with humans.
We suggest that a "third mind" emerges through collaborative human-AI relationships.
arXiv Detail & Related papers (2024-10-07T19:19:39Z)
- Managing extreme AI risks amid rapid progress
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z)
- SOTOPIA: Interactive Evaluation for Social Intelligence in Language Agents
We present SOTOPIA, an open-ended environment to simulate complex social interactions between artificial agents and humans.
In our environment, agents role-play and interact under a wide variety of scenarios; they coordinate, collaborate, exchange, and compete with each other to achieve complex social goals.
We find that GPT-4 achieves a significantly lower goal completion rate than humans and struggles to exhibit social commonsense reasoning and strategic communication skills.
arXiv Detail & Related papers (2023-10-18T02:27:01Z)
- Reflective Hybrid Intelligence for Meaningful Human Control in Decision-Support Systems
We introduce the notion of self-reflective AI systems for meaningful human control over AI systems.
We propose a framework that integrates knowledge from psychology and philosophy with formal reasoning methods and machine learning approaches.
We argue that self-reflective AI systems can lead to self-reflective hybrid systems (human + AI).
arXiv Detail & Related papers (2023-07-12T13:32:24Z)
- Fairness in AI and Its Long-Term Implications on Society
We take a closer look at AI fairness and analyze how a lack of AI fairness can deepen biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If the issues persist, they could be reinforced by interactions with other risks and have severe implications for society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z)
- AI agents for facilitating social interactions and wellbeing
We provide an overview of the mediating role of AI-augmented agents in social interactions.
We discuss opportunities and challenges of the relational approach with wellbeing AI to promote wellbeing in our societies.
arXiv Detail & Related papers (2022-02-26T04:05:23Z)
- Building Bridges: Generative Artworks to Explore AI Ethics
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can help bridge these perspectives by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z)