Interactive AI and Human Behavior: Challenges and Pathways for AI Governance
- URL: http://arxiv.org/abs/2508.16608v1
- Date: Tue, 12 Aug 2025 19:15:35 GMT
- Title: Interactive AI and Human Behavior: Challenges and Pathways for AI Governance
- Authors: Yulu Pi, Cagatay Turkay, Daniel Bogiatzis-Gibbons
- Abstract summary: Generative AI systems increasingly engage in long-term, personal, and relational interactions. These Interactive AI systems adapt to users over time, build ongoing relationships, and can even take proactive actions on behalf of users. This new paradigm requires us to rethink how such human-AI interactions can be studied effectively to inform governance and policy development.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As Generative AI systems increasingly engage in long-term, personal, and relational interactions, human-AI engagements are becoming significantly more complex, making them more challenging to understand and govern. These Interactive AI systems adapt to users over time, build ongoing relationships, and can even take proactive actions on behalf of users. This new paradigm requires us to rethink how such human-AI interactions can be studied effectively to inform governance and policy development. In this paper, we draw on insights from a collaborative interdisciplinary workshop with policymakers, behavioral scientists, Human-Computer Interaction researchers, and civil society practitioners to identify challenges and methodological opportunities arising within new forms of human-AI interactions. Based on these insights, we discuss an outcome-focused regulatory approach that integrates behavioral insights to address both the risks and benefits of emerging human-AI relationships. In particular, we emphasize the need for new methods to study the fluid, dynamic, and context-dependent nature of these interactions. We provide practical recommendations for developing human-centric AI governance, informed by behavioral insights, that can respond to the complexities of Interactive AI systems.
Related papers
- Human-AI Interaction Alignment: Designing, Evaluating, and Evolving Value-Centered AI For Reciprocal Human-AI Futures [27.995784716141767]
The rapid integration of generative AI into everyday life underscores the need to move beyond unidirectional alignment models. This workshop focuses on bidirectional human-AI alignment, a dynamic, reciprocal process where humans and AI co-adapt through interaction, evaluation, and value-centered design.
arXiv Detail & Related papers (2025-12-25T07:45:38Z) - Classifying Epistemic Relationships in Human-AI Interaction: An Exploratory Approach [0.6906005491572401]
This study examines how users form relationships with AI: how they assess, trust, and collaborate with it in research and teaching contexts. Based on 31 interviews with academics across disciplines, we developed a five-part codebook and identified five relationship types.
arXiv Detail & Related papers (2025-08-02T23:41:28Z) - Aligning Generalisation Between Humans and Machines [74.120848518198]
AI technology can support humans in scientific discovery and decision-making, but may also disrupt democracies and target individuals. The responsible use of AI and its participation in human-AI teams increasingly shows the need for AI alignment. A crucial yet often overlooked aspect of these interactions is the different ways in which humans and machines generalise.
arXiv Detail & Related papers (2024-11-23T18:36:07Z) - How Performance Pressure Influences AI-Assisted Decision Making [52.997197698288936]
We show how pressure and explainable AI (XAI) techniques interact with AI advice-taking behavior. Our results show complex interaction effects, with different combinations of pressure and XAI techniques either improving or worsening AI advice-taking behavior.
arXiv Detail & Related papers (2024-10-21T22:39:52Z) - Position: Towards Bidirectional Human-AI Alignment [109.57781720848669]
We argue that the research community should explicitly define and critically reflect on "alignment" to account for the bidirectional and dynamic relationship between humans and AI. We introduce the Bidirectional Human-AI Alignment framework, which not only incorporates traditional efforts to align AI with human values but also introduces the critical, underexplored dimension of aligning humans with AI.
arXiv Detail & Related papers (2024-06-13T16:03:25Z) - Towards interactive evaluations for interaction harms in human-AI systems [8.989911701384788]
We propose a shift towards evaluation based on interactional ethics, which focuses on interaction harms. First, we discuss the limitations of current evaluation methods, which (1) are static, (2) assume a universal user experience, and (3) have limited construct validity. We then present practical principles for designing interactive evaluations, including ecologically valid interaction scenarios, human impact metrics, and diverse human participation approaches.
arXiv Detail & Related papers (2024-05-17T08:49:34Z) - Now, Later, and Lasting: Ten Priorities for AI Research, Policy, and Practice [63.20307830884542]
The next several decades may well be a turning point for humanity, comparable to the industrial revolution.
Launched a decade ago, the project is committed to a perpetual series of studies by multidisciplinary experts.
We offer ten recommendations for action that collectively address both the short- and long-term potential impacts of AI technologies.
arXiv Detail & Related papers (2024-04-06T22:18:31Z) - Human-AI collaboration is not very collaborative yet: A taxonomy of interaction patterns in AI-assisted decision making from a systematic review [6.013543974938446]
Research on leveraging Artificial Intelligence in decision support systems has disproportionately focused on technological advancements.
A human-centered perspective attempts to alleviate this concern by designing AI solutions for seamless integration with existing processes.
arXiv Detail & Related papers (2023-10-30T17:46:38Z) - Human-AI Coevolution [48.74579595505374]
Coevolution AI is a process in which humans and AI algorithms continuously influence each other.
This paper introduces Coevolution AI as the cornerstone for a new field of study at the intersection between AI and complexity science.
arXiv Detail & Related papers (2023-06-23T18:10:54Z) - A Mental-Model Centric Landscape of Human-AI Symbiosis [31.14516396625931]
We introduce a significantly more general version of the human-aware AI interaction scheme, called generalized human-aware interaction (GHAI).
We will see how this new framework allows us to capture the various works done in the space of human-AI interaction and identify the fundamental behavioral patterns supported by these works.
arXiv Detail & Related papers (2022-02-18T22:08:08Z) - Adversarial Interaction Attack: Fooling AI to Misinterpret Human Intentions [46.87576410532481]
We show that, despite their current widespread success, deep learning based AI systems can be easily fooled by subtle adversarial noise.
Based on a case study of skeleton-based human interactions, we propose a novel adversarial attack on interactions.
Our study highlights potential risks in the interaction loop with AI and humans, which need to be carefully addressed when deploying AI systems in safety-critical applications.
arXiv Detail & Related papers (2021-01-17T16:23:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences arising from its use.