Player-AI Interaction: What Neural Network Games Reveal About AI as Play
- URL: http://arxiv.org/abs/2101.06220v2
- Date: Mon, 18 Jan 2021 10:25:19 GMT
- Title: Player-AI Interaction: What Neural Network Games Reveal About AI as Play
- Authors: Jichen Zhu, Jennifer Villareale, Nithesh Javvaji, Sebastian Risi,
Mathias Löwe, Rush Weigelt, Casper Harteveld
- Abstract summary: This paper argues that games are an ideal domain for studying and experimenting with how humans interact with AI.
Through a systematic survey of neural network games, we identified the dominant interaction metaphors and AI interaction patterns.
Our work suggests that game and UX designers should consider flow to structure the learning curve of human-AI interaction.
- Score: 14.63311356668699
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The advent of artificial intelligence (AI) and machine learning (ML) brings
human-AI interaction to the forefront of HCI research. This paper argues that
games are an ideal domain for studying and experimenting with how humans
interact with AI. Through a systematic survey of neural network games (n = 38),
we identified the dominant interaction metaphors and AI interaction patterns in
these games. In addition, we applied existing human-AI interaction guidelines
to further shed light on player-AI interaction in the context of AI-infused
systems. Our core finding is that AI as play can expand current notions of
human-AI interaction, which are predominantly productivity-based. In
particular, our work suggests that game and UX designers should consider flow
to structure the learning curve of human-AI interaction, incorporate
discovery-based learning to play around with the AI and observe the
consequences, and offer users an invitation to play to explore new forms of
human-AI interaction.
Related papers
- Survey of User Interface Design and Interaction Techniques in Generative AI Applications [79.55963742878684]
We aim to create a compendium of different user-interaction patterns that can be used as a reference for designers and developers alike.
We also strive to lower the entry barrier for those attempting to learn more about the design of generative AI applications.
arXiv Detail & Related papers (2024-10-28T23:10:06Z)
- Shifting the Human-AI Relationship: Toward a Dynamic Relational Learning-Partner Model [0.0]
We advocate for a shift toward viewing AI as a learning partner, akin to a student who learns from interactions with humans.
We suggest that a "third mind" emerges through collaborative human-AI relationships.
arXiv Detail & Related papers (2024-10-07T19:19:39Z)
- Explainable Human-AI Interaction: A Planning Perspective [32.477369282996385]
AI systems need to be explainable to the humans in the loop.
We will discuss how the AI agent can use mental models to either conform to human expectations, or change those expectations through explanatory communication.
While the main focus of the book is on cooperative scenarios, we will point out how the same mental models can be used for obfuscation and deception.
arXiv Detail & Related papers (2024-05-19T22:22:21Z)
- Interrogating AI: Characterizing Emergent Playful Interactions with ChatGPT [10.907980864371213]
This study focuses on playful interactions exhibited by users of a popular AI technology, ChatGPT.
We found that more than half (54%) of user discourse revolved around playful interactions.
We examine how these interactions can help users understand AI's agency, shape human-AI relationships, and provide insights for designing AI systems.
arXiv Detail & Related papers (2024-01-16T14:44:13Z)
- Human-AI collaboration is not very collaborative yet: A taxonomy of interaction patterns in AI-assisted decision making from a systematic review [6.013543974938446]
Research on leveraging Artificial Intelligence in decision support systems has disproportionately focused on technological advancements.
A human-centered perspective attempts to alleviate this concern by designing AI solutions for seamless integration with existing processes.
arXiv Detail & Related papers (2023-10-30T17:46:38Z)
- Human-AI Coevolution [48.74579595505374]
Human-AI coevolution is a process in which humans and AI algorithms continuously influence each other.
This paper introduces human-AI coevolution as the cornerstone of a new field of study at the intersection of AI and complexity science.
arXiv Detail & Related papers (2023-06-23T18:10:54Z)
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
- A User-Centred Framework for Explainable Artificial Intelligence in Human-Robot Interaction [70.11080854486953]
We propose a user-centred framework for XAI that focuses on its social-interactive aspect.
The framework aims to provide a structure for interactive XAI solutions intended for non-expert users.
arXiv Detail & Related papers (2021-09-27T09:56:23Z)
- Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can help address this challenge by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z)
- eXtended Artificial Intelligence: New Prospects of Human-AI Interaction Research [8.315174426992087]
The article provides a theoretical treatment and model of human-AI interaction based on an XR-AI continuum.
It shows why the combination of XR and AI fruitfully contributes to a valid and systematic investigation of human-AI interactions and interfaces.
The first experiment reveals an interesting gender effect in human-robot interaction, while the second experiment reveals an Eliza effect of a recommender system.
arXiv Detail & Related papers (2021-03-27T22:12:06Z)
- Adversarial Interaction Attack: Fooling AI to Misinterpret Human Intentions [46.87576410532481]
We show that, despite their current success, deep learning-based AI systems can be easily fooled by subtle adversarial noise (a generic illustration follows this entry).
Based on a case study of skeleton-based human interactions, we propose a novel adversarial attack on interactions.
Our study highlights potential risks in the interaction loop with AI and humans, which need to be carefully addressed when deploying AI systems in safety-critical applications.
arXiv Detail & Related papers (2021-01-17T16:23:20Z)
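The entry above refers to fooling classifiers with subtle adversarial noise. As a generic illustration only, and not the skeleton-interaction attack proposed in that paper, here is a minimal FGSM-style sketch in PyTorch; the classifier `model`, inputs `x`, labels `y`, and the helper name `fgsm_perturb` are all assumptions made for the example.

```python
# Minimal FGSM-style sketch: add a small signed-gradient perturbation to an input
# so that a differentiable classifier is more likely to misclassify it.
# Illustrative only; not the interaction attack from the paper above.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.01):
    """Return x plus an epsilon-bounded perturbation that increases the classification loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)   # loss with respect to the true labels
    loss.backward()                           # gradient of the loss with respect to the input
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

# Usage sketch: compare predictions before and after the perturbation.
# x_adv = fgsm_perturb(model, x, y)
# print(model(x).argmax(dim=1), model(x_adv).argmax(dim=1))
```

The perturbation is bounded per element by epsilon yet can flip the model's prediction, which is the sense in which the noise is "subtle".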
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of any of the information presented and is not responsible for any consequences arising from its use.