Unpacking Human-AI interactions: From interaction primitives to a design
space
- URL: http://arxiv.org/abs/2401.05115v1
- Date: Wed, 10 Jan 2024 12:27:18 GMT
- Title: Unpacking Human-AI interactions: From interaction primitives to a design
space
- Authors: Kostas Tsiakas and Dave Murray-Rust
- Abstract summary: We show how these primitives can be combined into a set of interaction patterns.
The motivation behind this is to provide a compact generalisation of existing practices.
We discuss how this approach can be used towards a design space for Human-AI interactions.
- Score: 6.778055454461106
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper aims to develop a semi-formal design space for Human-AI
interactions, by building a set of interaction primitives which specify the
communication between users and AI systems during their interaction. We show
how these primitives can be combined into a set of interaction patterns which
can provide an abstract specification for exchanging messages between humans
and AI/ML models to carry out purposeful interactions. The motivation behind
this is twofold: firstly, to provide a compact generalisation of existing
practices, that highlights the similarities and differences between systems in
terms of their interaction behaviours; and secondly, to support the creation of
new systems, in particular by opening the space of possibilities for
interactions with models. We present a short literature review on frameworks,
guidelines and taxonomies related to the design and implementation of HAI
interactions, including human-in-the-loop, explainable AI, as well as hybrid
intelligence and collaborative learning approaches. From the literature review,
we define a vocabulary for describing information exchanges in terms of
providing and requesting particular model-specific data types. Based on this
vocabulary, a message passing model for interactions between humans and models
is presented, which we demonstrate can account for existing systems and
approaches. Finally, we build this into design patterns as mid-level constructs
that capture common interactional structures. We discuss how this approach can
be used towards a design space for Human-AI interactions that creates new
possibilities for designs as well as keeping track of implementation issues and
concerns.
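As a rough illustration of the idea (a minimal sketch, not the authors' actual vocabulary), the snippet below models interaction primitives as messages in which a human or a model either provides or requests a particular data type; the verb names, the data types, and the `explain_on_demand` pattern are illustrative assumptions rather than anything specified in the paper.

```python
# A minimal sketch, NOT the authors' implementation: it assumes a hypothetical
# "provide"/"request" verb pair and illustrative data types (input, prediction,
# explanation, feedback) to show how interaction primitives could be expressed
# as messages passed between a human and a model.
from dataclasses import dataclass
from enum import Enum
from typing import Any


class Verb(Enum):
    PROVIDE = "provide"   # the sender supplies a piece of information
    REQUEST = "request"   # the sender asks the other party for information


class DataType(Enum):
    INPUT = "input"              # an instance the model should act on
    PREDICTION = "prediction"    # the model's output for an input
    EXPLANATION = "explanation"  # a rationale attached to a prediction
    FEEDBACK = "feedback"        # e.g. a corrected label from the user


@dataclass
class Message:
    sender: str          # "human" or "model"
    verb: Verb
    data_type: DataType
    payload: Any = None


def explain_on_demand(instance: Any) -> list:
    """Illustrative interaction pattern: the user provides an input, the model
    provides a prediction, and the user then requests an explanation."""
    return [
        Message("human", Verb.PROVIDE, DataType.INPUT, instance),
        Message("model", Verb.PROVIDE, DataType.PREDICTION, "class A"),
        Message("human", Verb.REQUEST, DataType.EXPLANATION),
        Message("model", Verb.PROVIDE, DataType.EXPLANATION, "feature x > 0.7"),
    ]
```

In this reading, a pattern such as "explanation on demand" is a reusable template over message sequences, which is roughly the role the abstract assigns to its mid-level design patterns.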
Related papers
- Relation Learning and Aggregate-attention for Multi-person Motion Prediction [13.052342503276936]
Multi-person motion prediction considers not just skeleton structures or human trajectories but also the interactions between individuals.
Previous methods often overlook that the joint relations within an individual (intra-relation) and the interactions among groups (inter-relation) are distinct types of representations.
We introduce a new collaborative framework for multi-person motion prediction that explicitly models these relations.
arXiv Detail & Related papers (2024-11-06T07:48:30Z)
- Survey of User Interface Design and Interaction Techniques in Generative AI Applications [79.55963742878684]
We aim to create a compendium of different user-interaction patterns that can be used as a reference for designers and developers alike.
We also strive to lower the entry barrier for those attempting to learn more about the design of generative AI applications.
arXiv Detail & Related papers (2024-10-28T23:10:06Z)
- Visual-Geometric Collaborative Guidance for Affordance Learning [63.038406948791454]
We propose a visual-geometric collaborative guided affordance learning network that incorporates visual and geometric cues.
Our method outperforms representative models in terms of both objective metrics and visual quality.
arXiv Detail & Related papers (2024-10-15T07:35:51Z)
- Accounting for AI and Users Shaping One Another: The Role of Mathematical Models [17.89344451611069]
We argue for the development of formal interaction models which mathematically specify how AI and users shape one another.
We call for the community to leverage formal interaction models when designing, evaluating, or auditing any AI system which interacts with users.
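As a minimal illustration of what such a formal interaction model could look like (an assumption for exposition, not taken from that paper), the user's state and the AI's state can be written as a coupled dynamical system:

```latex
% Hypothetical sketch: u_t is the user's state, a_t the AI's state (e.g. its policy);
% at each step, the two update in response to one another.
u_{t+1} = f(u_t, a_t), \qquad a_{t+1} = g(a_t, u_t)
```

Questions about how AI and users shape one another then become questions about the trajectories and fixed points of the pair (f, g).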
arXiv Detail & Related papers (2024-04-18T17:49:02Z)
- THOR: Text to Human-Object Interaction Diffusion via Relation Intervention [51.02435289160616]
We propose a novel Text-guided Human-Object Interaction diffusion model with Relation Intervention (THOR).
In each diffusion step, we initiate text-guided human and object motion and then leverage human-object relations to intervene in object motion.
We construct Text-BEHAVE, a Text2HOI dataset that seamlessly integrates textual descriptions with the currently largest publicly available 3D HOI dataset.
arXiv Detail & Related papers (2024-03-17T13:17:25Z)
- Foundational Models Defining a New Era in Vision: A Survey and Outlook [151.49434496615427]
Vision systems that can see and reason about the compositional nature of visual scenes are fundamental to understanding our world.
The models learned to bridge the gap between such modalities coupled with large-scale training data facilitate contextual reasoning, generalization, and prompt capabilities at test time.
The output of such models can be modified through human-provided prompts without retraining, e.g., segmenting a particular object by providing a bounding box, having interactive dialogues by asking questions about an image or video scene, or manipulating a robot's behavior through language instructions.
arXiv Detail & Related papers (2023-07-25T17:59:18Z)
- Interactive Natural Language Processing [67.87925315773924]
Interactive Natural Language Processing (iNLP) has emerged as a novel paradigm within the field of NLP.
This paper offers a comprehensive survey of iNLP, starting by proposing a unified definition and framework of the concept.
arXiv Detail & Related papers (2023-05-22T17:18:29Z)
- Collective Relational Inference for learning heterogeneous interactions [8.215734914005845]
We propose a novel probabilistic method for relational inference, which possesses two distinctive characteristics compared to existing methods.
We evaluate the proposed methodology across several benchmark datasets and demonstrate that it outperforms existing methods in accurately inferring interaction types.
Overall, the proposed model is data-efficient and generalizable to large systems when trained on smaller ones.
arXiv Detail & Related papers (2023-04-30T19:45:04Z)
- A Model for Intelligible Interaction Between Agents That Predict and Explain [1.335664823620186]
We formalise the interaction model by taking agents to be automata with some special characteristics.
We define One- and Two-Way Intelligibility as properties that emerge at run-time by execution of the protocol.
We demonstrate using the formal model to: (a) identify instances of One- and Two-Way Intelligibility in literature reports on humans interacting with ML systems providing logic-based explanations, as is done in Inductive Logic Programming (ILP); and (b) map interactions between humans and machines in an elaborate natural-language based dialogue-model to One- or Two-Way Intelligibility.
arXiv Detail & Related papers (2023-01-04T20:48:22Z)
- Cascaded Human-Object Interaction Recognition [175.60439054047043]
We introduce a cascade architecture for a multi-stage, coarse-to-fine HOI understanding.
At each stage, an instance localization network progressively refines HOI proposals and feeds them into an interaction recognition network.
With our carefully-designed human-centric relation features, these two modules work collaboratively towards effective interaction understanding.
arXiv Detail & Related papers (2020-03-09T17:05:04Z)
This list is automatically generated from the titles and abstracts of the papers on this site.