Interactive AI Alignment: Specification, Process, and Evaluation Alignment
- URL: http://arxiv.org/abs/2311.00710v2
- Date: Mon, 16 Sep 2024 22:54:33 GMT
- Title: Interactive AI Alignment: Specification, Process, and Evaluation Alignment
- Authors: Michael Terry, Chinmay Kulkarni, Martin Wattenberg, Lucas Dixon, Meredith Ringel Morris
- Abstract summary: Modern AI enables a high-level, declarative form of interaction.
Users describe the intended outcome they wish an AI to produce, but do not actually create the outcome themselves.
- Score: 30.599781014726823
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Modern AI enables a high-level, declarative form of interaction: Users describe the intended outcome they wish an AI to produce, but do not actually create the outcome themselves. In contrast, in traditional user interfaces, users invoke specific operations to create the desired outcome. This paper revisits the basic input-output interaction cycle in light of this declarative style of interaction, and connects concepts in AI alignment to define three objectives for interactive alignment of AI: specification alignment (aligning on what to do), process alignment (aligning on how to do it), and evaluation alignment (assisting users in verifying and understanding what was produced). Using existing systems as examples, we show how these user-centered views of AI alignment can be used descriptively, prescriptively, and as an evaluative aid.
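To make the three objectives concrete, the sketch below frames them as an evaluative checklist a designer might apply to an interactive AI system. This is a hypothetical Python illustration, not code from the paper; beyond the three objective names, the class, fields, and question texts are assumptions for illustration only.

```python
from dataclasses import dataclass, field
from enum import Enum


class AlignmentObjective(Enum):
    """The three user-centered alignment objectives described in the abstract."""
    SPECIFICATION = "aligning on what to do"
    PROCESS = "aligning on how to do it"
    EVALUATION = "helping users verify and understand what was produced"


@dataclass
class AlignmentChecklist:
    """Hypothetical evaluative aid: one designer question per objective."""
    system_name: str
    answers: dict[AlignmentObjective, str] = field(default_factory=dict)

    # Illustrative questions (assumed, not taken from the paper).
    QUESTIONS = {
        AlignmentObjective.SPECIFICATION:
            "How does the user convey the intended outcome to the AI?",
        AlignmentObjective.PROCESS:
            "How does the user steer the way the AI carries out the task?",
        AlignmentObjective.EVALUATION:
            "How does the system help the user verify and understand the result?",
    }

    def record(self, objective: AlignmentObjective, answer: str) -> None:
        """Note how the system under review addresses one objective."""
        self.answers[objective] = answer

    def gaps(self) -> list[AlignmentObjective]:
        """Objectives the design has not yet addressed."""
        return [o for o in AlignmentObjective if o not in self.answers]


# Example usage with a made-up system:
checklist = AlignmentChecklist("text-to-image editor")
checklist.record(AlignmentObjective.SPECIFICATION,
                 "Free-text prompt plus a reference image")
print([o.name for o in checklist.gaps()])  # ['PROCESS', 'EVALUATION']
```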
Related papers
- Survey of User Interface Design and Interaction Techniques in Generative AI Applications [79.55963742878684]
We aim to create a compendium of different user-interaction patterns that can be used as a reference for designers and developers alike.
We also strive to lower the entry barrier for those attempting to learn more about the design of generative AI applications.
arXiv Detail & Related papers (2024-10-28T23:10:06Z) - Constraining Participation: Affordances of Feedback Features in Interfaces to Large Language Models [49.74265453289855]
Large language models (LLMs) are now accessible to anyone with a computer, a web browser, and an internet connection via browser-based interfaces.
This paper examines the affordances of interactive feedback features in ChatGPT's interface, analysing how they shape user input and participation in iteration.
arXiv Detail & Related papers (2024-08-27T13:50:37Z) - Measuring User Understanding in Dialogue-based XAI Systems [2.4124106640519667]
The state of the art in XAI is still characterized by one-shot, non-personalized, and one-way explanations.
In this paper, we measure users' understanding in three phases by asking them to simulate the predictions of the model they are learning about.
We analyze the data to reveal patterns of how interaction differs between groups with high vs. low understanding gain.
arXiv Detail & Related papers (2024-08-13T15:17:03Z) - Interrogating AI: Characterizing Emergent Playful Interactions with ChatGPT [10.907980864371213]
This study focuses on playful interactions exhibited by users of a popular AI technology, ChatGPT.
We found that more than half (54%) of user discourse revolved around playful interactions.
We examine how these interactions can help users understand AI's agency, shape human-AI relationships, and provide insights for designing AI systems.
arXiv Detail & Related papers (2024-01-16T14:44:13Z) - An Interactive UI to Support Sensemaking over Collections of Parallel
Texts [15.401895433726558]
With a large corpus of papers, it is cognitively demanding to compare and contrast them all pairwise.
We present AVTALER, which combines people's unique skills, contextual awareness, and knowledge with the strengths of automation.
arXiv Detail & Related papers (2023-03-11T01:04:25Z) - AI Alignment Dialogues: An Interactive Approach to AI Alignment in
Support Agents [3.0731004832223796]
This paper proposes a new approach to AI alignment: alignment dialogues with which users and agents try to achieve and maintain alignment via interaction.
We argue that alignment dialogues have a number of advantages over data-driven approaches, including allowing users to directly convey higher-level concepts to the agent and making the agent more transparent and trustworthy.
arXiv Detail & Related papers (2023-01-16T13:19:53Z) - Evaluating Human-Language Model Interaction [79.33022878034627]
We develop a new framework, Human-AI Language-based Interaction Evaluation (HALIE), that defines the components of interactive systems.
We design five tasks to cover different forms of interaction: social dialogue, question answering, crossword puzzles, summarization, and metaphor generation.
We find that better non-interactive performance does not always translate to better human-LM interaction.
arXiv Detail & Related papers (2022-12-19T18:59:45Z) - User Response and Sentiment Prediction for Automatic Dialogue Evaluation [69.11124655437902]
We propose to use the sentiment of the next user utterance for turn- or dialogue-level evaluation.
Experiments show that our model outperforms existing automatic evaluation metrics on both written and spoken open-domain dialogue datasets.
arXiv Detail & Related papers (2021-11-16T22:19:17Z) - Optimizing Interactive Systems via Data-Driven Objectives [70.3578528542663]
We propose an approach that infers the objective directly from observed user interactions.
These inferences can be made regardless of prior knowledge and across different types of user behavior.
We introduce the Interactive System Optimizer (ISO), a novel algorithm that uses these inferred objectives for optimization.
arXiv Detail & Related papers (2020-06-19T20:49:14Z) - A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of the structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental model" of any AI system, so that interaction with the user can provide information on demand and come closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.