Accounting for AI and Users Shaping One Another: The Role of Mathematical Models
- URL: http://arxiv.org/abs/2404.12366v1
- Date: Thu, 18 Apr 2024 17:49:02 GMT
- Title: Accounting for AI and Users Shaping One Another: The Role of Mathematical Models
- Authors: Sarah Dean, Evan Dong, Meena Jagadeesan, Liu Leqi
- Abstract summary: We argue for the development of formal interaction models which mathematically specify how AI and users shape one another.
We call for the community to leverage formal interaction models when designing, evaluating, or auditing any AI system which interacts with users.
- Score: 17.89344451611069
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As AI systems enter into a growing number of societal domains, these systems increasingly shape and are shaped by user preferences, opinions, and behaviors. However, the design of AI systems rarely accounts for how AI and users shape one another. In this position paper, we argue for the development of formal interaction models which mathematically specify how AI and users shape one another. Formal interaction models can be leveraged to (1) specify interactions for implementation, (2) monitor interactions through empirical analysis, (3) anticipate societal impacts via counterfactual analysis, and (4) control societal impacts via interventions. The design space of formal interaction models is vast, and model design requires careful consideration of factors such as style, granularity, mathematical complexity, and measurability. Using content recommender systems as a case study, we critically examine the nascent literature of formal interaction models with respect to these use-cases and design axes. More broadly, we call for the community to leverage formal interaction models when designing, evaluating, or auditing any AI system which interacts with users.
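To make the abstract's use-cases concrete, below is a minimal sketch (in Python) of one possible formal interaction model for a content recommender. The linear preference-drift update, the drift rate GAMMA, and the greedy policy are illustrative assumptions rather than the paper's specification; the point is that once the AI-user loop is written down, it can be implemented (use-case 1), rolled out for counterfactual analysis (use-case 3), and intervened on (use-case 4).

```python
# A minimal sketch of a formal interaction model for a content
# recommender. The dynamics below (linear preference drift toward
# consumed items) are an assumption made for illustration, not the
# model proposed in the paper.
import numpy as np

rng = np.random.default_rng(0)

D = 5                              # latent topic dimension (assumed)
GAMMA = 0.05                       # drift rate: how strongly exposure shapes the user (assumed)
ITEMS = rng.normal(size=(50, D))   # fixed item embeddings (assumed)


def recommend(user_pref: np.ndarray) -> np.ndarray:
    """AI side of the loop: greedily pick the item whose embedding
    best matches the user's current preference vector."""
    return ITEMS[np.argmax(ITEMS @ user_pref)]


def update_preference(user_pref: np.ndarray, item: np.ndarray) -> np.ndarray:
    """User side of the loop: preferences drift toward consumed content
    (a simple linear influence model) and are re-normalized."""
    drifted = (1 - GAMMA) * user_pref + GAMMA * item
    return drifted / np.linalg.norm(drifted)


def simulate(user_pref: np.ndarray, steps: int = 100) -> np.ndarray:
    """Roll out the closed loop: the AI shapes the user, whose shifted
    preferences in turn shape what the AI recommends next."""
    for _ in range(steps):
        user_pref = update_preference(user_pref, recommend(user_pref))
    return user_pref


initial = rng.normal(size=D)
initial /= np.linalg.norm(initial)
final = simulate(initial)
# Counterfactual question (use-case 3): how far did the system move the user?
print("preference shift:", np.linalg.norm(final - initial))
```

Re-running the rollout with GAMMA set to zero, or with an exploratory policy in place of the greedy one, is a crude instance of use-case (4): intervening on the interaction to control its downstream effect on users.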
Related papers
- Survey of User Interface Design and Interaction Techniques in Generative AI Applications [79.55963742878684]
We aim to create a compendium of different user-interaction patterns that can be used as a reference for designers and developers alike.
We also strive to lower the entry barrier for those attempting to learn more about the design of generative AI applications.
arXiv Detail & Related papers (2024-10-28T23:10:06Z) - Characterizing and modeling harms from interactions with design patterns in AI interfaces [0.19116784879310028]
We argue that design features of interfaces with adaptive AI systems can have cascading impacts, driven by feedback loops.
We propose Design-Enhanced Control of AI systems (DECAI) to structure and facilitate impact assessments of AI interface designs; a generic feedback-loop sketch appears after this list.
arXiv Detail & Related papers (2024-04-17T13:30:45Z) - Unpacking Human-AI interactions: From interaction primitives to a design space [6.778055454461106]
We show how these primitives can be combined into a set of interaction patterns.
The motivation behind this is to provide a compact generalisation of existing practices.
We discuss how this approach can be used towards a design space for Human-AI interactions.
arXiv Detail & Related papers (2024-01-10T12:27:18Z) - Interactive Autonomous Navigation with Internal State Inference and Interactivity Estimation [58.21683603243387]
We propose three auxiliary tasks with relational-temporal reasoning and integrate them into the standard Deep Learning framework.
These auxiliary tasks provide additional supervision signals to infer the behavior patterns of other interactive agents.
Our approach achieves robust and state-of-the-art performance in terms of standard evaluation metrics.
arXiv Detail & Related papers (2023-11-27T18:57:42Z) - Foundation Models for Decision Making: Problems, Methods, and Opportunities [124.79381732197649]
Foundation models pretrained on diverse data at scale have demonstrated extraordinary capabilities in a wide range of vision and language tasks.
New paradigms are emerging for training foundation models to interact with other agents and perform long-term reasoning.
Research at the intersection of foundation models and decision making holds tremendous promise for creating powerful new systems.
arXiv Detail & Related papers (2023-03-07T18:44:07Z) - Are Neural Topic Models Broken? [81.15470302729638]
We study the relationship between automated and human evaluation of topic models.
We find that neural topic models fare worse in both respects compared to an established classical method.
arXiv Detail & Related papers (2022-10-28T14:38:50Z) - Mimetic Models: Ethical Implications of AI that Acts Like You [5.843033621853535]
An emerging theme in artificial intelligence research is the creation of models to simulate the decisions and behavior of specific people.
We develop a framework for characterizing the ethical and social issues raised by their growing availability.
arXiv Detail & Related papers (2022-07-19T16:41:36Z) - MultiViz: An Analysis Benchmark for Visualizing and Understanding Multimodal Models [103.9987158554515]
MultiViz is a method for analyzing the behavior of multimodal models by scaffolding the problem of interpretability into 4 stages.
We show that the complementary stages in MultiViz together enable users to simulate model predictions, assign interpretable concepts to features, perform error analysis on model misclassifications, and use insights from error analysis to debug models.
arXiv Detail & Related papers (2022-06-30T18:42:06Z) - Interactive Model Cards: A Human-Centered Approach to Model Documentation [20.880991026743498]
Deep learning models for natural language processing are increasingly adopted and deployed by analysts without formal training in NLP or machine learning.
The documentation intended to convey the model's details and appropriate use is tailored primarily to individuals with ML or NLP expertise.
We conduct a design inquiry into interactive model cards, which augment traditionally static model cards with affordances for exploring model documentation and interacting with the models themselves.
arXiv Detail & Related papers (2022-05-05T19:19:28Z) - DIME: Fine-grained Interpretations of Multimodal Models via Disentangled Local Explanations [119.1953397679783]
We focus on advancing the state-of-the-art in interpreting multimodal models.
Our proposed approach, DIME, enables accurate and fine-grained analysis of multimodal models.
arXiv Detail & Related papers (2022-03-03T20:52:47Z) - Models we Can Trust: Toward a Systematic Discipline of (Agent-Based) Model Interpretation and Validation [0.0]
We advocate the development of a discipline of interacting with and extracting information from models.
We outline some directions for the development of such a discipline.
arXiv Detail & Related papers (2021-02-23T10:52:22Z)
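Several of the papers above, notably the DECAI entry, concern feedback loops between interface design and user behavior. The sketch below is a generic, heavily simplified illustration of such a loop, not the DECAI method itself; AUTOPLAY and LEARNING_RATE are hypothetical parameters introduced only for this example.

```python
# A generic interface feedback loop (illustrative only): a design knob
# inflates consumption, logged engagement updates the ranking signal,
# and exposure to the promoted content drifts upward round after round.

AUTOPLAY = 0.3        # hypothetical interface design knob in [0, 1]
LEARNING_RATE = 0.1   # how fast logged engagement updates the ranking signal


def loop_step(exposure: float, signal: float) -> tuple[float, float]:
    """One pass around the loop: the design feature inflates consumption,
    the engagement signal drifts toward it, and the ranker re-weights exposure."""
    consumed = exposure * (1 + AUTOPLAY)
    signal += LEARNING_RATE * (consumed - signal)
    exposure = min(1.0, 0.5 + 0.5 * signal)
    return exposure, signal


exposure, signal = 0.5, 0.5
for _ in range(20):
    exposure, signal = loop_step(exposure, signal)
print(f"exposure after 20 rounds: {exposure:.2f}")  # saturates as the loop closes
```

Even a toy loop like this makes the cascading-impact argument concrete: the interface parameter, not the ranking model alone, determines where the closed loop settles.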