Exploring a Behavioral Model of "Positive Friction" in Human-AI
Interaction
- URL: http://arxiv.org/abs/2402.09683v1
- Date: Thu, 15 Feb 2024 03:39:55 GMT
- Title: Exploring a Behavioral Model of "Positive Friction" in Human-AI
Interaction
- Authors: Zeya Chen, Ruth Schmidt
- Abstract summary: This paper first proposes a "positive friction" model that can help characterize how friction is currently beneficial in user and developer experiences with AI.
It then explores this model in the context of AI users and developers by proposing the value of taking a hybrid "AI+human" lens.
- Score: 1.8673970128645236
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Designing seamless, frictionless user experiences has long been a dominant
trend in both applied behavioral science and artificial intelligence (AI), in
which the goal of making desirable actions easy and efficient informs efforts
to minimize friction in user experiences. However, in some settings, friction
can be genuinely beneficial, such as the insertion of deliberate delays to
increase reflection, preventing individuals from resorting to automatic or
biased behaviors, and enhancing opportunities for unexpected discoveries. More
recently, the popularization and availability of AI on a widespread scale has
only increased the need to examine how friction can help or hinder users of AI;
it also suggests a need to consider how positive friction can benefit AI
practitioners, both during development processes (e.g., working with diverse
teams) and to inform how AI is designed into offerings. This paper first
proposes a "positive friction" model that can help characterize how friction is
currently beneficial in user and developer experiences with AI, diagnose the
potential need for friction where it may not yet exist in these contexts, and
inform how positive friction can be used to generate solutions, especially as
advances in AI continue to progress and new opportunities emerge. It then
explores this model in the context of AI users and developers by proposing the
value of taking a hybrid "AI+human" lens, and concludes by suggesting questions
for further exploration.
Related papers
- Better Slow than Sorry: Introducing Positive Friction for Reliable Dialogue Systems [36.88021317372274]
Frictionless dialogue risks fostering uncritical reliance on AI outputs, which can obscure implicit assumptions.
We propose integrating positive friction into conversational AI, which promotes user reflection on goals, critical thinking on system response, and subsequent re-conditioning of AI systems.
arXiv Detail & Related papers (2025-01-28T23:50:02Z)
- Survey of User Interface Design and Interaction Techniques in Generative AI Applications [79.55963742878684]
We aim to create a compendium of different user-interaction patterns that can be used as a reference for designers and developers alike.
We also strive to lower the entry barrier for those attempting to learn more about the design of generative AI applications.
arXiv Detail & Related papers (2024-10-28T23:10:06Z)
- Modulating Language Model Experiences through Frictions [56.17593192325438]
Over-consumption of language model outputs risks propagating unchecked errors in the short-term and damaging human capabilities for critical thinking in the long-term.
We propose selective frictions for language model experiences, inspired by behavioral science interventions, to dampen misuse.
arXiv Detail & Related papers (2024-06-24T16:31:11Z)
- Not Just Novelty: A Longitudinal Study on Utility and Customization of an AI Workflow [18.15979295351043]
Generative AI brings novel and impressive abilities to help people in everyday tasks.
It is uncertain how useful generative AI is after the novelty wears off.
We conducted a three-week longitudinal study with 12 users to understand the familiarization and customization of generative AI tools for science communication.
arXiv Detail & Related papers (2024-02-15T11:39:11Z)
- Social Interaction-Aware Dynamical Models and Decision Making for Autonomous Vehicles [20.123965317836106]
Interaction-aware Autonomous Driving (IAAD) is a rapidly growing field of research.
It focuses on the development of autonomous vehicles that are capable of interacting safely and efficiently with human road users.
This is a challenging task, as it requires the autonomous vehicle to be able to understand and predict the behaviour of human road users.
arXiv Detail & Related papers (2023-10-29T03:43:50Z)
- The Responsible Development of Automated Student Feedback with Generative AI [6.008616775722921]
Recent advancements in AI, particularly with large language models (LLMs), present new opportunities to deliver scalable, repeatable, and instant feedback.
However, implementing these technologies also introduces a host of ethical considerations that must be thoughtfully addressed.
One of the core advantages of AI systems is their ability to automate routine and mundane tasks, potentially freeing up human educators for more nuanced work.
However, the ease of automation risks a "tyranny of the majority", where the diverse needs of minority or unique learners are overlooked.
arXiv Detail & Related papers (2023-08-29T14:29:57Z)
- Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how lack of AI fairness can lead to deepening of biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If the issues persist, they could be reinforced by interactions with other risks and have severe implications on society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z)
- The Role of AI in Drug Discovery: Challenges, Opportunities, and Strategies [97.5153823429076]
The benefits, challenges and drawbacks of AI in this field are reviewed.
The use of data augmentation, explainable AI, and the integration of AI with traditional experimental methods are also discussed.
arXiv Detail & Related papers (2022-12-08T23:23:39Z)
- A Rubric for Human-like Agents and NeuroAI [2.749726993052939]
Contributed research ranges widely from mimicking behaviour to testing machine learning methods.
It cannot be assumed nor expected that progress on one of these three goals will automatically translate to progress in others.
This is clarified using examples of weak and strong neuroAI and human-like agents.
arXiv Detail & Related papers (2022-12-08T16:59:40Z)
- From Psychological Curiosity to Artificial Curiosity: Curiosity-Driven Learning in Artificial Intelligence Tasks [56.20123080771364]
Psychological curiosity plays a significant role in human intelligence to enhance learning through exploration and information acquisition.
In the Artificial Intelligence (AI) community, artificial curiosity provides a natural intrinsic motivation for efficient learning.
Curiosity-driven learning (CDL), in which agents are self-motivated to learn novel knowledge, has become increasingly popular.
arXiv Detail & Related papers (2022-01-20T17:07:03Z)
- Counterfactual Explanations as Interventions in Latent Space [62.997667081978825]
Counterfactual explanations aim to provide end users with a set of features that need to be changed in order to achieve a desired outcome.
Current approaches rarely take into account the feasibility of actions needed to achieve the proposed explanations.
We present Counterfactual Explanations as Interventions in Latent Space (CEILS), a methodology to generate counterfactual explanations.
arXiv Detail & Related papers (2021-06-14T20:48:48Z)
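To make the idea of counterfactual explanations concrete, here is a minimal illustrative sketch. It is not the CEILS method: CEILS intervenes in a latent space derived from the data's causal structure, whereas this toy version simply perturbs input features of a small linear classifier until the predicted label flips. All names and values are hypothetical.

```python
def predict(x, weights, bias):
    """Linear classifier: returns 1 if the score is positive, else 0."""
    score = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if score > 0 else 0

def naive_counterfactual(x, weights, bias, target=1, step=0.5, max_iters=100):
    """Greedily nudge the most influential feature until the label flips."""
    x = list(x)
    for _ in range(max_iters):
        if predict(x, weights, bias) == target:
            return x  # counterfactual found
        # Pick the feature with the largest weight magnitude and move it
        # in the direction that pushes the score toward the target class.
        i = max(range(len(weights)), key=lambda j: abs(weights[j]))
        direction = 1.0 if (weights[i] > 0) == (target == 1) else -1.0
        x[i] += direction * step
    return None  # no counterfactual found within the budget

# Example: an input classified as 0 asks what must change to be classified as 1.
weights, bias = [2.0, -1.0], -3.0
original = [0.5, 1.0]  # predicted class 0
cf = naive_counterfactual(original, weights, bias, target=1)
```

The gap this sketch ignores, and which the abstract above highlights, is feasibility: a real system must restrict the search to changes the user can actually make, which is what motivates intervening in a learned latent space instead of raw features.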
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences arising from its use.