Farsight: Fostering Responsible AI Awareness During AI Application Prototyping
- URL: http://arxiv.org/abs/2402.15350v2
- Date: Tue, 2 Jul 2024 06:12:05 GMT
- Title: Farsight: Fostering Responsible AI Awareness During AI Application Prototyping
- Authors: Zijie J. Wang, Chinmay Kulkarni, Lauren Wilcox, Michael Terry, Michael Madaio
- Abstract summary: We present Farsight, a novel in situ interactive tool that helps people identify potential harms from the AI applications they are prototyping.
Based on a user's prompt, Farsight highlights news articles about relevant AI incidents and allows users to explore and edit LLM-generated use cases, stakeholders, and harms.
We report design insights from a co-design study with 10 AI prototypers and findings from a user study with 42 AI prototypers.
- Score: 32.235398722593544
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Prompt-based interfaces for Large Language Models (LLMs) have made prototyping and building AI-powered applications easier than ever before. However, identifying potential harms that may arise from AI applications remains a challenge, particularly during prompt-based prototyping. To address this, we present Farsight, a novel in situ interactive tool that helps people identify potential harms from the AI applications they are prototyping. Based on a user's prompt, Farsight highlights news articles about relevant AI incidents and allows users to explore and edit LLM-generated use cases, stakeholders, and harms. We report design insights from a co-design study with 10 AI prototypers and findings from a user study with 42 AI prototypers. After using Farsight, AI prototypers in our user study are better able to independently identify potential harms associated with a prompt and find our tool more useful and usable than existing resources. Their qualitative feedback also highlights that Farsight encourages them to focus on end-users and think beyond immediate harms. We discuss these findings and reflect on their implications for designing AI prototyping experiences that meaningfully engage with AI harms. Farsight is publicly accessible at: https://PAIR-code.github.io/farsight.
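The abstract states that Farsight surfaces news articles about relevant AI incidents based on a user's prompt, but it does not describe how that retrieval works. The sketch below is a minimal illustration of one plausible approach, a TF-IDF cosine-similarity ranking of incident headlines against the prompt; the headline list, function name, and ranking scheme are assumptions made for illustration, not Farsight's actual implementation.

```python
# Minimal sketch (assumed, not Farsight's actual method): rank AI-incident
# headlines by TF-IDF cosine similarity to a user's prompt.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Placeholder headlines for illustration only; a real tool would query an
# incident repository rather than a hard-coded list.
INCIDENT_HEADLINES = [
    "Chatbot gives harmful medical advice to teenagers",
    "Translation model produces gendered stereotypes in job ads",
    "Email summarizer leaks confidential client details",
]

def rank_incidents(prompt: str, headlines: list[str], top_k: int = 3) -> list[tuple[str, float]]:
    """Return the headlines most similar to the prompt, with similarity scores."""
    vectorizer = TfidfVectorizer(stop_words="english")
    matrix = vectorizer.fit_transform([prompt] + headlines)    # row 0 is the prompt
    scores = cosine_similarity(matrix[0], matrix[1:]).ravel()  # prompt vs. each headline
    ranked = sorted(zip(headlines, scores), key=lambda pair: pair[1], reverse=True)
    return ranked[:top_k]

if __name__ == "__main__":
    for headline, score in rank_incidents("Summarize patients' medical records", INCIDENT_HEADLINES):
        print(f"{score:.2f}  {headline}")
```

In practice, the lexical TF-IDF matching shown here would likely be replaced by a semantic embedding model, but the overall prompt-to-incident ranking flow would be similar.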
Related papers
- Survey of User Interface Design and Interaction Techniques in Generative AI Applications [79.55963742878684]
We aim to create a compendium of different user-interaction patterns that can be used as a reference for designers and developers alike.
We also strive to lower the entry barrier for those attempting to learn more about the design of generative AI applications.
arXiv Detail & Related papers (2024-10-28T23:10:06Z)
- Not Just Novelty: A Longitudinal Study on Utility and Customization of an AI Workflow [18.15979295351043]
Generative AI brings novel and impressive abilities to help people in everyday tasks.
It is uncertain how useful generative AI tools remain after the novelty wears off.
We conducted a three-week longitudinal study with 12 users to understand the familiarization and customization of generative AI tools for science communication.
arXiv Detail & Related papers (2024-02-15T11:39:11Z)
- Exploration with Principles for Diverse AI Supervision [88.61687950039662]
Training large transformers using next-token prediction has given rise to groundbreaking advancements in AI.
While this generative AI approach has produced impressive results, it heavily leans on human supervision.
This strong reliance on human oversight poses a significant hurdle to the advancement of AI innovation.
We propose a novel paradigm termed Exploratory AI (EAI) aimed at autonomously generating high-quality training data.
arXiv Detail & Related papers (2023-10-13T07:03:39Z)
- Seamful XAI: Operationalizing Seamful Design in Explainable AI [59.89011292395202]
Mistakes in AI systems are inevitable, arising from both technical limitations and sociotechnical gaps.
We propose that seamful design can foster AI explainability by revealing sociotechnical and infrastructural mismatches.
We explore this process with 43 AI practitioners and real end-users.
arXiv Detail & Related papers (2022-11-12T21:54:05Z)
- Towards Responsible AI: A Design Space Exploration of Human-Centered Artificial Intelligence User Interfaces to Investigate Fairness [4.40401067183266]
We provide a design space exploration that supports data scientists and domain experts to investigate AI fairness.
Using loan applications as an example, we held a series of workshops with loan officers and data scientists to elicit their requirements.
We instantiated these requirements into FairHIL, a UI to support human-in-the-loop fairness investigations, and describe how this UI could be generalized to other use cases.
arXiv Detail & Related papers (2022-06-01T13:08:37Z)
- Towards Involving End-users in Interactive Human-in-the-loop AI Fairness [1.889930012459365]
Ensuring fairness in artificial intelligence (AI) is important to counteract bias and discrimination in far-reaching applications.
Recent work has started to investigate how humans judge fairness and how to support machine learning (ML) experts in making their AI models fairer.
Our work explores designing interpretable and interactive human-in-the-loop interfaces that allow ordinary end-users to identify potential fairness issues.
arXiv Detail & Related papers (2022-04-22T02:24:11Z)
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow the examination and testing of AI system predictions to establish a basis for trust in the systems' decision-making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
- Structured access to AI capabilities: an emerging paradigm for safe AI deployment [0.0]
Instead of openly disseminating AI systems, developers facilitate controlled, arm's length interactions with their AI systems.
The aim is to prevent dangerous AI capabilities from being widely accessible, while preserving access to AI capabilities that can be used safely.
arXiv Detail & Related papers (2022-01-13T19:30:16Z)
- Artificial Intelligence for UAV-enabled Wireless Networks: A Survey [72.10851256475742]
Unmanned aerial vehicles (UAVs) are considered one of the promising technologies for next-generation wireless communication networks.
Artificial intelligence (AI) is advancing rapidly and has achieved notable success across many domains.
We provide a comprehensive overview of some potential applications of AI in UAV-based networks.
arXiv Detail & Related papers (2020-09-24T07:11:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences arising from its use.