Investigating and Designing for Trust in AI-powered Code Generation Tools
- URL: http://arxiv.org/abs/2305.11248v2
- Date: Tue, 28 May 2024 00:22:00 GMT
- Title: Investigating and Designing for Trust in AI-powered Code Generation Tools
- Authors: Ruotong Wang, Ruijia Cheng, Denae Ford, Thomas Zimmermann
- Abstract summary: We interviewed developers to understand their challenges in building appropriate trust in AI code generation tools.
We conducted a design probe study to explore design concepts that support developers' trust-building process.
These findings inform our design recommendations for fostering appropriate trust in AI-powered code generation tools.
- Score: 15.155301866886647
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As AI-powered code generation tools such as GitHub Copilot become popular, it is crucial to understand software developers' trust in AI tools -- a key factor for tool adoption and responsible usage. However, we know little about how developers build trust with AI, nor do we understand how to design the interface of generative AI systems to facilitate their appropriate levels of trust. In this paper, we describe findings from a two-stage qualitative investigation. We first interviewed 17 developers to contextualize their notions of trust and understand their challenges in building appropriate trust in AI code generation tools. We surfaced three main challenges -- including building appropriate expectations, configuring AI tools, and validating AI suggestions. To address these challenges, we conducted a design probe study in the second stage to explore design concepts that support developers' trust-building process by 1) communicating AI performance to help users set proper expectations, 2) allowing users to configure AI by setting and adjusting preferences, and 3) offering indicators of model mechanism to support evaluation of AI suggestions. We gathered developers' feedback on how these design concepts can help them build appropriate trust in AI-powered code generation tools, as well as potential risks in design. These findings inform our proposed design recommendations on how to design for trust in AI-powered code generation tools.
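To make the three design concepts concrete, below is a minimal hypothetical Python sketch (the `Suggestion` fields, the threshold parameter, and the rendering are our own illustration, not the paper's prototype or any real Copilot API): a suggestion carries a model-reported confidence, the user sets a confidence preference, and a short cue shows what context informed the suggestion.

```python
# Hypothetical sketch (not from the paper): one way a completion UI could
# surface the three trust-support concepts -- communicating performance,
# honoring user-set preferences, and offering evaluation cues.
from dataclasses import dataclass

@dataclass
class Suggestion:
    code: str
    confidence: float         # model-reported probability, 0.0-1.0
    context_used: list[str]   # cue: which files/symbols informed the suggestion

def present(suggestion: Suggestion, min_confidence: float = 0.7) -> None:
    """Show a suggestion only if it clears the user's confidence preference."""
    if suggestion.confidence < min_confidence:
        print(f"(suppressed: confidence {suggestion.confidence:.0%} "
              f"below your {min_confidence:.0%} threshold)")
        return
    print(f"[{suggestion.confidence:.0%} confident, "
          f"based on {', '.join(suggestion.context_used)}]")
    print(suggestion.code)

# Example usage
present(Suggestion("def add(a, b):\n    return a + b",
                   confidence=0.92, context_used=["math_utils.py"]))
```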
Related papers
- Engineering Trustworthy AI: A Developer Guide for Empirical Risk Minimization [53.80919781981027]
Key requirements for trustworthy AI can be translated into design choices for the components of empirical risk minimization.
We hope to provide actionable guidance for building AI systems that meet emerging standards for trustworthiness of AI.
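For background, the objective this entry refers to is the standard empirical risk minimization problem; in textbook notation (not necessarily the paper's), the components that such design choices target are the hypothesis class, the loss, and the regularizer:

```latex
% Textbook ERM objective (standard notation; the paper's exact formulation may differ):
% choose f from hypothesis class F to minimize average loss plus a regularization term.
\hat{f} \;=\; \operatorname*{arg\,min}_{f \in \mathcal{F}}
  \frac{1}{n} \sum_{i=1}^{n} \ell\bigl(f(x_i),\, y_i\bigr) \;+\; \lambda\,\Omega(f)
```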
arXiv Detail & Related papers (2024-10-25T07:53:32Z)
- "I Don't Use AI for Everything": Exploring Utility, Attitude, and Responsibility of AI-empowered Tools in Software Development [19.851794567529286]
This study investigates the adoption, impact, and security considerations of AI-empowered tools in the software development process.
Our findings reveal widespread adoption of AI tools across various stages of software development.
arXiv Detail & Related papers (2024-09-20T09:17:10Z)
- Combining AI Control Systems and Human Decision Support via Robustness and Criticality [53.10194953873209]
We extend a methodology for adversarial explanations (AE) to state-of-the-art reinforcement learning frameworks.
We show that the learned AI control system demonstrates robustness against adversarial tampering.
In a training / learning framework, this technology can improve both the AI's decisions and explanations through human interaction.
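For readers unfamiliar with the term, the following toy numpy sketch (a generic FGSM-style illustration on a linear policy, not the paper's method) shows what adversarial tampering means here: a small, norm-bounded perturbation of the observation can flip a policy's greedy action.

```python
# Generic illustration (not the paper's framework): adversarial tampering on a
# policy's observation. For a linear score s(x) = w @ x, the perturbation
# delta = eps * sign(-w) maximally lowers the score within an L-infinity
# budget, which can flip the greedy action.
import numpy as np

w = np.array([0.9, -0.4, 0.2])   # weights of a toy linear policy
x = np.array([1.0, 0.5, -1.0])   # clean observation
eps = 0.5                        # L-infinity perturbation budget

def action(obs: np.ndarray) -> int:
    """Greedy binary action: 1 if the linear score is positive, else 0."""
    return int(w @ obs > 0)

x_adv = x + eps * np.sign(-w)    # FGSM-style step against the score
print(action(x), action(x_adv))  # prints "1 0": the action flips
```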
arXiv Detail & Related papers (2024-07-03T15:38:57Z) - Using AI Assistants in Software Development: A Qualitative Study on Security Practices and Concerns [23.867795468379743]
Recent research has demonstrated that AI-generated code can contain security issues.
How software professionals balance AI assistant usage and security remains unclear.
This paper investigates how software professionals use AI assistants in secure software development.
arXiv Detail & Related papers (2024-05-10T10:13:19Z) - Custom Developer GPT for Ethical AI Solutions [1.2691047660244337]
This project aims to create a custom Generative Pre-trained Transformer (GPT) for developers to discuss and solve ethical issues through AI engineering.
Such a tool can help practitioners engineer AI solutions that meet legal requirements and satisfy diverse ethical perspectives.
arXiv Detail & Related papers (2024-01-19T20:21:46Z) - Finding differences in perspectives between designers and engineers to
develop trustworthy AI for autonomous cars [0.0]
Designers and engineers hold different perspectives on developing trustworthy AI for autonomous cars.
This study sheds light on the differences in perspectives and provides recommendations to minimize such divergences.
arXiv Detail & Related papers (2023-07-01T08:28:34Z) - Human-centered trust framework: An HCI perspective [1.6344851071810074]
This work is grounded in the current discourse on user trust in Artificial Intelligence (AI).
We propose a framework to guide non-experts to unlock the full potential of user trust in AI design.
arXiv Detail & Related papers (2023-05-05T06:15:32Z) - AI Maintenance: A Robustness Perspective [91.28724422822003]
We highlight robustness challenges across the AI lifecycle and motivate AI maintenance by drawing analogies to car maintenance.
We propose an AI model inspection framework to detect and mitigate robustness risks.
Our proposal for AI maintenance facilitates robustness assessment, status tracking, risk scanning, model hardening, and regulation throughout the AI lifecycle.
arXiv Detail & Related papers (2023-01-08T15:02:38Z) - Seamful XAI: Operationalizing Seamful Design in Explainable AI [59.89011292395202]
Mistakes in AI systems are inevitable, arising from both technical limitations and sociotechnical gaps.
We propose that seamful design can foster AI explainability by revealing sociotechnical and infrastructural mismatches.
We explore this process with 43 AI practitioners and real end-users.
arXiv Detail & Related papers (2022-11-12T21:54:05Z) - Designing for Responsible Trust in AI Systems: A Communication
Perspective [56.80107647520364]
We draw from communication theories and literature on trust in technologies to develop a conceptual model called MATCH.
We highlight transparency and interaction as AI systems' affordances that present a wide range of trustworthiness cues to users.
We propose a checklist of requirements to help technology creators identify appropriate cues to use.
arXiv Detail & Related papers (2022-04-29T00:14:33Z) - Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.