Investigating Explainability of Generative AI for Code through
Scenario-based Design
- URL: http://arxiv.org/abs/2202.04903v1
- Date: Thu, 10 Feb 2022 08:52:39 GMT
- Title: Investigating Explainability of Generative AI for Code through
Scenario-based Design
- Authors: Jiao Sun, Q. Vera Liao, Michael Muller, Mayank Agarwal, Stephanie
Houde, Kartik Talamadupula, Justin D. Weisz
- Abstract summary: Generative AI (GenAI) technologies are maturing and being applied to domains such as software engineering.
We conducted 9 workshops with 43 software engineers in which real examples from state-of-the-art generative AI models were used to elicit users' explainability needs.
Our work explores explainability needs for GenAI for code and demonstrates how human-centered approaches can drive the technical development of XAI in novel domains.
- Score: 44.44517254181818
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: What does it mean for a generative AI model to be explainable? The emergent
discipline of explainable AI (XAI) has made great strides in helping people
understand discriminative models. Less attention has been paid to generative
models that produce artifacts, rather than decisions, as output. Meanwhile,
generative AI (GenAI) technologies are maturing and being applied to
application domains such as software engineering. Using scenario-based design
and question-driven XAI design approaches, we explore users' explainability
needs for GenAI in three software engineering use cases: natural language to
code, code translation, and code auto-completion. We conducted 9 workshops with
43 software engineers in which real examples from state-of-the-art generative
AI models were used to elicit users' explainability needs. Drawing from prior
work, we also proposed 4 types of XAI features for GenAI for code and gathered
additional design ideas from participants. Our work explores explainability
needs for GenAI for code and demonstrates how human-centered approaches can
drive the technical development of XAI in novel domains.
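To make the three use cases concrete, the short sketch below shows how a code auto-completion example of the kind used as workshop stimuli might be generated. It is an illustration only, assuming the publicly available Salesforce/codegen-350M-mono model served through the Hugging Face transformers library; the paper does not name the specific state-of-the-art models its examples came from.

# Hedged illustration: generate a code auto-completion similar to the workshop
# stimuli. The model choice is an assumption for this sketch, not one the
# authors report using.
from transformers import pipeline

generator = pipeline("text-generation", model="Salesforce/codegen-350M-mono")

# Partial function definition; the model is asked to complete the body.
prompt = 'def parse_csv(path):\n    """Read a CSV file and return a list of rows."""\n'
result = generator(prompt, max_new_tokens=64, do_sample=False)
print(result[0]["generated_text"])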
Related papers
- Model-based Maintenance and Evolution with GenAI: A Look into the Future [47.93555901495955]
We argue that Generative Artificial Intelligence (GenAI) can be used as a means to address the limitations of Model-Based Maintenance and Evolution (MBM&E).
We propose that GenAI can be used in MBM&E to reduce engineers' learning curve, maximize efficiency with recommendations, or serve as a reasoning tool for understanding domain problems.
arXiv Detail & Related papers (2024-07-09T23:13:26Z)
- Legal Aspects for Software Developers Interested in Generative AI Applications [5.772982243103395]
Generative Artificial Intelligence (GenAI) has led to new technologies capable of generating high-quality code, natural language, and images.
The next step is to integrate GenAI technology into products, a task typically conducted by software developers.
This article sheds light on the current state of two legal risks that arise in this process: data protection and copyright.
arXiv Detail & Related papers (2024-04-25T14:17:34Z)
- Explainable Generative AI (GenXAI): A Survey, Conceptualization, and Research Agenda [1.8592384822257952]
We elaborate on why XAI has gained importance with the rise of GenAI and on the challenges GenAI poses for explainability research.
We also unveil novel and emerging desiderata that explanations should fulfill, covering aspects such as verifiability, interactivity, security, and cost.
arXiv Detail & Related papers (2024-04-15T08:18:16Z)
- Designerly Understanding: Information Needs for Model Transparency to Support Design Ideation for AI-Powered User Experience [42.73738624139124]
Designers face hurdles in understanding AI technologies, such as pre-trained language models, as design materials.
This limits their ability to ideate and make decisions about whether, where, and how to use AI.
Our study highlights the pivotal role that UX designers can play in Responsible AI.
arXiv Detail & Related papers (2023-02-21T02:06:24Z)
- Seamful XAI: Operationalizing Seamful Design in Explainable AI [59.89011292395202]
Mistakes in AI systems are inevitable, arising from both technical limitations and sociotechnical gaps.
We propose that seamful design can foster AI explainability by revealing sociotechnical and infrastructural mismatches.
We explore this process with 43 AI practitioners and real end-users.
arXiv Detail & Related papers (2022-11-12T21:54:05Z)
- Connecting Algorithmic Research and Usage Contexts: A Perspective of Contextualized Evaluation for Explainable AI [65.44737844681256]
A lack of consensus on how to evaluate explainable AI (XAI) hinders the advancement of the field.
We argue that one way to close the gap is to develop evaluation methods that account for different user requirements.
arXiv Detail & Related papers (2022-06-22T05:17:33Z)
- Enabling Automated Machine Learning for Model-Driven AI Engineering [60.09869520679979]
We propose a novel approach to enable Model-Driven Software Engineering and Model-Driven AI Engineering.
In particular, we support Automated ML, thus assisting software engineers without deep AI knowledge in developing AI-intensive systems.
arXiv Detail & Related papers (2022-03-06T10:12:56Z)
- Human-Centered Explainable AI (XAI): From Algorithms to User Experiences [29.10123472973571]
Explainable AI (XAI) research has produced a vast collection of algorithms in recent years.
The field is starting to embrace interdisciplinary perspectives and human-centered approaches.
arXiv Detail & Related papers (2021-10-20T21:33:46Z)
- A User-Centred Framework for Explainable Artificial Intelligence in Human-Robot Interaction [70.11080854486953]
We propose a user-centred framework for XAI that focuses on its social-interactive aspect.
The framework aims to provide a structure for interactive XAI solutions designed for non-expert users.
arXiv Detail & Related papers (2021-09-27T09:56:23Z)
- Questioning the AI: Informing Design Practices for Explainable AI User Experiences [33.81809180549226]
A surge of interest in explainable AI (XAI) has led to a vast collection of algorithmic work on the topic.
We seek to identify gaps between current XAI algorithmic work and the practices needed to create explainable AI products.
We develop an algorithm-informed XAI question bank in which user needs for explainability are represented.
arXiv Detail & Related papers (2020-01-08T12:34:51Z)