Automatic Generation of Explainability Requirements and Software Explanations From User Reviews
- URL: http://arxiv.org/abs/2507.07344v1
- Date: Thu, 10 Jul 2025 00:03:36 GMT
- Title: Automatic Generation of Explainability Requirements and Software Explanations From User Reviews
- Authors: Martin Obaidi, Jannik Fischbach, Jakob Droste, Hannah Deters, Marc Herrmann, Jil Klünder, Steffen Krätzig, Hugo Villamizar, Kurt Schneider
- Abstract summary: This work contributes to the advancement of explainability requirements in software systems by introducing an automated approach to derive requirements from user reviews and generate corresponding explanations. We created a dataset of 58 user reviews, each annotated with manually crafted explainability requirements and explanations. Our evaluation shows that while AI-generated requirements often lack relevance and correctness compared to human-created ones, the AI-generated explanations are frequently preferred for their clarity and style.
- Score: 2.2392379251177696
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Explainability has become a crucial non-functional requirement to enhance transparency, build user trust, and ensure regulatory compliance. However, translating explanation needs expressed in user feedback into structured requirements and corresponding explanations remains challenging. While existing methods can identify explanation-related concerns in user reviews, there is no established approach for systematically deriving requirements and generating aligned explanations. To contribute toward addressing this gap, we introduce a tool-supported approach that automates this process. To evaluate its effectiveness, we collaborated with an industrial automation manufacturer to create a dataset of 58 user reviews, each annotated with manually crafted explainability requirements and explanations. Our evaluation shows that while AI-generated requirements often lack relevance and correctness compared to human-created ones, the AI-generated explanations are frequently preferred for their clarity and style. Nonetheless, correctness remains an issue, highlighting the importance of human validation. This work contributes to the advancement of explainability requirements in software systems by (1) introducing an automated approach to derive requirements from user reviews and generate corresponding explanations, (2) providing empirical insights into the strengths and limitations of automatically generated artifacts, and (3) releasing a curated dataset to support future research on the automatic generation of explainability requirements.
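The paper ships a dataset rather than code, but the two-step pipeline it describes (user review -> explainability requirement -> aligned explanation) can be sketched as follows. Everything here is illustrative: the prompts and the `complete()` helper are placeholders for whatever LLM client is used, not the authors' actual tool.

```python
# Hedged sketch of the review -> requirement -> explanation pipeline.
# complete() is a placeholder for any LLM completion call; the prompts
# are illustrative, not the paper's actual prompt design.

def complete(prompt: str) -> str:
    """Placeholder: wire up your preferred LLM client here."""
    raise NotImplementedError

def derive_requirement(review: str) -> str:
    return complete(
        "The following user review expresses an explanation need.\n"
        f"Review: {review}\n"
        "Write one structured explainability requirement of the form "
        "'The system shall explain ...'."
    )

def generate_explanation(review: str, requirement: str) -> str:
    return complete(
        f"Review: {review}\nRequirement: {requirement}\n"
        "Write the user-facing explanation that satisfies this requirement."
    )

def process_review(review: str) -> tuple[str, str]:
    requirement = derive_requirement(review)
    explanation = generate_explanation(review, requirement)
    # Per the paper's findings, both artifacts still need human validation,
    # especially the requirement's relevance and correctness.
    return requirement, explanation
```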
Related papers
- Leveraging LLMs for Formal Software Requirements -- Challenges and Prospects [0.0]
VERIFAI aims to investigate automated and semi-automated approaches to bridge this gap. This position paper presents a preliminary synthesis of relevant literature to identify recurring challenges and prospective research directions.
arXiv Detail & Related papers (2025-07-18T19:15:50Z)
- Explainability as a Compliance Requirement: What Regulated Industries Need from AI Tools for Design Artifact Generation [0.7874708385247352]
We investigate the explainability gap in AI-driven design artifact generation through semi-structured interviews with ten practitioners from safety-critical industries. Our findings reveal that non-explainable AI outputs necessitate extensive manual validation, reduce stakeholder trust, struggle to handle domain-specific terminology, disrupt team collaboration, and introduce regulatory compliance risks. This study outlines a practical roadmap for improving the transparency, reliability, and applicability of AI tools in requirements engineering.
arXiv Detail & Related papers (2025-07-12T09:34:39Z)
- Hybrid Reasoning for Perception, Explanation, and Autonomous Action in Manufacturing [0.0]
CIPHER is a vision-language-action (VLA) model framework that aims to replicate human-like reasoning for industrial control. It integrates a process expert, a regression model that enables quantitative characterization of system states. It interprets visual or textual inputs from process monitoring, explains its decisions, and autonomously generates precise machine instructions.
arXiv Detail & Related papers (2025-06-10T05:37:33Z)
- Interactive Agents to Overcome Ambiguity in Software Engineering [61.40183840499932]
AI agents are increasingly being deployed to automate tasks, often based on ambiguous and underspecified user instructions. Making unwarranted assumptions and failing to ask clarifying questions can lead to suboptimal outcomes. We study the ability of LLM agents to handle ambiguous instructions in interactive code generation settings by evaluating the performance of proprietary and open-weight models.
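The interactive setting studied here can be pictured as a loop in which the agent either asks a clarifying question or commits to generating code. The sketch below is a toy illustration under assumed interfaces (`is_ambiguous`, the `ask` callback), not the paper's evaluation protocol.

```python
# Toy clarification loop for an interactive code-generation agent.
# is_ambiguous() is a crude stand-in for whatever ambiguity signal
# a real agent would use (e.g., the model's own uncertainty).

def is_ambiguous(instruction: str) -> bool:
    vague_words = {"it", "this", "something", "stuff"}
    words = instruction.lower().split()
    return len(words) < 4 or bool(vague_words & set(words))

def agent_step(instruction: str, ask) -> str:
    # Ask at most three clarifying questions before committing.
    for _ in range(3):
        if not is_ambiguous(instruction):
            break
        answer = ask("Could you specify the inputs, outputs, and constraints?")
        instruction = f"{instruction}\nClarification: {answer}"
    return f"# code generated for: {instruction!r}"
```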
arXiv Detail & Related papers (2025-02-18T17:12:26Z)
- RECOVER: Toward Requirements Generation from Stakeholders' Conversations [10.706772429994384]
This paper introduces RECOVER, a novel conversational requirements engineering approach. It supports practitioners in automatically extracting system requirements from stakeholder interactions. Empirical evaluation shows promising performance, with generated requirements demonstrating satisfactory correctness, completeness, and actionability.
arXiv Detail & Related papers (2024-11-29T08:52:40Z)
- Evaluating the Utility of Model Explanations for Model Development [54.23538543168767]
We evaluate whether explanations can improve human decision-making in practical scenarios of machine learning model development.
To our surprise, we did not find evidence of significant improvement on tasks when users were provided with any of the saliency maps.
These findings suggest caution regarding the usefulness and potential for misunderstanding in saliency-based explanations.
arXiv Detail & Related papers (2023-12-10T23:13:23Z)
- Representing Timed Automata and Timing Anomalies of Cyber-Physical Production Systems in Knowledge Graphs [51.98400002538092]
This paper aims to improve model-based anomaly detection in CPPS by combining the learned timed automaton with a formal knowledge graph about the system.
Both the model and the detected anomalies are described in the knowledge graph so that operators can interpret them more easily.
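The timing side of this combination can be illustrated concretely: a learned timed automaton stores, per transition, the duration bounds observed during normal operation, and an observation outside those bounds is a timing anomaly. The states and bounds below are invented for illustration.

```python
# Illustrative timing-anomaly check against a learned timed automaton.
# Each transition carries the [min, max] duration (in seconds) observed
# during training; the example states and bounds are invented.

bounds = {
    ("idle", "filling"): (0.5, 2.0),
    ("filling", "sealing"): (8.0, 12.0),
}

def is_timing_anomaly(src: str, dst: str, duration_s: float) -> bool:
    if (src, dst) not in bounds:
        return True  # transition never seen in training: flag it
    lo, hi = bounds[(src, dst)]
    return not (lo <= duration_s <= hi)

print(is_timing_anomaly("filling", "sealing", 14.3))  # True: too slow
```

In the paper's setup, both the automaton and such detected anomalies are additionally written into the knowledge graph so operators can inspect them in context.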
arXiv Detail & Related papers (2023-08-25T15:25:57Z)
- A Methodology and Software Architecture to Support Explainability-by-Design [0.0]
This paper describes Explainability-by-Design, a holistic methodology characterised by proactive measures to include explanation capability in the design of decision-making systems.
The methodology consists of three phases: (A) Explanation Requirement Analysis, (B) Explanation Technical Design, and (C) Explanation Validation.
It was shown that the approach is tractable in terms of development time, which can be as low as two hours per sentence.
arXiv Detail & Related papers (2022-06-13T15:34:29Z)
- Exploring the Trade-off between Plausibility, Change Intensity and Adversarial Power in Counterfactual Explanations using Multi-objective Optimization [73.89239820192894]
We argue that automated counterfactual generation should take several aspects of the produced adversarial instances into account.
We present a novel framework for the generation of counterfactual examples.
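The trade-off in the title can be made concrete as three objective values computed per candidate counterfactual, which a multi-objective optimiser (e.g., an evolutionary algorithm) would then push toward the Pareto front. The specific measures below are common stand-ins, not necessarily the paper's.

```python
import numpy as np

# Three per-candidate objectives for counterfactual search; stand-in
# formulations, assuming x and x_cf are 1-D feature vectors.

def objectives(x, x_cf, predict_proba, target_class, train_X):
    # Adversarial power: confidence assigned to the desired (flipped) class.
    power = predict_proba(x_cf)[target_class]          # maximise
    # Change intensity: how far the counterfactual moved from the original.
    intensity = np.linalg.norm(x_cf - x, ord=1)        # minimise
    # Plausibility proxy: distance to the nearest training point
    # (smaller means closer to the data manifold).
    implausibility = np.min(np.linalg.norm(train_X - x_cf, axis=1))  # minimise
    return power, intensity, implausibility
```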
arXiv Detail & Related papers (2022-05-20T15:02:53Z)
- Textual Explanations and Critiques in Recommendation Systems [8.406549970145846]
This dissertation focuses on two fundamental challenges of addressing this need. The first involves explanation generation in a scalable and data-driven manner. The second involves making explanations actionable, which we refer to as critiquing.
arXiv Detail & Related papers (2022-05-15T11:59:23Z)
- Explainability in Process Outcome Prediction: Guidelines to Obtain Interpretable and Faithful Models [77.34726150561087]
In the field of process outcome prediction, we define explainability through the interpretability of the explanations and the faithfulness of the explainability model.
This paper contributes a set of guidelines named X-MOP that help select the appropriate model based on the event log specifications.
arXiv Detail & Related papers (2022-03-30T05:59:50Z)
- Counterfactual Explanations as Interventions in Latent Space [62.997667081978825]
Counterfactual explanations aim to provide to end users a set of features that need to be changed in order to achieve a desired outcome.
Current approaches rarely take into account the feasibility of actions needed to achieve the proposed explanations.
We present Counterfactual Explanations as Interventions in Latent Space (CEILS), a methodology to generate counterfactual explanations.
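At a high level, intervening "in latent space" means searching over an autoencoder's latent code rather than over raw features, so that decoded changes stay close to the data manifold. The sketch below assumes generic `encode`/`decode`/`predict` interfaces and uses a naive random search as a placeholder; CEILS itself additionally incorporates the causal relations among features so that the resulting changes correspond to feasible actions.

```python
import numpy as np

# Hedged sketch of latent-space counterfactual search: encode the instance,
# nudge the latent code until the prediction flips, then decode. The random
# local search stands in for a proper optimiser.

def latent_counterfactual(x, encode, decode, predict, target_label,
                          step=0.05, max_iter=500, seed=0):
    rng = np.random.default_rng(seed)
    z = encode(x)
    for _ in range(max_iter):
        candidate = decode(z)
        if predict(candidate) == target_label:
            return candidate          # counterfactual found
        z = z + step * rng.standard_normal(np.shape(z))
    return None                       # no flip within the search budget
```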
arXiv Detail & Related papers (2021-06-14T20:48:48Z)
- Generating Fact Checking Explanations [52.879658637466605]
A crucial piece of the puzzle that is still missing is understanding how to automate the most elaborate part of the fact-checking process.
This paper provides the first study of how these explanations can be generated automatically based on available claim context.
Our results indicate that optimising both objectives at the same time, rather than training them separately, improves the performance of a fact checking system.
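That joint-optimisation finding corresponds to a standard multi-task objective: a shared encoder feeds both a veracity classifier and an explanation generator, and a single combined loss trains both at once. The PyTorch fragment below is a generic illustration of that idea, not the paper's architecture.

```python
import torch.nn.functional as F

# Generic multi-task loss: veracity classification plus token-level
# explanation generation, optimised together so gradients from both
# tasks shape the shared encoder. lam weights the explanation term.

def joint_loss(veracity_logits, veracity_label,
               explanation_logits, explanation_tokens, lam=1.0):
    veracity_loss = F.cross_entropy(veracity_logits, veracity_label)
    explanation_loss = F.cross_entropy(
        explanation_logits.reshape(-1, explanation_logits.size(-1)),
        explanation_tokens.reshape(-1),
    )
    return veracity_loss + lam * explanation_loss
```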
arXiv Detail & Related papers (2020-04-13T05:23:25Z)