Towards Requirements Engineering for GenAI-Enabled Software: Bridging Responsibility Gaps through Human Oversight Requirements
- URL: http://arxiv.org/abs/2511.13069v1
- Date: Mon, 17 Nov 2025 07:14:01 GMT
- Title: Towards Requirements Engineering for GenAI-Enabled Software: Bridging Responsibility Gaps through Human Oversight Requirements
- Authors: Zhenyu Mao, Jacky Keung, Yicheng Sun, Yifei Wang, Shuo Liu, Jialong Li
- Abstract summary: The generative and adaptive nature of GenAI-enabled software complicates how human oversight and responsibility are specified, delegated, and traced. This study aims to analyze the resulting research gaps in the context of GenAI-enabled software systems.
- Score: 12.18822408018955
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Context: Responsibility gaps, long-recognized challenges in socio-technical systems where accountability becomes diffuse or ambiguous, have become increasingly pronounced in GenAI-enabled software. The generative and adaptive nature of such software complicates how human oversight and responsibility are specified, delegated, and traced. Existing requirements engineering (RE) approaches remain limited in addressing these phenomena, revealing conceptual, methodological, and artifact-level research gaps. Objective: This study aims to analyze these research gaps in the context of GenAI-enabled software systems. It seeks to establish a coherent perspective for a systematic analysis of responsibility gaps from a human oversight requirements standpoint, encompassing how these gaps should be conceptualized, identified, and represented throughout the RE process. Methods: The proposed design methodology is structured across three analytical layers. At the conceptualization layer, it establishes a conceptual framing that defines the key elements of responsibility across the human and system dimensions and explains how potential responsibility gaps emerge from their interactions. At the methodological layer, it introduces a deductive pipeline for identifying responsibility gaps by analyzing interactions between these dimensions and deriving corresponding oversight requirements within established RE frameworks. At the artifact layer, it formalizes the results in a Deductive Backbone Table, a reusable representation that traces the reasoning path from responsibility-gap identification to human oversight requirement derivation. Results: A user study compared the proposed methodology with a baseline goal-oriented RE approach across two scenarios. Evaluation across six dimensions indicated clear improvements over the baseline, confirming the methodology's effectiveness in addressing the three research gaps.
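The abstract describes the Deductive Backbone Table only at a high level, so the following is a minimal, hedged sketch of how one row of such a table might be encoded for tooling or inspection purposes. The field names (human_element, system_element, interaction, responsibility_gap, oversight_requirement, rationale) and the example scenario are illustrative assumptions, not taken from the paper's artifact.

```python
# Illustrative sketch only: one possible encoding of a "Deductive Backbone Table"
# row that traces from an identified responsibility gap to a derived human
# oversight requirement. Field names and the example scenario are assumptions.
from dataclasses import dataclass, field
from typing import List


@dataclass
class BackboneRow:
    """One reasoning step from responsibility-gap identification to requirement derivation."""
    human_element: str             # responsibility element on the human dimension
    system_element: str            # responsibility element on the system dimension
    interaction: str               # how the two elements interact
    responsibility_gap: str        # gap deduced from that interaction
    oversight_requirement: str     # human oversight requirement fed into the RE process
    rationale: List[str] = field(default_factory=list)  # traceable reasoning path


# Hypothetical example row for a GenAI-assisted code review scenario.
row = BackboneRow(
    human_element="Reviewer approves merge requests",
    system_element="GenAI assistant auto-suggests code fixes",
    interaction="Reviewer may accept suggestions without inspecting them",
    responsibility_gap="Unclear who is accountable for defects in accepted fixes",
    oversight_requirement=(
        "The system shall require explicit reviewer sign-off and log the "
        "provenance of each accepted suggestion"
    ),
    rationale=["delegation of judgement", "loss of traceability"],
)

if __name__ == "__main__":
    print(f"Gap: {row.responsibility_gap}")
    print(f"-> Requirement: {row.oversight_requirement}")
```

A tabular encoding along these lines would keep each derivation step explicit and reusable across scenarios, which is the traceability property the abstract attributes to the artifact layer.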
Related papers
- Dynamics Within Latent Chain-of-Thought: An Empirical Study of Causal Structure [58.89643769707751]
We study latent chain-of-thought as a manipulable causal process in representation space. We find that latent-step budgets behave less like homogeneous extra depth and more like staged functionality with non-local routing. These results motivate mode-conditional and stability-aware analyses as more reliable tools for interpreting and improving latent reasoning systems.
arXiv Detail & Related papers (2026-02-09T15:25:12Z) - The Path Ahead for Agentic AI: Challenges and Opportunities [4.52683540940001]
This chapter examines the emergence of agentic AI systems that operate autonomously in complex environments. We trace the architectural progression from statistical models to transformer-based systems, identifying capabilities that enable agentic behavior. Unlike existing surveys, we focus on the architectural transition from language understanding to autonomous action, emphasizing the technical gaps that must be resolved before deployment.
arXiv Detail & Related papers (2026-01-06T06:31:42Z) - From Educational Analytics to AI Governance: Transferable Lessons from Complex Systems Interventions [0.0]
We argue that five core principles developed within CAPIRE transfer directly to the challenge of governing AI systems. The isomorphism is not merely analogical: both domains exhibit non-linearity, emergence, feedback loops, strategic adaptation, and path dependence. We propose Complex Systems AI Governance (CSAIG) as an integrated framework that operationalises these principles for regulatory design.
arXiv Detail & Related papers (2025-12-15T12:16:57Z) - AI Deception: Risks, Dynamics, and Controls [153.71048309527225]
This project provides a comprehensive and up-to-date overview of the AI deception field. We identify a formal definition of AI deception, grounded in signaling theory from studies of animal deception. We organize the landscape of AI deception research as a deception cycle, consisting of two key components: deception emergence and deception treatment.
arXiv Detail & Related papers (2025-11-27T16:56:04Z) - Understanding Catastrophic Interference: On the Identifiability of Latent Representations [67.05452287233122]
Catastrophic interference, also known as catastrophic forgetting, is a fundamental challenge in machine learning. We propose a novel theoretical framework that formulates catastrophic interference as an identification problem. Our approach provides both theoretical guarantees and practical performance improvements on both synthetic and benchmark datasets.
arXiv Detail & Related papers (2025-09-27T00:53:32Z) - Towards a Framework for Operationalizing the Specification of Trustworthy AI Requirements [1.2184324428571227]
Growing concerns around the trustworthiness of AI-enabled systems highlight the role of requirements engineering (RE). We propose the integration of two complementary approaches: AMDiRE and PerSpecML.
arXiv Detail & Related papers (2025-07-14T12:49:26Z) - KAQG: A Knowledge-Graph-Enhanced RAG for Difficulty-Controlled Question Generation [0.0]
This study introduces Knowledge Augmented Question Generation (KAQG). It integrates Item Response Theory (IRT), Bloom's taxonomy, and knowledge graphs into a multi-agent Retrieval-Augmented Generation system. The proposed approach overcomes limitations of existing methods by enabling fine-grained control over item difficulty, psychometric calibration, and cognitive alignment.
arXiv Detail & Related papers (2025-05-12T14:42:19Z) - A Survey of Frontiers in LLM Reasoning: Inference Scaling, Learning to Reason, and Agentic Systems [93.8285345915925]
Reasoning is a fundamental cognitive process that enables logical inference, problem-solving, and decision-making. With the rapid advancement of large language models (LLMs), reasoning has emerged as a key capability that distinguishes advanced AI systems. We categorize existing methods along two dimensions: (1) Regimes, which define the stage at which reasoning is achieved; and (2) Architectures, which determine the components involved in the reasoning process.
arXiv Detail & Related papers (2025-04-12T01:27:49Z) - Causal Responsibility Attribution for Human-AI Collaboration [62.474732677086855]
This paper presents a causal framework using Structural Causal Models (SCMs) to systematically attribute responsibility in human-AI systems.
Two case studies illustrate the framework's adaptability in diverse human-AI collaboration scenarios.
arXiv Detail & Related papers (2024-11-05T17:17:45Z) - Attributing Responsibility in AI-Induced Incidents: A Computational Reflective Equilibrium Framework for Accountability [22.498210339327155]
The pervasive integration of Artificial Intelligence (AI) has introduced complex challenges for responsibility and accountability in the event of incidents involving AI-enabled systems. This work proposes a coherent and ethically acceptable responsibility attribution framework for all stakeholders.
arXiv Detail & Related papers (2024-04-25T18:11:03Z) - Combining Inductive and Deductive Reasoning for Query Answering over Incomplete Knowledge Graphs [12.852658691077643]
Current methods for embedding-based query answering over incomplete Knowledge Graphs (KGs) only focus on inductive reasoning.
We propose various integration strategies into prominent representatives of embedding models.
In the setting that requires both inductive and deductive reasoning, the achieved improvements range from 20% to 55% in HITS@3.
arXiv Detail & Related papers (2021-06-26T16:05:44Z) - Counterfactual Explanations as Interventions in Latent Space [62.997667081978825]
Counterfactual explanations aim to provide end users with a set of features that need to be changed in order to achieve a desired outcome.
Current approaches rarely take into account the feasibility of actions needed to achieve the proposed explanations.
We present Counterfactual Explanations as Interventions in Latent Space (CEILS), a methodology to generate counterfactual explanations.
arXiv Detail & Related papers (2021-06-14T20:48:48Z)