InSight-R: A Framework for Risk-informed Human Failure Event Identification and Interface-Induced Risk Assessment Driven by AutoGraph
- URL: http://arxiv.org/abs/2507.00066v1
- Date: Sat, 28 Jun 2025 02:04:06 GMT
- Title: InSight-R: A Framework for Risk-informed Human Failure Event Identification and Interface-Induced Risk Assessment Driven by AutoGraph
- Authors: Xingyu Xiao, Jiejuan Tong, Peng Chen, Jun Sun, Zhe Sui, Jingang Liang, Hongru Zhao, Jun Zhao, Haitao Wang
- Abstract summary: Human reliability remains a critical concern in safety-critical domains such as nuclear power. Current methods rely heavily on expert judgment for identifying human failure events (HFEs) and assigning performance influencing factors (PIFs). This study proposes a framework for risk-informed human failure event identification and interface risk assessment driven by AutoGraph (InSight-R).
- Score: 9.484700902829578
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Human reliability remains a critical concern in safety-critical domains such as nuclear power, where operational failures are often linked to human error. While conventional human reliability analysis (HRA) methods have been widely adopted, they rely heavily on expert judgment for identifying human failure events (HFEs) and assigning performance influencing factors (PIFs). This reliance introduces challenges related to reproducibility, subjectivity, and limited integration of interface-level data. In particular, current approaches lack the capacity to rigorously assess how human-machine interface design contributes to operator performance variability and error susceptibility. To address these limitations, this study proposes a framework for risk-informed human failure event identification and interface-induced risk assessment driven by AutoGraph (InSight-R). By linking empirical behavioral data to the interface-embedded knowledge graph (IE-KG) constructed by the automated graph-based execution framework (AutoGraph), the InSight-R framework enables automated HFE identification based on both error-prone and time-deviated operational paths. Furthermore, we discuss the relationship between designer-user conflicts and human error. The results demonstrate that InSight-R not only enhances the objectivity and interpretability of HFE identification but also provides a scalable pathway toward dynamic, real-time human reliability assessment in digitalized control environments. This framework offers actionable insights for interface design optimization and contributes to the advancement of mechanism-driven HRA methodologies.
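The abstract describes two triggers for automated HFE identification: error-prone operational paths (deviation from the interface graph's expected path) and time-deviated paths (steps exceeding nominal durations). A minimal, hypothetical sketch of that decision logic follows; all names, paths, and thresholds are invented for illustration and are not taken from the paper:

```python
# Illustrative sketch of path-based HFE flagging, as described in the abstract.
# The expected path and time limits would come from the IE-KG in InSight-R;
# here they are hard-coded assumptions.

def identify_hfe(observed_path, observed_times, expected_path, time_limits):
    """Flag a human failure event (HFE) when the observed operational path
    deviates from the expected path (error-prone) or when any step exceeds
    its nominal duration (time-deviated)."""
    error_prone = observed_path != expected_path
    time_deviated = any(t > limit for t, limit in zip(observed_times, time_limits))
    return {
        "error_prone": error_prone,
        "time_deviated": time_deviated,
        "is_hfe": error_prone or time_deviated,
    }

# Hypothetical trace: operator inserts an extra confirmation step and
# takes too long closing the valve.
result = identify_hfe(
    observed_path=["open_valve_panel", "confirm", "close_valve"],
    observed_times=[2.1, 0.5, 9.8],
    expected_path=["open_valve_panel", "close_valve"],
    time_limits=[3.0, 1.0, 5.0],
)
```

In the actual framework, both the reference path and the timing baselines would be derived from the AutoGraph-constructed knowledge graph rather than supplied manually.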
Related papers
- Enhancing Uncertainty Quantification for Runtime Safety Assurance Using Causal Risk Analysis and Operational Design Domain [0.0]
We propose an enhancement of traditional uncertainty quantification by explicitly incorporating environmental conditions. We leverage Hazard Analysis and Risk Assessment (HARA) and fault tree modeling to identify critical operational conditions affecting system functionality. At runtime, this BN is instantiated using real-time environmental observations to infer a probability distribution over the safety estimate.
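The runtime step described here, instantiating a Bayesian network with real-time environmental observations, can be sketched minimally. The two-node network and its probabilities below are assumptions chosen for illustration, not the paper's model:

```python
# Illustrative two-node discrete Bayesian network: an environmental
# condition ("heavy_rain") influences the probability that a perception
# function fails. The CPT entries are hypothetical.

P_FAIL_GIVEN_COND = {True: 0.30, False: 0.02}

def runtime_safety_estimate(heavy_rain_observed: bool) -> float:
    """Instantiate the network with the real-time observation and
    return the inferred probability of functional failure."""
    return P_FAIL_GIVEN_COND[heavy_rain_observed]

p_fail = runtime_safety_estimate(True)  # conditional failure probability under rain
```

A realistic deployment would use a multi-node network derived from HARA and fault tree analysis, with inference performed by a dedicated library rather than a table lookup.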
arXiv Detail & Related papers (2025-07-04T12:12:32Z) - A Cognitive-Mechanistic Human Reliability Analysis Framework: A Nuclear Power Plant Case Study [7.583754429526051]
This study proposes a cognitive-mechanistic framework (COGMIF) that enhances the IDHEAS-ECA methodology. It integrates an ACT-R-based human digital twin (HDT) with TimeGAN-augmented simulation. TimeGAN is trained on ACT-R-generated time-series data to produce high-fidelity synthetic operator behavior datasets.
arXiv Detail & Related papers (2025-04-25T00:46:00Z) - Context-Awareness and Interpretability of Rare Occurrences for Discovery and Formalization of Critical Failure Modes [3.140125449151061]
Vision systems are increasingly deployed in critical domains such as surveillance, law enforcement, and transportation. To address these challenges, we introduce Context-Awareness and Interpretability of Rare Occurrences (CAIRO). CAIRO incentivizes human-in-the-loop testing and evaluation of criticality that arises from misdetections, adversarial attacks, and hallucinations in AI black-box models.
arXiv Detail & Related papers (2025-04-18T17:12:37Z) - Interactive Agents to Overcome Ambiguity in Software Engineering [61.40183840499932]
AI agents are increasingly being deployed to automate tasks, often based on ambiguous and underspecified user instructions. Making unwarranted assumptions and failing to ask clarifying questions can lead to suboptimal outcomes. We study the ability of LLM agents to handle ambiguous instructions in interactive code generation settings by evaluating the performance of proprietary and open-weight models.
arXiv Detail & Related papers (2025-02-18T17:12:26Z) - Fragility-aware Classification for Understanding Risk and Improving Generalization [6.926253982569273]
We introduce the Fragility Index (FI), a novel metric that evaluates classification performance from a risk-averse perspective. We derive exact reformulations for cross-entropy loss, hinge-type loss, and Lipschitz loss, and extend the approach to deep learning models.
arXiv Detail & Related papers (2025-02-18T16:44:03Z) - Understanding Human Activity with Uncertainty Measure for Novelty in Graph Convolutional Networks [2.223052975765005]
We introduce the Temporal Fusion Graph Convolutional Network.
It aims to rectify the inadequate boundary estimation of individual actions within an activity stream.
It also mitigates the issue of over-segmentation in the temporal dimension.
arXiv Detail & Related papers (2024-10-10T13:44:18Z) - Analyzing Adversarial Inputs in Deep Reinforcement Learning [53.3760591018817]
We present a comprehensive analysis of the characterization of adversarial inputs, through the lens of formal verification.
We introduce a novel metric, the Adversarial Rate, to classify models based on their susceptibility to such perturbations.
Our analysis empirically demonstrates how adversarial inputs can affect the safety of a given DRL system with respect to such perturbations.
arXiv Detail & Related papers (2024-02-07T21:58:40Z) - Seeing is not Believing: Robust Reinforcement Learning against Spurious Correlation [57.351098530477124]
We consider one critical type of robustness against spurious correlation, where different portions of the state do not have correlations induced by unobserved confounders.
A model that learns such useless or even harmful correlation could catastrophically fail when the confounder in the test case deviates from the training one.
Existing robust algorithms that assume simple and unstructured uncertainty sets are therefore inadequate to address this challenge.
arXiv Detail & Related papers (2023-07-15T23:53:37Z) - Towards Assessing and Characterizing the Semantic Robustness of Face Recognition [55.258476405537344]
Face Recognition Models (FRMs) based on Deep Neural Networks (DNNs) inherit this vulnerability.
We propose a methodology for assessing and characterizing the robustness of FRMs against semantic perturbations to their input.
arXiv Detail & Related papers (2022-02-10T12:22:09Z) - Towards Unbiased Visual Emotion Recognition via Causal Intervention [63.74095927462]
We propose a novel Emotion Recognition Network (IERN) to alleviate the negative effects brought by the dataset bias.
A series of designed tests validate the effectiveness of IERN, and experiments on three emotion benchmarks demonstrate that IERN outperforms other state-of-the-art approaches.
arXiv Detail & Related papers (2021-07-26T10:40:59Z) - Counterfactual Explanations as Interventions in Latent Space [62.997667081978825]
Counterfactual explanations aim to provide end users with a set of features that need to be changed in order to achieve a desired outcome.
Current approaches rarely take into account the feasibility of actions needed to achieve the proposed explanations.
We present Counterfactual Explanations as Interventions in Latent Space (CEILS), a methodology to generate counterfactual explanations.
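As a rough illustration of the generic counterfactual idea (not CEILS itself, which intervenes in a latent space and accounts for action feasibility), a naive feature-space search might look like the following; the classifier and step size are hypothetical:

```python
# Naive counterfactual search sketch: greedily perturb features until a
# toy classifier flips its prediction. Purely illustrative.

def counterfactual(x, predict, step=0.5, max_iter=100):
    """Return a perturbed copy of x for which predict() yields 1,
    trying single-feature increases first, then a joint increase."""
    x = list(x)
    for _ in range(max_iter):
        if predict(x) == 1:
            return x
        for i in range(len(x)):          # try bumping one feature at a time
            trial = x.copy()
            trial[i] += step
            if predict(trial) == 1:
                return trial
        x = [v + step for v in x]        # fall back to a joint increase
    return None

# Toy linear decision rule: positive class when x0 + x1 > 3.
predict = lambda x: int(x[0] + x[1] > 3)
cf = counterfactual([1.0, 1.0], predict)
```

Approaches like CEILS differ precisely in that they generate such changes through interventions on a causal latent representation, so the suggested feature changes correspond to actions a user can actually take.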
arXiv Detail & Related papers (2021-06-14T20:48:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed papers (including all information) and is not responsible for any consequences.