Quantitative Predictive Monitoring and Control for Safe Human-Machine Interaction
- URL: http://arxiv.org/abs/2412.13365v1
- Date: Tue, 17 Dec 2024 22:46:39 GMT
- Title: Quantitative Predictive Monitoring and Control for Safe Human-Machine Interaction
- Authors: Shuyang Dong, Meiyi Ma, Josephine Lamp, Sebastian Elbaum, Matthew B. Dwyer, Lu Feng,
- Abstract summary: Unsafe human-machine interaction can lead to catastrophic failures.
We propose a novel approach that predicts future states by accounting for the uncertainty of human interaction.
We apply the proposed approach to two case studies: Type 1 Diabetes management and semi-autonomous driving.
- Score: 16.465501381705774
- License:
- Abstract: There is a growing trend toward AI systems interacting with humans to revolutionize a range of application domains such as healthcare and transportation. However, unsafe human-machine interaction can lead to catastrophic failures. We propose a novel approach that predicts future states by accounting for the uncertainty of human interaction, monitors whether predictions satisfy or violate safety requirements, and adapts control actions based on the predictive monitoring results. Specifically, we develop a new quantitative predictive monitor based on Signal Temporal Logic with Uncertainty (STL-U) to compute a robustness degree interval, which indicates the extent to which a sequence of uncertain predictions satisfies or violates an STL-U requirement. We also develop a new loss function to guide the uncertainty calibration of Bayesian deep learning and a new adaptive control method, both of which leverage STL-U quantitative predictive monitoring results. We apply the proposed approach to two case studies: Type 1 Diabetes management and semi-autonomous driving. Experiments show that the proposed approach improves safety and effectiveness in both case studies.
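As a rough illustration of the robustness-degree interval idea (a simplified sketch, not the paper's full STL-U semantics, Bayesian deep learning model, or calibration loss), the snippet below evaluates a simple "always stay within bounds" requirement over a sequence of predicted confidence intervals. The function name, the glucose safe range, and all numbers are illustrative assumptions.

```python
import numpy as np

def robustness_interval(lows, highs, lo_bound, hi_bound):
    """Robustness-degree interval for the requirement "always stay within
    [lo_bound, hi_bound]" over uncertain predictions given as confidence
    intervals [lows[t], highs[t]].

    Returns (rho_min, rho_max): rho_min > 0 means every trace consistent with
    the predictions satisfies the requirement; rho_max < 0 means every such
    trace violates it; otherwise satisfaction is uncertain.
    """
    lows, highs = np.asarray(lows, float), np.asarray(highs, float)
    # Pessimistic per-step robustness: the worst endpoint of each interval.
    worst = np.minimum(lows - lo_bound, hi_bound - highs)
    # Optimistic per-step robustness: the most favourable point of each
    # interval, which is the midpoint of the safe range clipped into it.
    mid = 0.5 * (lo_bound + hi_bound)
    x_best = np.clip(mid, lows, highs)
    best = np.minimum(x_best - lo_bound, hi_bound - x_best)
    # "Always" takes the minimum over the prediction horizon.
    return worst.min(), best.min()

# Example: predicted glucose confidence intervals over a short horizon,
# with a hypothetical safe range of 70-180 mg/dL.
lows  = [110, 105,  98,  90,  85,  78]
highs = [125, 122, 118, 112, 108, 101]
rho_min, rho_max = robustness_interval(lows, highs, 70.0, 180.0)
print(f"robustness interval: [{rho_min:.1f}, {rho_max:.1f}]")
# A controller could act more conservatively whenever rho_min falls below a margin.
```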
Related papers
- Can we Defend Against the Unknown? An Empirical Study About Threshold Selection for Neural Network Monitoring [6.8734954619801885]
Runtime monitoring becomes essential to reject unsafe predictions during inference.
Various techniques have emerged to establish rejection scores that maximize the separability between the distributions of safe and unsafe predictions.
In real-world applications, an effective monitor also requires identifying a good threshold to transform these scores into meaningful binary decisions.
arXiv Detail & Related papers (2024-05-14T14:32:58Z)
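The threshold-selection step described in the entry above can be sketched as follows. It uses Youden's J statistic as one common separability criterion, which is an assumption rather than the paper's specific recipe; the function name and data are made up.

```python
import numpy as np

def select_threshold(scores_safe, scores_unsafe):
    """Pick the rejection threshold that best separates the two score
    distributions, here by maximizing Youden's J = TPR - FPR.
    Predictions with score >= threshold are rejected as unsafe."""
    scores_safe = np.asarray(scores_safe, float)
    scores_unsafe = np.asarray(scores_unsafe, float)
    candidates = np.unique(np.concatenate([scores_safe, scores_unsafe]))
    best_t, best_j = None, -np.inf
    for t in candidates:
        tpr = np.mean(scores_unsafe >= t)   # unsafe predictions correctly rejected
        fpr = np.mean(scores_safe >= t)     # safe predictions wrongly rejected
        j = tpr - fpr
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j

# Toy example with synthetic monitor scores (higher = more suspicious).
rng = np.random.default_rng(0)
safe = rng.normal(0.2, 0.1, 1000)
unsafe = rng.normal(0.6, 0.15, 200)
threshold, j = select_threshold(safe, unsafe)
print(f"threshold={threshold:.3f}, Youden's J={j:.3f}")
```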
- Learning-Based Approaches to Predictive Monitoring with Conformal Statistical Guarantees [2.1684857243537334]
This tutorial focuses on efficient methods for predictive monitoring (PM).
PM is the problem of detecting future violations of a given requirement from the current state of a system.
We present a general and comprehensive framework summarizing our approach to the predictive monitoring of CPSs.
arXiv Detail & Related papers (2023-12-04T15:16:42Z)
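As a minimal illustration of the conformal idea behind the entry above (assuming a regression setting and a held-out calibration set; not the tutorial's specific constructions), split conformal prediction turns calibration residuals into prediction intervals with a marginal coverage guarantee:

```python
import numpy as np

def conformal_interval(residuals_cal, y_pred_new, alpha=0.1):
    """Split conformal prediction: given absolute residuals on a held-out
    calibration set, return a prediction interval for a new point with
    marginal coverage >= 1 - alpha (under exchangeability)."""
    residuals_cal = np.sort(np.asarray(residuals_cal, float))
    n = len(residuals_cal)
    # Conformal quantile: the ceil((n + 1)(1 - alpha))-th smallest residual.
    k = int(np.ceil((n + 1) * (1 - alpha)))
    q = residuals_cal[min(k, n) - 1]
    return y_pred_new - q, y_pred_new + q

# Toy usage: residuals of some predictor on 200 calibration points.
rng = np.random.default_rng(1)
residuals = np.abs(rng.normal(0.0, 1.0, 200))
lo, hi = conformal_interval(residuals, y_pred_new=3.2, alpha=0.1)
print(f"90% prediction interval: [{lo:.2f}, {hi:.2f}]")
```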
- Toward Reliable Human Pose Forecasting with Uncertainty [51.628234388046195]
We develop an open-source library for human pose forecasting, including multiple models, supporting several datasets.
We devise two types of uncertainty in the problem to increase performance and convey better trust.
arXiv Detail & Related papers (2023-04-13T17:56:08Z)
- Model Predictive Control with Gaussian-Process-Supported Dynamical Constraints for Autonomous Vehicles [82.65261980827594]
We propose a model predictive control approach for autonomous vehicles that exploits learned Gaussian processes for predicting human driving behavior.
A multi-mode predictive control approach considers the possible intentions of the human drivers.
arXiv Detail & Related papers (2023-03-08T17:14:57Z)
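A toy one-dimensional sketch of the general pattern in the entry above, assuming scikit-learn is available: a Gaussian process predicts the human-driven car's position with uncertainty, and a pessimistic bound on that prediction constrains the ego vehicle's control choice. This is not the paper's multi-mode predictive controller; every name and number is illustrative.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Fit a GP to recent observations of the human car's longitudinal position
# (time -> position), then predict a short horizon with uncertainty.
t_obs = np.arange(0.0, 5.0, 0.5).reshape(-1, 1)
x_obs = 30.0 + 8.0 * t_obs.ravel() + np.random.default_rng(2).normal(0, 0.3, t_obs.shape[0])
gp = GaussianProcessRegressor(kernel=RBF(length_scale=2.0) + WhiteKernel(0.1), normalize_y=True)
gp.fit(t_obs, x_obs)

horizon = np.arange(5.5, 8.5, 0.5).reshape(-1, 1)
mu, sigma = gp.predict(horizon, return_std=True)

# Naive one-shot "MPC": choose the largest constant acceleration for which the
# ego vehicle stays a safe gap behind a pessimistic bound on the human car.
ego_pos, ego_vel, safe_gap, kappa = 10.0, 9.0, 5.0, 2.0

def ego_positions(a, ts):
    dt = ts - 5.0
    return ego_pos + ego_vel * dt + 0.5 * a * dt ** 2

best_a = None
for a in np.arange(2.0, -3.5, -0.5):          # prefer higher acceleration
    lead_lower = mu - kappa * sigma           # pessimistic human-car position
    if np.all(ego_positions(a, horizon.ravel()) <= lead_lower - safe_gap):
        best_a = a
        break
print("chosen acceleration:", best_a)
```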
- Uncertainty-Aware AB3DMOT by Variational 3D Object Detection [74.8441634948334]
Uncertainty estimation is an effective tool to provide statistically accurate predictions.
In this paper, we propose a Variational Neural Network-based TANet 3D object detector to generate 3D object detections with uncertainty.
arXiv Detail & Related papers (2023-02-12T14:30:03Z)
- Monitoring machine learning (ML)-based risk prediction algorithms in the presence of confounding medical interventions [4.893345190925178]
Performance monitoring of machine learning (ML)-based risk prediction models in healthcare is complicated by the issue of confounding medical interventions (CMI).
A simple approach is to ignore CMI and monitor only the untreated patients, whose outcomes remain unaltered.
We show that valid inference is still possible if one monitors conditional performance and if either conditional exchangeability or time-constant selection bias hold.
arXiv Detail & Related papers (2022-11-17T18:54:34Z)
- Neural Predictive Monitoring under Partial Observability [4.1316328854247155]
We present a learning-based method for predictive monitoring (PM) that produces accurate and reliable reachability predictions despite partial observability (PO).
Our method results in highly accurate reachability predictions and error detection, as well as tight prediction regions with guaranteed coverage.
arXiv Detail & Related papers (2021-08-16T15:08:20Z)
- Predictive Monitoring with Logic-Calibrated Uncertainty for Cyber-Physical Systems [2.3131309703965135]
We develop a novel approach for monitoring sequential predictions generated from Bayesian Recurrent Neural Networks (RNNs).
We propose a new logic named Signal Temporal Logic with Uncertainty (STL-U) to monitor a flowpipe containing an infinite set of uncertain sequences.
We evaluate the proposed approach via experiments with real-world datasets and a simulated smart city case study.
arXiv Detail & Related papers (2020-10-31T23:18:15Z)
- Double Robust Representation Learning for Counterfactual Prediction [68.78210173955001]
We propose a novel scalable method to learn double-robust representations for counterfactual predictions.
We make robust and efficient counterfactual predictions for both individual and average treatment effects.
The algorithm shows competitive performance with the state-of-the-art on real world and synthetic data.
arXiv Detail & Related papers (2020-10-15T16:39:26Z)
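The "double robust" idea in the entry above can be illustrated, outside the paper's representation-learning setting, with the classical AIPW estimator: it stays consistent if either the outcome model or the propensity model is correctly specified. The data and nuisance estimates below are synthetic.

```python
import numpy as np

def aipw_ate(y, t, mu0, mu1, e):
    """Augmented inverse-propensity-weighted (doubly robust) estimate of the
    average treatment effect. y: outcomes, t: treatment indicators (0/1),
    mu0/mu1: predicted outcomes under control/treatment, e: propensity scores."""
    y, t, mu0, mu1, e = map(lambda a: np.asarray(a, float), (y, t, mu0, mu1, e))
    psi1 = mu1 + t * (y - mu1) / e
    psi0 = mu0 + (1 - t) * (y - mu0) / (1 - e)
    return np.mean(psi1 - psi0)

# Toy usage with made-up nuisance estimates; the true effect is 2.
rng = np.random.default_rng(3)
n = 1000
x = rng.normal(size=n)
e_true = 1 / (1 + np.exp(-x))
t = rng.binomial(1, e_true)
y = 2.0 * t + x + rng.normal(size=n)
print(aipw_ate(y, t, mu0=x, mu1=x + 2.0, e=e_true))
```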
- Risk-Sensitive Sequential Action Control with Multi-Modal Human Trajectory Forecasting for Safe Crowd-Robot Interaction [55.569050872780224]
We present an online framework for safe crowd-robot interaction based on risk-sensitive optimal control, wherein the risk is modeled by the entropic risk measure.
Our modular approach decouples the crowd-robot interaction into learning-based prediction and model-based control.
A simulation study and a real-world experiment show that the proposed framework can accomplish safe and efficient navigation while avoiding collisions with more than 50 humans in the scene.
arXiv Detail & Related papers (2020-09-12T02:02:52Z)
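A minimal sketch of the entropic risk measure used in the entry above, assuming trajectory costs are sampled from a forecast of human motion: ranking candidate robot actions by this risk, rather than by the mean cost, penalizes rare but severe outcomes. The sampling and parameter values are illustrative, not the paper's controller.

```python
import numpy as np

def entropic_risk(costs, theta):
    """Entropic risk of a random cost: (1/theta) * log E[exp(theta * cost)].
    theta > 0 penalizes high-cost outcomes more than the plain expectation;
    as theta -> 0 it approaches the mean."""
    costs = np.asarray(costs, float)
    # log-sum-exp style computation for numerical stability.
    m = np.max(theta * costs)
    return (m + np.log(np.mean(np.exp(theta * costs - m)))) / theta

# Rank two candidate robot actions by risk over sampled futures.
rng = np.random.default_rng(4)
costs_a = rng.normal(1.0, 0.2, 500)                      # steady, low-variance cost
costs_b = np.where(rng.random(500) < 0.05, 8.0, 0.8)     # rare but severe collisions
for name, c in [("a", costs_a), ("b", costs_b)]:
    print(name, "mean:", round(c.mean(), 3), "entropic risk:", round(entropic_risk(c, theta=1.0), 3))
```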
- An Uncertainty-based Human-in-the-loop System for Industrial Tool Wear Analysis [68.8204255655161]
We show that uncertainty measures based on Monte-Carlo dropout in the context of a human-in-the-loop system increase the system's transparency and performance.
A simulation study demonstrates that the uncertainty-based human-in-the-loop system increases performance for different levels of human involvement.
arXiv Detail & Related papers (2020-07-14T15:47:37Z)
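A short sketch of the Monte-Carlo dropout deferral pattern from the entry above, assuming PyTorch and an untrained toy classifier; samples whose predictive entropy exceeds a hypothetical threshold are routed to a human.

```python
import torch
import torch.nn as nn

# Tiny classifier with dropout; weights are untrained, purely for illustration.
model = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Dropout(p=0.3), nn.Linear(32, 2))

def mc_dropout_predict(model, x, n_samples=30):
    """Monte-Carlo dropout: keep dropout active at inference, average the
    softmax outputs of several stochastic passes, and use predictive entropy
    as an uncertainty score."""
    model.train()                       # keeps dropout layers stochastic
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(x), dim=-1) for _ in range(n_samples)])
    mean_probs = probs.mean(dim=0)
    entropy = -(mean_probs * mean_probs.clamp_min(1e-9).log()).sum(dim=-1)
    return mean_probs, entropy

# Route high-uncertainty samples (here: random stand-in features) to a human.
x = torch.randn(5, 8)
mean_probs, entropy = mc_dropout_predict(model, x)
needs_human = entropy > 0.5             # hypothetical deferral threshold
print("predicted class:", mean_probs.argmax(dim=-1).tolist())
print("defer to human:", needs_human.tolist())
```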
This list is automatically generated from the titles and abstracts of the papers on this site.