Model Monitoring and Robustness of In-Use Machine Learning Models:
Quantifying Data Distribution Shifts Using Population Stability Index
- URL: http://arxiv.org/abs/2302.00775v1
- Date: Wed, 1 Feb 2023 22:06:31 GMT
- Title: Model Monitoring and Robustness of In-Use Machine Learning Models:
Quantifying Data Distribution Shifts Using Population Stability Index
- Authors: Aria Khademi, Michael Hopka, Devesh Upadhyay
- Abstract summary: We focus on a computer vision example related to autonomous driving and aim at detecting shifts that occur as a result of adding noise to images.
We use the population stability index (PSI) as a measure of presence and intensity of shift and present results of our empirical experiments.
- Score: 2.578242050187029
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Safety goes first. Meeting and maintaining industry safety standards for
robustness of artificial intelligence (AI) and machine learning (ML) models
require continuous monitoring for faults and performance drops. Deep learning
models are widely used in industrial applications, e.g., computer vision, but
the susceptibility of their performance to environmental changes (e.g., noise)
after deployment on the product is now well-known. A major challenge is
detecting data distribution shifts between (i) the development stage of AI and
ML models, i.e., train/validation/test, and (ii) the deployment stage on the
product (i.e., even after 'testing') in the environment. We focus on a computer
vision example related to autonomous driving and aim at detecting shifts that
occur as a result of adding noise to images. We use the population stability
index (PSI) as a measure of the presence and intensity of shift and present
results of our empirical experiments showing promising potential for the PSI.
We further discuss multiple aspects of model monitoring and robustness that
need to be analyzed simultaneously to achieve robustness to industry safety
standards. We propose the need for, and a research direction toward,
categorizations of problem classes and examples where monitoring for
robustness is required, and present challenges and pointers for future work
from a practical perspective.
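As context for the abstract above: PSI is conventionally computed by binning a reference sample, binning a live sample with the same edges, and summing (p - q) * ln(p / q) over bins. The sketch below is illustrative only; the bin count, quantile-based binning, and the shift magnitude are assumptions for the demo, not details taken from this paper.

```python
import numpy as np

def psi(expected, actual, n_bins=10):
    """Population Stability Index between a reference sample
    (e.g., training data) and a live sample (e.g., production data).

    Bin edges come from the reference sample's quantiles so each
    reference bin holds roughly equal mass. A small epsilon guards
    against empty bins (log of zero).
    """
    eps = 1e-6
    edges = np.quantile(expected, np.linspace(0.0, 1.0, n_bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # cover out-of-range live values
    p = np.histogram(expected, bins=edges)[0] / len(expected)
    q = np.histogram(actual, bins=edges)[0] / len(actual)
    p, q = np.clip(p, eps, None), np.clip(q, eps, None)
    return float(np.sum((p - q) * np.log(p / q)))

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, 10_000)        # "development" distribution
noisy = rng.normal(0.0, 1.0, 10_000) + 0.5  # mean-shifted "deployment" data
print(psi(train, train[:5000]))  # same distribution -> PSI near 0
print(psi(train, noisy))         # shifted distribution -> much larger PSI
```

Common rules of thumb (from credit-risk practice, not from this paper) treat PSI below 0.1 as stable and above 0.25 as a major shift; a deployed vision pipeline could apply the same computation per image statistic or per extracted feature.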
Related papers
- Real-Time Anomaly Detection and Reactive Planning with Large Language Models [18.57162998677491]
Foundation models, e.g., large language models (LLMs), trained on internet-scale data possess zero-shot capabilities.
We present a two-stage reasoning framework that incorporates the judgement regarding potential anomalies into a safe control framework.
This enables our monitor to improve the trustworthiness of dynamic robotic systems, such as quadrotors or autonomous vehicles.
arXiv Detail & Related papers (2024-07-11T17:59:22Z)
- Zero-shot Safety Prediction for Autonomous Robots with Foundation World Models [0.12499537119440243]
A world model creates a surrogate world to train a controller and predict safety violations by learning the internal dynamic model of systems.
We propose foundation world models that embed observations into meaningful and causally latent representations.
This enables the surrogate dynamics to directly predict causal future states by leveraging a training-free large language model.
arXiv Detail & Related papers (2024-03-30T20:03:49Z)
- Enhancing Multiple Reliability Measures via Nuisance-extended Information Bottleneck [77.37409441129995]
In practical scenarios where training data is limited, many of the predictive signals in the data can come from biases in data acquisition.
We consider an adversarial threat model under a mutual information constraint to cover a wider class of perturbations in training.
We propose an autoencoder-based training to implement the objective, as well as practical encoder designs to facilitate the proposed hybrid discriminative-generative training.
arXiv Detail & Related papers (2023-03-24T16:03:21Z)
- Predictive Experience Replay for Continual Visual Control and Forecasting [62.06183102362871]
We present a new continual learning approach for visual dynamics modeling and explore its efficacy in visual control and forecasting.
We first propose the mixture world model that learns task-specific dynamics priors with a mixture of Gaussians, and then introduce a new training strategy to overcome catastrophic forgetting.
Our model remarkably outperforms the naive combinations of existing continual learning and visual RL algorithms on DeepMind Control and Meta-World benchmarks with continual visual control tasks.
arXiv Detail & Related papers (2023-03-12T05:08:03Z)
- Safe AI for health and beyond -- Monitoring to transform a health service [51.8524501805308]
We will assess the infrastructure required to monitor the outputs of a machine learning algorithm.
We will present two scenarios with examples of monitoring and updates of models.
arXiv Detail & Related papers (2023-03-02T17:27:45Z)
- A monitoring framework for deployed machine learning models with supply chain examples [2.904613270228912]
We describe (1) a framework for monitoring machine learning models and (2) its implementation for a big data supply chain application.
We use our implementation to study drift in model features, predictions, and performance on three real data sets.
arXiv Detail & Related papers (2022-11-11T14:31:38Z)
- CausalAgents: A Robustness Benchmark for Motion Forecasting using Causal Relationships [8.679073301435265]
We construct a new benchmark for evaluating and improving model robustness by applying perturbations to existing data.
We use these labels to perturb the data by deleting non-causal agents from the scene.
Under non-causal perturbations, we observe a 25-38% relative change in minADE as compared to the original.
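For context, minADE is the standard minimum-over-K average displacement error used in motion forecasting benchmarks. A minimal numpy sketch follows; the array shapes and toy trajectories are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def min_ade(pred_trajs, gt_traj):
    """Minimum Average Displacement Error.

    pred_trajs: (K, T, 2) array of K candidate xy-trajectories over T steps.
    gt_traj:    (T, 2) ground-truth trajectory.
    Returns the smallest mean Euclidean distance to ground truth
    across the K candidates.
    """
    dists = np.linalg.norm(pred_trajs - gt_traj[None], axis=-1)  # (K, T)
    return float(dists.mean(axis=1).min())

gt = np.stack([np.arange(5.0), np.zeros(5)], axis=1)  # straight-line motion
preds = np.stack([gt + 0.1, gt + 1.0])                # two candidates
print(min_ade(preds, gt))  # the smaller-offset candidate determines the score
```

A perturbation benchmark like the one described would compare this metric on original versus perturbed scenes and report the relative change.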
arXiv Detail & Related papers (2022-07-07T21:28:23Z)
- Generative Modeling Helps Weak Supervision (and Vice Versa) [87.62271390571837]
We propose a model fusing weak supervision and generative adversarial networks.
It captures discrete variables in the data alongside the weak supervision derived label estimate.
It is the first approach to enable data augmentation through weakly supervised synthetic images and pseudolabels.
arXiv Detail & Related papers (2022-03-22T20:24:21Z)
- Improving Variational Autoencoder based Out-of-Distribution Detection for Embedded Real-time Applications [2.9327503320877457]
Out-of-distribution (OoD) detection is an emerging approach to the challenge of detecting, in real time, inputs that fall outside the training distribution.
In this paper, we show how we can robustly detect hazardous motion around autonomous driving agents.
Our methods significantly improve detection capabilities of OoD factors to unique driving scenarios, 42% better than state-of-the-art approaches.
Our model also generalized near-perfectly, 97% better than the state-of-the-art across the real-world and simulation driving data sets experimented.
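The summary above does not specify the paper's VAE architecture. As a rough stand-in, the reconstruction-error idea behind autoencoder OoD detection can be illustrated with a linear autoencoder (PCA): inputs far from the learned subspace reconstruct poorly and exceed a threshold calibrated on in-distribution data. Everything below (dimensions, noise level, 99th-percentile threshold) is an assumption for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

# In-distribution data lives near a 2-D subspace of a 10-D space.
basis = rng.normal(size=(2, 10))
train = rng.normal(size=(500, 2)) @ basis + 0.05 * rng.normal(size=(500, 10))

# Linear autoencoder via PCA: keep the top-2 principal components.
mu = train.mean(axis=0)
_, _, Vt = np.linalg.svd(train - mu, full_matrices=False)
V = Vt[:2].T  # (10, 2) shared encoder/decoder weights

def recon_error(x):
    z = (x - mu) @ V          # encode
    x_hat = z @ V.T + mu      # decode
    return np.linalg.norm(x - x_hat, axis=-1)

# Calibrate an alert threshold on in-distribution reconstruction error.
threshold = np.quantile(recon_error(train), 0.99)
in_dist = rng.normal(size=(1, 2)) @ basis   # on the learned subspace
ood = 3.0 * rng.normal(size=(1, 10))        # off-subspace sample
print(recon_error(in_dist)[0], recon_error(ood)[0], threshold)
```

A VAE-based detector replaces the linear projection with a learned nonlinear encoder/decoder and typically scores inputs by reconstruction likelihood rather than plain Euclidean error.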
arXiv Detail & Related papers (2021-07-25T07:52:53Z)
- SafeAMC: Adversarial training for robust modulation recognition models [53.391095789289736]
In communication systems, there are many tasks, like modulation recognition, which rely on Deep Neural Networks (DNNs) models.
These models have been shown to be susceptible to adversarial perturbations, namely imperceptible additive noise crafted to induce misclassification.
We propose to use adversarial training, which consists of fine-tuning the model with adversarial perturbations, to increase the robustness of automatic modulation recognition models.
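The summary above describes adversarial training only in general terms, and the paper's exact setup is not given here. As an illustration of the idea (perturb inputs along the sign of the input gradient, then fine-tune on the perturbed batch, i.e., FGSM-style adversarial training), the sketch below applies it to a toy numpy logistic-regression classifier; the data, step size, and epsilon are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy binary classifier: logistic regression on two 2-D Gaussian classes.
X = np.vstack([rng.normal(size=(200, 2)) + 1.0,
               rng.normal(size=(200, 2)) - 1.0])
y = np.hstack([np.ones(200), np.zeros(200)])
w, b, lr, eps = np.zeros(2), 0.0, 0.1, 0.3

for _ in range(200):
    # FGSM: perturb inputs along sign(dL/dx). For cross-entropy with a
    # linear model, dL/dlogit = p - y and dL/dx = (p - y) * w.
    err = sigmoid(X @ w + b) - y
    X_adv = X + eps * np.sign(err[:, None] * w[None, :])
    # Adversarial training step: fine-tune on the perturbed batch.
    err_adv = sigmoid(X_adv @ w + b) - y
    w -= lr * X_adv.T @ err_adv / len(y)
    b -= lr * err_adv.mean()

acc = ((sigmoid(X @ w + b) > 0.5) == y).mean()
print(f"clean accuracy after adversarial training: {acc:.2f}")
```

For a DNN modulation classifier, the analytic input gradient would be replaced by backpropagation, but the training loop has the same shape.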
arXiv Detail & Related papers (2021-05-28T11:29:04Z)
- An Uncertainty-based Human-in-the-loop System for Industrial Tool Wear Analysis [68.8204255655161]
We show that uncertainty measures based on Monte-Carlo dropout in the context of a human-in-the-loop system increase the system's transparency and performance.
A simulation study demonstrates that the uncertainty-based human-in-the-loop system increases performance for different levels of human involvement.
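Monte-Carlo dropout estimates predictive uncertainty by keeping dropout active at inference time and sampling multiple stochastic forward passes. A toy numpy sketch follows; the random stand-in weights, network size, and dropout rate are illustrative assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny fixed regression "model": one ReLU hidden layer with
# inference-time dropout. Weights stand in for a trained network.
W1, W2 = rng.normal(size=(1, 32)), rng.normal(size=(32, 1))

def mc_dropout_predict(x, n_samples=100, p_drop=0.2):
    """Run n_samples stochastic forward passes with dropout active and
    return the predictive mean and standard deviation (uncertainty)."""
    preds = []
    for _ in range(n_samples):
        h = np.maximum(x @ W1, 0.0)              # ReLU hidden layer
        mask = rng.random(h.shape) > p_drop      # Bernoulli dropout mask
        h = h * mask / (1.0 - p_drop)            # inverted dropout scaling
        preds.append((h @ W2).item())
    return float(np.mean(preds)), float(np.std(preds))

mean, std = mc_dropout_predict(np.array([[0.5]]))
print(f"prediction {mean:.3f} +/- {std:.3f}")
# Human-in-the-loop policy: escalate to an expert when std is high.
```

In a human-in-the-loop system, the standard deviation across passes drives the routing decision: confident predictions are automated, uncertain ones are deferred to a human.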
arXiv Detail & Related papers (2020-07-14T15:47:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.