Predictive change point detection for heterogeneous data
- URL: http://arxiv.org/abs/2305.06630v3
- Date: Fri, 3 May 2024 07:02:09 GMT
- Title: Predictive change point detection for heterogeneous data
- Authors: Anna-Christina Glock, Florian Sobieczky, Johannes Fürnkranz, Peter Filzmoser, Martin Jech
- Abstract summary: "Predict and Compare" is a change point detection framework assisted by a predictive machine learning model.
It outperforms state-of-the-art online CPD routines in terms of false positive rate and out-of-control average run length.
The power of the method is demonstrated in a tribological case study.
- Score: 1.1720726814454114
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A change point detection (CPD) framework assisted by a predictive machine learning model, called "Predict and Compare", is introduced and characterised in relation to other state-of-the-art online CPD routines, which it outperforms in terms of false positive rate and out-of-control average run length. The method focuses on improving standard methods from sequential analysis, such as the CUSUM rule, with respect to these quality measures. This is achieved by replacing the trend estimation functionals typically used, such as the running mean, with more sophisticated predictive models (Predict step), and by comparing their prognosis with the actual data (Compare step). The two models used in the Predict step are the ARIMA model and the LSTM recurrent neural network. However, the framework is formulated in general terms, so as to allow the use of prediction or comparison methods other than those tested here. The power of the method is demonstrated in a tribological case study in which change points separating the run-in, steady-state, and divergent wear phases are detected in the regime of very few false positives.
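To make the two steps concrete, here is a minimal, self-contained sketch of the Predict-and-Compare idea: a predictive model fitted on a reference window produces one-step-ahead prognoses (Predict), and a two-sided CUSUM is run on the standardised prediction residuals (Compare). The AR(1) predictor, window length, and thresholds below are illustrative assumptions standing in for the ARIMA/LSTM models and the tuning described in the paper; this is not the authors' implementation.

```python
import numpy as np

def predict_and_compare_cpd(x, train_len=50, k=0.5, h=5.0):
    """Generic predict-and-compare sketch: AR(1) predictor + CUSUM on residuals."""
    x = np.asarray(x, dtype=float)
    train = x[:train_len]
    # Predict step (stand-in): fit an AR(1) coefficient by least squares.
    phi = np.dot(train[1:], train[:-1]) / np.dot(train[:-1], train[:-1])
    resid = train[1:] - phi * train[:-1]
    mu, sigma = resid.mean(), resid.std() + 1e-12   # residual statistics on the reference window

    g_pos = g_neg = 0.0                             # two-sided CUSUM statistics
    for t in range(train_len, len(x)):
        pred = phi * x[t - 1]                       # one-step-ahead prognosis
        z = (x[t] - pred - mu) / sigma              # Compare step: standardised residual
        g_pos = max(0.0, g_pos + z - k)
        g_neg = max(0.0, g_neg - z - k)
        if g_pos > h or g_neg > h:
            return t                                # first alarm index
    return None                                     # no change point detected

# Toy example: a mean shift injected halfway through an otherwise stationary series.
rng = np.random.default_rng(0)
series = np.concatenate([rng.normal(0.0, 1.0, 100), rng.normal(3.0, 1.0, 100)])
print(predict_and_compare_cpd(series))
```

On this toy series the alarm should fire within a few samples of the injected shift at index 100.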
Related papers
- Automated Assessment of Residual Plots with Computer Vision Models [5.835976576278297]
Plotting residuals is a recommended procedure to diagnose deviations from linear model assumptions.
The presence of structure in residual plots can be tested using the lineup protocol to do visual inference.
This work presents a solution by providing a computer vision model to automate the assessment of residual plots.
arXiv Detail & Related papers (2024-11-01T19:51:44Z)
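As a brief illustration of the lineup protocol referenced above, the sketch below fits a simple linear model and hides the observed residual plot among null panels whose residuals are simulated from the fitted model; a viewer (or, in the paper, a computer vision model) who can single out the true panel has evidence of remaining structure. The data, panel count, and file name are illustrative assumptions, not the paper's setup.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)

# Illustrative data with a mild nonlinearity that a straight-line fit will miss.
x = rng.uniform(0.0, 1.0, 200)
y = 1.0 + 2.0 * x + 0.5 * x**2 + rng.normal(0.0, 0.1, x.size)

# Fit a linear model and compute the observed residuals.
X = np.column_stack([np.ones_like(x), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
fitted = X @ beta
resid = y - fitted
sigma_hat = resid.std(ddof=2)

# Lineup: place the observed residual plot at a random position among
# null panels whose residuals are drawn from the fitted model.
n_panels = 20
true_pos = rng.integers(n_panels)
fig, axes = plt.subplots(4, 5, figsize=(12, 8), sharex=True, sharey=True)
for i, ax in enumerate(axes.ravel()):
    r = resid if i == true_pos else rng.normal(0.0, sigma_hat, x.size)
    ax.scatter(fitted, r, s=5)
    ax.set_title(f"panel {i}")
fig.savefig("lineup.png")
print("observed residuals are in panel", true_pos)
```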
- Advanced POD-Based Performance Evaluation of Classifiers Applied to Human Driver Lane Changing Prediction [2.8084422332394428]
This paper uses a modified probability of detection (POD) approach to assess the reliability of machine learning algorithms.
It yields conservative, averaged estimates, enhancing the reliability of the hit/miss approach to POD.
arXiv Detail & Related papers (2024-08-28T11:39:24Z)
- Calibration of Time-Series Forecasting: Detecting and Adapting Context-Driven Distribution Shift [28.73747033245012]
We introduce a universal calibration methodology for the detection and adaptation of context-driven distribution shifts (CDS).
A novel CDS detector, termed the "residual-based CDS detector" or "Reconditionor", quantifies the model's vulnerability to CDS.
A high Reconditionor score indicates a severe susceptibility, thereby necessitating model adaptation.
arXiv Detail & Related papers (2023-10-23T11:58:01Z)
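The snippet below gives a deliberately generic flavour of a residual-based shift detector: it compares recent forecast residuals against residuals collected under the training context and returns a score whose magnitude signals whether adaptation is warranted. The statistic, threshold, and data are assumptions for illustration and are not the paper's Reconditionor.

```python
import numpy as np

def residual_shift_score(ref_resid, recent_resid):
    """Crude residual-based shift score: standardised mean shift of recent
    residuals relative to the reference residual distribution (illustrative only)."""
    ref_resid = np.asarray(ref_resid, dtype=float)
    recent_resid = np.asarray(recent_resid, dtype=float)
    mu, sigma = ref_resid.mean(), ref_resid.std() + 1e-12
    return abs(recent_resid.mean() - mu) / (sigma / np.sqrt(recent_resid.size))

# A large score suggests the forecaster is struggling under a shifted context,
# which would trigger model adaptation in a calibration pipeline.
rng = np.random.default_rng(2)
ref = rng.normal(0.0, 1.0, 500)       # residuals under the training context
shifted = rng.normal(0.8, 1.0, 50)    # residuals after a context-driven shift
threshold = 3.0                        # illustrative decision threshold
score = residual_shift_score(ref, shifted)
print(score, "-> adapt" if score > threshold else "-> keep model")
```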
- Consensus-Adaptive RANSAC [104.87576373187426]
We propose a new RANSAC framework that learns to explore the parameter space by considering the residuals seen so far via a novel attention layer.
The attention mechanism operates on a batch of point-to-model residuals, and updates a per-point estimation state to take into account the consensus found through a lightweight one-step transformer.
arXiv Detail & Related papers (2023-07-26T08:25:46Z)
- Predicting Ordinary Differential Equations with Transformers [65.07437364102931]
We develop a transformer-based sequence-to-sequence model that recovers scalar ordinary differential equations (ODEs) in symbolic form from irregularly sampled and noisy observations of a single solution trajectory.
Our method is efficiently scalable: after one-time pretraining on a large set of ODEs, we can infer the governing law of a new observed solution in a few forward passes of the model.
arXiv Detail & Related papers (2023-07-24T08:46:12Z)
- MEMO: Test Time Robustness via Adaptation and Augmentation [131.28104376280197]
We study the problem of test time robustification, i.e., using the test input to improve model robustness.
Recent prior works have proposed methods for test time adaptation; however, they each introduce additional assumptions.
We propose a simple approach that can be used in any test setting where the model is probabilistic and adaptable.
arXiv Detail & Related papers (2021-10-18T17:55:11Z)
- Training on Test Data with Bayesian Adaptation for Covariate Shift [96.3250517412545]
Deep neural networks often make inaccurate predictions with unreliable uncertainty estimates.
We derive a Bayesian model that provides for a well-defined relationship between unlabeled inputs under distributional shift and model parameters.
We show that our method improves both accuracy and uncertainty estimation.
arXiv Detail & Related papers (2021-09-27T01:09:08Z)
- Scalable Cross Validation Losses for Gaussian Process Models [22.204619587725208]
We use Polya-Gamma auxiliary variables and variational inference to accommodate binary and multi-class classification.
We find that our method offers fast training and excellent predictive performance.
arXiv Detail & Related papers (2021-05-24T21:01:47Z)
- Robust Correction of Sampling Bias Using Cumulative Distribution Functions [19.551668880584973]
Varying domains and biased datasets can lead to differences between the training and the target distributions.
Current approaches for alleviating this often rely on estimating the ratio of training and target probability density functions.
arXiv Detail & Related papers (2020-10-23T22:13:00Z)
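For context on the density-ratio approach mentioned above, here is a minimal sketch of classical importance weighting for covariate shift: training points are reweighted by an estimated ratio of target to training densities (histogram estimates here, purely as a simplifying assumption). It illustrates the baseline this paper contrasts with, not its CDF-based correction.

```python
import numpy as np

def importance_weights(x_train, x_target, bins=20):
    """Classic density-ratio importance weights w(x) = p_target(x) / p_train(x),
    estimated with simple histograms (an illustrative simplification)."""
    edges = np.histogram_bin_edges(np.concatenate([x_train, x_target]), bins=bins)
    p_train, _ = np.histogram(x_train, bins=edges, density=True)
    p_target, _ = np.histogram(x_target, bins=edges, density=True)
    idx = np.clip(np.digitize(x_train, edges) - 1, 0, len(edges) - 2)
    return p_target[idx] / (p_train[idx] + 1e-12)

# Toy covariate shift: training inputs sampled from a different region than the target.
rng = np.random.default_rng(3)
x_train = rng.normal(0.0, 1.0, 2000)
x_target = rng.normal(1.0, 1.0, 2000)
y_train = np.sin(x_train) + rng.normal(0.0, 0.1, x_train.size)

w = importance_weights(x_train, x_target)
# Importance-weighted estimate of E[y] under the target distribution.
print(np.average(y_train, weights=w))
```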
- Pre-training Is (Almost) All You Need: An Application to Commonsense Reasoning [61.32992639292889]
Fine-tuning of pre-trained transformer models has become the standard approach for solving common NLP tasks.
We introduce a new scoring method that casts a plausibility ranking task in a full-text format.
We show that our method provides a much more stable training phase across random restarts.
arXiv Detail & Related papers (2020-04-29T10:54:40Z)
- Unsupervised Anomaly Detection with Adversarial Mirrored AutoEncoders [51.691585766702744]
We propose a variant of Adversarial Autoencoder which uses a mirrored Wasserstein loss in the discriminator to enforce better semantic-level reconstruction.
We put forward an alternative measure of anomaly score to replace the reconstruction-based metric.
Our method outperforms the current state-of-the-art methods for anomaly detection on several OOD detection benchmarks.
arXiv Detail & Related papers (2020-03-24T08:26:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented (including all content) and is not responsible for any consequences arising from its use.