Enhancing Changepoint Detection: Penalty Learning through Deep Learning Techniques
- URL: http://arxiv.org/abs/2408.00856v3
- Date: Wed, 18 Sep 2024 00:39:43 GMT
- Title: Enhancing Changepoint Detection: Penalty Learning through Deep Learning Techniques
- Authors: Tung L Nguyen, Toby Dylan Hocking
- Abstract summary: This study introduces a novel deep learning method for predicting penalty parameters.
It leads to demonstrably improved changepoint detection accuracy on large, labeled benchmark datasets.
- Score: 2.094821665776961
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Changepoint detection, a technique for identifying significant shifts within data sequences, is crucial in fields such as finance, genomics, and medicine. Dynamic programming changepoint detection algorithms, which rely on a penalty parameter to regulate the number of changepoints, are employed to identify the locations of changepoints within a sequence. To estimate this penalty parameter, previous work uses simple models such as linear or tree-based models. This study introduces a novel deep learning method for predicting penalty parameters, leading to demonstrably improved changepoint detection accuracy on large, labeled benchmark datasets compared to previous methods.
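The penalized dynamic programming formulation the abstract refers to can be illustrated with a minimal sketch: optimal partitioning under a squared-error (mean-shift) cost, where the penalty parameter trades off fit against the number of changepoints. Function names and the toy penalty values are illustrative, not the paper's implementation.

```python
def seg_cost(cs, cs2, s, t):
    """Sum of squared deviations from the mean on data[s:t] (0-based, t exclusive)."""
    n = t - s
    total = cs[t] - cs[s]
    total2 = cs2[t] - cs2[s]
    return total2 - total * total / n

def detect_changepoints(data, pen):
    """Return changepoint indices minimizing total segment cost + pen per changepoint."""
    n = len(data)
    # Cumulative sums of x and x^2 so each segment cost is O(1).
    cs = [0.0] * (n + 1)
    cs2 = [0.0] * (n + 1)
    for i, x in enumerate(data):
        cs[i + 1] = cs[i] + x
        cs2[i + 1] = cs2[i] + x * x
    F = [0.0] * (n + 1)   # F[t] = optimal penalized cost for data[:t]
    last = [0] * (n + 1)  # last[t] = start of final segment in the optimal partition
    for t in range(1, n + 1):
        best, arg = None, 0
        for s in range(t):
            # Penalty is charged once per changepoint, i.e. whenever s > 0.
            c = F[s] + seg_cost(cs, cs2, s, t) + (pen if s > 0 else 0.0)
            if best is None or c < best:
                best, arg = c, s
        F[t], last[t] = best, arg
    # Backtrack to recover changepoint locations.
    cps, t = [], n
    while last[t] > 0:
        cps.append(last[t])
        t = last[t]
    return sorted(cps)
```

A small penalty admits the true split of a two-level signal, while a very large penalty suppresses all changepoints; this is the behavior the learned penalty must calibrate.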
Related papers
- Change points detection in crime-related time series: an on-line fuzzy approach based on a shape space representation
We propose an on-line method for detecting and querying change points in crime-related time series.
The method is able to accurately detect change points at very low computational costs.
arXiv Detail & Related papers (2023-12-18T10:49:03Z)
- Unsupervised Learning of Initialization in Deep Neural Networks via Maximum Mean Discrepancy
We propose an unsupervised algorithm to find good initialization for input data.
We first notice that each parameter configuration in the parameter space corresponds to one particular downstream task of d-way classification.
We then conjecture that the success of learning is directly related to how diverse downstream tasks are in the vicinity of the initial parameters.
arXiv Detail & Related papers (2023-02-08T23:23:28Z)
- Deep learning model solves change point detection for multiple change types
Change point detection aims to catch an abrupt disorder in the data distribution.
We propose an approach that works in the multiple-distributions scenario.
arXiv Detail & Related papers (2022-04-15T09:44:21Z)
- Learning Sinkhorn divergences for supervised change point detection
We present a novel change point detection framework that uses true change point instances as supervision for learning a ground metric.
Our method can be used to learn a sparse metric which can be useful for both feature selection and interpretation.
arXiv Detail & Related papers (2022-02-08T17:11:40Z)
- WATCH: Wasserstein Change Point Detection for High-Dimensional Time Series Data
Change point detection methods have the ability to discover changes in an unsupervised fashion.
We propose WATCH, a novel Wasserstein distance-based change point detection approach.
An extensive evaluation shows that WATCH is capable of accurately identifying change points and outperforming state-of-the-art methods.
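In one dimension, the Wasserstein-1 distance between two equal-size samples reduces to the mean absolute difference of their sorted values, which yields a simple sliding-window change score in the spirit of distance-based detectors like WATCH. This is a hedged sketch of the general idea, not the paper's algorithm; the window size is illustrative.

```python
def wasserstein_1d(a, b):
    """W1 distance between two equal-length 1-D samples (sorted-value coupling)."""
    assert len(a) == len(b)
    return sum(abs(x - y) for x, y in zip(sorted(a), sorted(b))) / len(a)

def change_scores(signal, w):
    """Score each position t by the W1 distance between the w points before and after t."""
    return [wasserstein_1d(signal[t - w:t], signal[t:t + w])
            for t in range(w, len(signal) - w + 1)]
```

On a signal that jumps from 0 to 10, the score peaks exactly at the jump; thresholding these scores gives an unsupervised detector.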
arXiv Detail & Related papers (2022-01-18T16:55:29Z)
- Online Changepoint Detection on a Budget
Changepoints are abrupt variations in the underlying distribution of data.
We propose an online changepoint detection algorithm which compares favorably with offline changepoint detection algorithms.
arXiv Detail & Related papers (2022-01-11T00:20:33Z)
- Transformers Can Do Bayesian Inference
We present Prior-Data Fitted Networks (PFNs).
PFNs leverage in-context learning in large-scale machine learning to approximate a large set of posteriors.
We demonstrate that PFNs can near-perfectly mimic Gaussian processes and also enable efficient Bayesian inference for intractable problems.
arXiv Detail & Related papers (2021-12-20T13:07:39Z)
- Pretrained equivariant features improve unsupervised landmark discovery
We formulate a two-step unsupervised approach that first learns powerful pixel-based features.
Our method produces state-of-the-art results on several challenging landmark detection datasets.
arXiv Detail & Related papers (2021-04-07T05:42:11Z)
- Adaptive Gradient Method with Resilience and Momentum
We propose an Adaptive Gradient Method with Resilience and Momentum (AdaRem).
AdaRem adjusts the parameter-wise learning rate according to whether the direction in which a parameter changed in the past is aligned with the direction of the current gradient.
Our method outperforms previous adaptive learning rate-based algorithms in terms of the training speed and the test error.
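The alignment idea in this summary can be sketched as follows: scale each parameter's step up when the current gradient agrees with a running average of past gradients, and down when it disagrees. This follows the summary only; the names, hyperparameters, and exact update rule are illustrative assumptions, not the paper's AdaRem algorithm.

```python
def adarem_like_step(params, grads, avg, lr=0.1, beta=0.9, eta=0.5):
    """One in-place update; `avg` holds a running (exponential) average of gradients."""
    for i, g in enumerate(grads):
        avg[i] = beta * avg[i] + (1 - beta) * g
        # Alignment is +1 when g points the same way as the gradient history,
        # -1 when it opposes it, 0 when either is zero.
        align = 1.0 if avg[i] * g > 0 else (-1.0 if avg[i] * g < 0 else 0.0)
        params[i] -= lr * (1 + eta * align) * g
    return params
```

With agreement the effective rate becomes lr * (1 + eta); with disagreement it shrinks to lr * (1 - eta), damping oscillation along that coordinate.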
arXiv Detail & Related papers (2020-10-21T14:49:00Z)
- Sequential Changepoint Detection in Neural Networks with Checkpoints
We introduce a framework for online changepoint detection and simultaneous model learning.
It is based on detecting changepoints across time by sequentially performing generalized likelihood ratio tests.
We show improved performance compared to online Bayesian changepoint detection.
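A generalized likelihood ratio statistic for a single mean change under a Gaussian model with known variance can be sketched as below; this illustrates the kind of test the summary refers to, not the paper's exact sequential, checkpoint-based procedure.

```python
def glr_mean_change(x, sigma=1.0):
    """Maximize the GLR statistic for a mean change over all split points k.

    Returns (best statistic, best split index), where the statistic at k is
    (k * (n - k) / n) * (mean(x[:k]) - mean(x[k:]))^2 / sigma^2.
    """
    n = len(x)
    best_stat, best_k = 0.0, None
    for k in range(1, n):
        m1 = sum(x[:k]) / k
        m2 = sum(x[k:]) / (n - k)
        stat = (k * (n - k) / n) * (m1 - m2) ** 2 / sigma ** 2
        if stat > best_stat:
            best_stat, best_k = stat, k
    return best_stat, best_k
```

A change is declared when the maximized statistic exceeds a threshold; in a sequential setting the test is re-run as each new point arrives.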
arXiv Detail & Related papers (2020-10-06T21:49:54Z)
- Change Point Detection in Time Series Data using Autoencoders with a Time-Invariant Representation
Change point detection (CPD) aims to locate abrupt property changes in time series data.
Recent CPD methods have demonstrated the potential of deep learning techniques, but often lack the ability to identify subtler changes in the autocorrelation statistics of the signal.
We employ an autoencoder-based methodology with a novel loss function, through which the autoencoders learn a partially time-invariant representation tailored for CPD.
arXiv Detail & Related papers (2020-08-21T15:03:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.