A Constant-per-Iteration Likelihood Ratio Test for Online Changepoint
Detection for Exponential Family Models
- URL: http://arxiv.org/abs/2302.04743v1
- Date: Thu, 9 Feb 2023 16:24:12 GMT
- Title: A Constant-per-Iteration Likelihood Ratio Test for Online Changepoint
Detection for Exponential Family Models
- Authors: Kes Ward, Gaetano Romano, Idris Eckley, Paul Fearnhead
- Abstract summary: The FOCuS algorithm, introduced for detecting changes in mean in Gaussian data, decreases the per-iteration cost to $O(\log T)$.
We show how we can adaptively perform the maximisation step of the algorithm so that we need only maximise the test statistic over a small subset of these possible locations.
- Score: 1.376408511310322
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Online changepoint detection algorithms that are based on likelihood-ratio
tests have been shown to have excellent statistical properties. However, a
simple online implementation is computationally infeasible as, at time $T$, it
involves considering $O(T)$ possible locations for the change. Recently, the
FOCuS algorithm has been introduced for detecting changes in mean in Gaussian
data that decreases the per-iteration cost to $O(\log T)$. This is possible by
using pruning ideas, which reduce the set of changepoint locations that need to
be considered at time $T$ to approximately $\log T$. We show that if one wishes
to perform the likelihood ratio test for a different one-parameter exponential
family model, then exactly the same pruning rule can be used, and again one
need only consider approximately $\log T$ locations at iteration $T$.
Furthermore, we show how we can adaptively perform the maximisation step of the
algorithm so that we need only maximise the test statistic over a small subset
of these possible locations. Empirical results show that the resulting online
algorithm, which can detect changes under a wide range of models, has a
constant-per-iteration cost on average.
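To make the pruning idea concrete, the following is a minimal one-sided sketch for the Gaussian case (unit variance, known pre-change mean of zero, detecting an increase). The convex-minorant pruning mirrors the rule the abstract describes and keeps roughly $\log T$ candidates per iteration; the paper's adaptive maximisation trick, which avoids scanning even these candidates, is not reproduced, so treat this as an illustration rather than the authors' reference implementation.

```python
def focus_gaussian_increase(xs, threshold):
    """One-sided FOCuS-style sketch: detect an increase in mean for
    unit-variance Gaussian data with known pre-change mean 0."""
    cands = [(0, 0.0)]  # (tau, S_tau): candidate changepoints and cusums
    S, t = 0.0, 0
    for x in xs:
        t += 1
        S += x
        # Prune candidates that can no longer maximise the likelihood
        # ratio for any post-change mean mu > 0: the survivors are the
        # vertices of the lower convex hull of the points (tau, S_tau).
        while len(cands) >= 2:
            (t1, s1), (t2, s2) = cands[-2], cands[-1]
            if (s2 - s1) * (t - t2) >= (S - s2) * (t2 - t1):
                cands.pop()  # dominated by a neighbouring candidate
            else:
                break
        # Likelihood-ratio statistic, maximised over surviving candidates.
        stat = max(max(0.0, S - s) ** 2 / (2.0 * (t - tau))
                   for tau, s in cands)
        cands.append((t, S))  # time t becomes a candidate changepoint
        if stat > threshold:
            return t  # declare a change at time t
    return None
```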
Related papers
- Adaptive $k$-nearest neighbor classifier based on the local estimation of the shape operator [49.87315310656657]
We introduce a new adaptive $k$-nearest neighbours ($kK$-NN) algorithm that explores the local curvature at a sample to adaptively define the neighbourhood size.
Results on many real-world datasets indicate that the new $kK$-NN algorithm yields superior balanced accuracy compared to the established $k$-NN method.
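The summary does not spell out the shape-operator computation, so the sketch below only illustrates the adaptive-$k$ skeleton; the distance-spread curvature proxy and the function name are hypothetical stand-ins, not the paper's estimator.

```python
import numpy as np

def adaptive_knn_predict(X_train, y_train, x, k_min=3, k_max=15):
    """Classify x with a neighbourhood size chosen per query: flatter
    local geometry (by a crude proxy) gets a larger k. y_train must
    hold non-negative integer class labels."""
    d = np.linalg.norm(X_train - x, axis=1)
    order = np.argsort(d)[:k_max]
    dk = d[order]
    # Stand-in for local curvature: coefficient of variation of the
    # distances to the k_max nearest neighbours.
    curvature = dk.std() / (dk.mean() + 1e-12)
    frac = np.clip(1.0 - curvature, 0.0, 1.0)  # flat -> frac near 1
    k = int(round(k_min + frac * (k_max - k_min)))
    return np.bincount(y_train[order[:k]]).argmax()  # majority vote
```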
arXiv Detail & Related papers (2024-09-08T13:08:45Z)
- Online non-parametric likelihood-ratio estimation by Pearson-divergence functional minimization [55.98760097296213]
We introduce a new framework for online non-parametric LRE (OLRE) for the setting where pairs of iid observations $(x_t \sim p, x'_t \sim q)$ are observed over time.
We provide theoretical guarantees for the performance of the OLRE method along with empirical validation in synthetic experiments.
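As a hedged illustration of estimating a likelihood ratio online from streaming pairs, the sketch below fits a linear model on random Fourier features by SGD on the least-squares (Pearson-divergence) objective $\mathbb{E}_q[r^2]/2 - \mathbb{E}_p[r]$, whose population minimiser is $r = p/q$; the class and feature choices are assumptions, not the authors' OLRE construction.

```python
import numpy as np

class OnlineLRE:
    """Online ratio model r(x) = w . phi(x) on random Fourier features."""

    def __init__(self, dim, n_features=100, bandwidth=1.0, lr=0.01, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(scale=1.0 / bandwidth, size=(n_features, dim))
        self.b = rng.uniform(0.0, 2.0 * np.pi, n_features)
        self.w = np.zeros(n_features)
        self.lr = lr

    def _phi(self, x):
        return np.sqrt(2.0 / len(self.b)) * np.cos(self.W @ x + self.b)

    def update(self, x_p, x_q):
        """One SGD step on a fresh pair (x_t ~ p, x'_t ~ q)."""
        phi_p, phi_q = self._phi(x_p), self._phi(x_q)
        grad = (self.w @ phi_q) * phi_q - phi_p  # d/dw [r(x_q)^2/2 - r(x_p)]
        self.w -= self.lr * grad

    def ratio(self, x):
        return max(float(self.w @ self._phi(x)), 0.0)  # estimate of p/q
```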
arXiv Detail & Related papers (2023-11-03T13:20:11Z)
- Modified Step Size for Enhanced Stochastic Gradient Descent: Convergence and Experiments [0.0]
This paper introduces a novel approach to improving the performance of the stochastic gradient descent (SGD) algorithm by incorporating a modified decay step size based on $\frac{1}{\sqrt{t}}$.
The proposed step size integrates a logarithmic term, leading to the selection of smaller values in the final iterations.
To demonstrate the effectiveness of our approach, we conducted numerical experiments on image classification tasks using the FashionMNIST and CIFAR10 datasets.
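The summary names the $\frac{1}{\sqrt{t}}$ base rate and a logarithmic term but not the exact formula; the schedule below is a hypothetical example of such a decay, shown only to illustrate the shape.

```python
import math

def modified_step_size(t, eta0=0.1):
    """Hypothetical decay: a 1/sqrt(t) base rate damped by a logarithmic
    factor, so later iterations take smaller steps. The paper's exact
    schedule may differ."""
    return eta0 / (math.sqrt(t) * math.log(t + math.e))

# Inside an SGD loop (t = 1, 2, ...):  w -= modified_step_size(t) * grad
```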
arXiv Detail & Related papers (2023-09-03T19:21:59Z)
- Fast likelihood-based change point detection [0.949290364986276]
We use a likelihood ratio between two models as a measure for indicating change.
Finding the optimal split can be done in $O(n)$ time, where $n$ is the number of entries since the last change point.
We propose an approximation scheme that yields a $(1 - \epsilon)$-approximation in $O(\epsilon^{-1} \log^2 n)$ time.
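The $O(n)$ scan is easy to see for a Gaussian mean-change model: with prefix sums, the likelihood-ratio gain of every split is available in constant time. A minimal sketch of that exact scan follows; the paper's $(1 - \epsilon)$ approximation scheme, which avoids even this linear pass, is not reproduced here.

```python
import numpy as np

def best_split_gaussian(x):
    """Exact O(n) scan for a unit-variance Gaussian mean change:
    maximise 2*log LR(k) = S_k^2/k + (S_n - S_k)^2/(n - k) - S_n^2/n
    over split points k = 1, ..., n-1 using prefix sums."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    S = np.cumsum(x)
    k = np.arange(1, n)
    gain = S[k - 1] ** 2 / k + (S[-1] - S[k - 1]) ** 2 / (n - k) - S[-1] ** 2 / n
    best = int(np.argmax(gain)) + 1  # change after the first `best` points
    return best, float(gain[best - 1])
```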
arXiv Detail & Related papers (2023-01-21T05:13:35Z)
- Sequential Gradient Descent and Quasi-Newton's Method for Change-Point Analysis [0.348097307252416]
One common approach to detecting change-points is minimizing a cost function over possible numbers and locations of change-points.
This paper introduces a new sequential method (SE) that can be coupled with gradient descent (SeGD) and quasi-Newton's method (SeN) to compute the cost values efficiently.
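A hypothetical sketch of the sequential-updating pattern for a Gaussian mean cost: when a candidate segment gains one observation, warm-start from the previous estimate and take a single gradient step on the new point's negative log-likelihood rather than refitting the segment; the function below is an illustration, not the paper's SeGD.

```python
def segd_extend(theta, x_new, lr=0.05):
    """One warm-started gradient step when a segment grows by one point.
    For a unit-variance Gaussian, the per-point negative log-likelihood
    is (theta - x_new)^2 / 2, so its gradient is theta - x_new."""
    return theta - lr * (theta - x_new)
```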
arXiv Detail & Related papers (2022-10-21T20:30:26Z)
- Best Policy Identification in Linear MDPs [70.57916977441262]
We investigate the problem of best policy identification in discounted linear Markov decision processes in the fixed-confidence setting under a generative model.
The lower bound, obtained as the solution of an intricate non-convex optimization program, can be used as the starting point to devise such algorithms.
arXiv Detail & Related papers (2022-08-11T04:12:50Z)
- Exact Paired-Permutation Testing for Structured Test Statistics [67.71280539312536]
We provide an efficient exact algorithm for the paired-permutation test for a family of structured test statistics.
Our exact algorithm was $10$x faster than the Monte Carlo approximation with $20000$ samples on a common dataset.
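For contrast with the exact algorithm, here is a minimal Monte Carlo paired-permutation test for a mean difference, the kind of baseline the $10$x speed-up is measured against; the statistic and defaults are illustrative assumptions.

```python
import numpy as np

def mc_paired_permutation_test(a, b, n_perm=20000, seed=0):
    """Randomly flip the sign of each paired difference and compare the
    permuted |mean difference| with the observed one. Returns a
    two-sided p-value."""
    rng = np.random.default_rng(seed)
    d = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    observed = abs(d.mean())
    signs = rng.choice([-1.0, 1.0], size=(n_perm, len(d)))
    perm = np.abs((signs * d).mean(axis=1))
    return (1 + np.sum(perm >= observed)) / (n_perm + 1)
```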
arXiv Detail & Related papers (2022-05-03T11:00:59Z)
- Optimal network online change point localisation [73.93301212629231]
We study the problem of online network change point detection.
In this setting, a collection of independent Bernoulli networks is collected sequentially, and the underlying distributions change when a change point occurs.
The goal is to detect the change point as quickly as possible, if it exists, subject to a constraint on the number or probability of false alarms.
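As a generic illustration of this sequential task (not the paper's optimal procedure), a one-sided CUSUM-style monitor on the average edge indicator of each incoming network, assuming a known pre-change edge probability:

```python
def online_network_cusum(adj_stream, p0, threshold):
    """Flag a change once the accumulated excess of the mean edge
    indicator over the pre-change probability p0 crosses a threshold.
    adj_stream yields 0/1 numpy adjacency matrices."""
    s = 0.0
    for t, A in enumerate(adj_stream, start=1):
        s = max(0.0, s + A.mean() - p0)  # one-sided drift statistic
        if s > threshold:
            return t  # declared change point
    return None
```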
arXiv Detail & Related papers (2021-01-14T07:24:39Z)
- Improved Sample Complexity for Incremental Autonomous Exploration in MDPs [132.88757893161699]
We learn the set of $\epsilon$-optimal goal-conditioned policies attaining all states that are incrementally reachable within $L$ steps.
DisCo is the first algorithm that can return an $\epsilon/c_{\min}$-optimal policy for any cost-sensitive shortest-path problem.
arXiv Detail & Related papers (2020-12-29T14:06:09Z)
- Optimistic search strategy: Change point detection for large-scale data via adaptive logarithmic queries [1.3212010735248336]
Change point detection is often formulated as a search for the maximum of a gain function describing improved fits when segmenting the data.
We propose optimistic search strategies that need only $O(\log T)$ evaluations of the gain function, exploiting its specific structure.
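If the gain function is (approximately) unimodal in the split location, its maximiser can be found with logarithmically many evaluations by a ternary-search-style scheme; the sketch below illustrates the logarithmic-query idea rather than the authors' exact strategy.

```python
def optimistic_search(gain, lo, hi):
    """Locate the maximiser of a strictly unimodal gain over the
    integer candidates lo..hi with O(log(hi - lo)) evaluations."""
    while hi - lo > 2:
        m1 = lo + (hi - lo) // 3
        m2 = hi - (hi - lo) // 3
        if gain(m1) < gain(m2):
            lo = m1 + 1  # maximiser lies strictly right of m1
        else:
            hi = m2 - 1  # maximiser lies strictly left of m2
    return max(range(lo, hi + 1), key=gain)
```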
arXiv Detail & Related papers (2020-10-20T11:09:52Z)
- Convergence of Online Adaptive and Recurrent Optimization Algorithms [0.0]
We prove local convergence of several notable descent algorithms used in machine learning.
We adopt an "ergodic" rather than probabilistic viewpoint, working with empirical time averages instead of probability distributions.
arXiv Detail & Related papers (2020-05-12T09:48:52Z)