Quick Change Detection in Discrete-Time in Presence of a Covert Adversary
- URL: http://arxiv.org/abs/2601.20022v1
- Date: Tue, 27 Jan 2026 19:56:31 GMT
- Title: Quick Change Detection in Discrete-Time in Presence of a Covert Adversary
- Authors: Amir Reza Ramtin, Philippe Nain, Don Towsley
- Abstract summary: We study the problem of covert quickest change detection in a discrete-time setting, where a sequence of observations undergoes a distributional change at an unknown time. We characterize the behavior of the average detection delay (ADD) and the average time to false alarm (AT2FA) when the post-change distribution converges to the pre-change distribution as $\gamma \to \infty$. We identify the critical scaling laws governing covert behavior and derive explicit conditions under which an adversary can maintain covertness, defined by ADD = $\Theta(\gamma)$, whereas in the classical setting, ADD grows only as $\mathcal{O}(\log \gamma)$.
- Score: 7.58317340007754
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study the problem of covert quickest change detection in a discrete-time setting, where a sequence of observations undergoes a distributional change at an unknown time. Unlike classical formulations, we consider a covert adversary who has knowledge of the detector's false alarm constraint parameter $\gamma$ and selects a stationary post-change distribution that depends on it, seeking to remain undetected for as long as possible. Building on the theoretical foundations of the CuSum procedure, we rigorously characterize the asymptotic behavior of the average detection delay (ADD) and the average time to false alarm (AT2FA) when the post-change distribution converges to the pre-change distribution as $\gamma \to \infty$. Our analysis establishes exact asymptotic expressions for these quantities, extending and refining classical results that no longer hold in this regime. We identify the critical scaling laws governing covert behavior and derive explicit conditions under which an adversary can maintain covertness, defined by ADD = $\Theta(\gamma)$, whereas in the classical setting, ADD grows only as $\mathcal{O}(\log \gamma)$. In particular, for Gaussian and Exponential models under adversarial perturbations of their respective parameters, we asymptotically characterize ADD as a function of the Kullback--Leibler divergence between the pre- and post-change distributions and $\gamma$.
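To make the setup concrete, here is a minimal Python sketch of CuSum detection under a covert Gaussian mean shift. The shrinking post-change mean $\mu(\gamma) = c/\sqrt{\gamma}$ and the threshold $h = \log \gamma$ are illustrative assumptions (the scaling is borrowed from the continuous-time companion paper listed below), not the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def cusum_detection_delay(gamma, change_point=500, c=1.0, max_steps=2_000_000):
    """CuSum with threshold h = log(gamma) on a Gaussian mean-shift stream.

    Pre-change X_t ~ N(0,1); post-change X_t ~ N(mu,1) with a covert mean
    mu(gamma) = c/sqrt(gamma) (an illustrative scaling, not the paper's exact
    construction). Returns the detection delay: alarm time minus change point.
    """
    mu = c / np.sqrt(gamma)      # covert post-change mean (assumption)
    h = np.log(gamma)            # standard calibration: AT2FA roughly gamma
    w = 0.0                      # CuSum statistic
    for t in range(max_steps):
        x = rng.normal(mu if t >= change_point else 0.0, 1.0)
        llr = mu * x - 0.5 * mu**2            # log-likelihood ratio increment
        w = max(0.0, w + llr)                 # reflected random walk
        if w >= h:
            return max(0, t - change_point)
    return max_steps

for gamma in (1e2, 1e3, 1e4):
    delays = [cusum_detection_delay(gamma) for _ in range(5)]
    print(f"gamma={gamma:.0e}: mean detection delay ~ {np.mean(delays):.0f}")
```

With this scaling the per-sample Kullback--Leibler divergence is $c^2/(2\gamma)$, so the classical first-order approximation ADD $\approx h/\mathrm{KL}$ is of order $\gamma \log \gamma$ rather than $\mathcal{O}(\log \gamma)$, which is why the refined asymptotics of the paper are needed in this shrinking-divergence regime.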
Related papers
- Exact closed-form Gaussian moments of residual layers [0.0]
We show that our method attains hundredfold improvements in KL divergence from Monte Carlo ground truth over a state-of-the-art deterministic inference method. We also give an a priori error bound and a preliminary analysis of feedforward neurons, which have recently attracted general interest.
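As a toy, scalar illustration of the closed-form-moments idea (an assumption-laden sketch, not the paper's construction for full residual layers): the mean of $X + \mathrm{ReLU}(X)$ under Gaussian input has an exact expression, which we can check against Monte Carlo.

```python
import numpy as np
from scipy.stats import norm

def residual_relu_mean(mu, sigma):
    """Exact mean of the residual unit X + relu(X) for X ~ N(mu, sigma^2).

    Uses the classical identity E[max(X,0)] = mu*Phi(mu/sigma) + sigma*phi(mu/sigma).
    """
    a = mu / sigma
    return mu + mu * norm.cdf(a) + sigma * norm.pdf(a)

rng = np.random.default_rng(1)
mu, sigma = 0.3, 1.2
x = rng.normal(mu, sigma, size=1_000_000)
print("closed form :", residual_relu_mean(mu, sigma))
print("Monte Carlo :", np.mean(x + np.maximum(x, 0.0)))
```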
arXiv Detail & Related papers (2026-01-29T20:47:55Z) - Technical note on Sequential Test-Time Adaptation via Martingale-Driven Fisher Prompting [3.5808917363708743]
M-FISHER is a method for sequential distribution shift detection and stable adaptation in streaming data. For detection, we construct an exponential martingale from non-conformity scores and apply Ville's inequality to obtain time-uniform guarantees on false alarm control. For adaptation, we show that Fisher-preconditioned updates of prompt parameters implement natural gradient descent on the distributional manifold.
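The martingale-plus-Ville's-inequality mechanism can be sketched generically. The code below is not M-FISHER itself: a Gaussian likelihood-ratio martingale stands in for the paper's non-conformity-score construction, and the function name is mine.

```python
import numpy as np

rng = np.random.default_rng(2)

def ville_detector(stream, alpha=0.01, mu1=0.5):
    """Time-uniform shift detector built from an exponential martingale.

    Under the null the stream is i.i.d. N(0,1) and M_t, the running product of
    likelihood ratios of N(mu1,1) vs N(0,1), is a nonnegative martingale with
    mean 1. Ville's inequality gives P(sup_t M_t >= 1/alpha) <= alpha, so
    alarming at M_t >= 1/alpha controls false alarms uniformly over time.
    """
    log_m = 0.0
    for t, x in enumerate(stream):
        log_m += mu1 * x - 0.5 * mu1**2          # log-likelihood ratio increment
        if log_m >= np.log(1.0 / alpha):
            return t                              # alarm time
    return None

null_stream = rng.normal(0.0, 1.0, 10_000)
shifted = np.concatenate([rng.normal(0, 1, 300), rng.normal(1.0, 1, 700)])
print("alarm on null stream   :", ville_detector(null_stream))
print("alarm on shifted stream:", ville_detector(shifted))
```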
arXiv Detail & Related papers (2025-10-04T15:31:26Z) - Quickest Change Detection in Continuous-Time in Presence of a Covert Adversary [7.58317340007754]
We investigate the problem of covert quickest change detection in a continuous-time setting, where a Brownian motion experiences a drift change at an unknown time. We show that the adversary achieves maximal damage when the drift scales as $\mu(\gamma) = \Theta(1/\sqrt{\gamma})$, marking a fundamental trade-off between stealth and impact in continuous-time detection systems.
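A rough simulation of this continuous-time trade-off, assuming the standard CuSum threshold calibration $h = \log \gamma$ and an Euler discretization (both my assumptions, not the paper's analysis):

```python
import numpy as np

rng = np.random.default_rng(3)

def brownian_cusum_delay(gamma, c=1.0, dt=0.01, change_time=10.0, horizon=5_000.0):
    """Euler simulation of CuSum on a Brownian path with a covert drift change.

    The drift switches from 0 to mu(gamma) = c/sqrt(gamma) at change_time,
    matching the Theta(1/sqrt(gamma)) scaling quoted above; the threshold
    h = log(gamma) is an assumed calibration. Returns the detection delay
    (alarm time minus change time) in model time units.
    """
    mu = c / np.sqrt(gamma)
    h = np.log(gamma)
    w, t = 0.0, 0.0
    while t < horizon:
        drift = mu if t >= change_time else 0.0
        dx = drift * dt + np.sqrt(dt) * rng.normal()      # Euler increment of X_t
        w = max(0.0, w + mu * dx - 0.5 * mu**2 * dt)      # reflected LLR statistic
        if w >= h:
            return max(0.0, t - change_time)
        t += dt
    return horizon

print("delay at gamma=100:", brownian_cusum_delay(100.0))
```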
arXiv Detail & Related papers (2025-09-22T13:40:53Z) - Non-asymptotic bounds for forward processes in denoising diffusions: Ornstein-Uhlenbeck is hard to beat [49.1574468325115]
This paper presents explicit non-asymptotic bounds on the forward diffusion error in total variation (TV). We parametrise multi-modal data distributions in terms of the distance $R$ to their furthest modes and consider forward diffusions with additive and multiplicative noise.
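For reference, the Ornstein-Uhlenbeck forward process admits an exact Gaussian transition kernel, so its forward marginals can be sampled in closed form; the sketch below illustrates the drift toward the stationary law but does not reproduce the paper's TV bounds.

```python
import numpy as np

rng = np.random.default_rng(4)

def ou_forward_sample(x0, t):
    """Exact sample of the OU forward process dX = -X dt + sqrt(2) dW at time t.

    The transition kernel is Gaussian, X_t | X_0 ~ N(exp(-t) x0, 1 - exp(-2t)),
    so no SDE discretization is needed; X_t converges to N(0,1) as t grows.
    """
    mean = np.exp(-t) * np.asarray(x0)
    std = np.sqrt(1.0 - np.exp(-2.0 * t))
    return mean + std * rng.normal(size=np.shape(x0))

# Bimodal data with modes at distance R from the origin (cf. the parametrisation above).
R = 5.0
x0 = rng.choice([-R, R], size=100_000) + 0.1 * rng.normal(size=100_000)
for t in (0.5, 2.0, 8.0):
    xt = ou_forward_sample(x0, t)
    print(f"t={t}: mean = {xt.mean():+.3f}, var = {xt.var():.3f}")
```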
arXiv Detail & Related papers (2024-08-25T10:28:31Z) - Generalized Gaussian Temporal Difference Error for Uncertainty-aware Reinforcement Learning [0.19418036471925312]
We introduce a novel framework for generalized Gaussian error modeling in deep reinforcement learning. We improve the estimation and mitigation of data-dependent aleatoric uncertainty. Experiments with policy gradient algorithms demonstrate significant performance gains.
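A minimal sketch of fitting temporal-difference errors with a generalized Gaussian likelihood, assuming the standard density $p(x) \propto \exp(-(|x|/\alpha)^\beta)$; the grid-search fit and the Student-t stand-in for TD errors are illustrative choices, not the paper's algorithm.

```python
import numpy as np
from scipy.special import gammaln

def gen_gaussian_nll(residuals, alpha, beta):
    """Mean negative log-likelihood under a zero-mean generalized Gaussian.

    Density: p(x) = beta / (2 alpha Gamma(1/beta)) * exp(-(|x|/alpha)^beta);
    beta = 2 recovers the Gaussian (alpha = sqrt(2) sigma), beta < 2 has heavier tails.
    """
    z = (np.abs(residuals) / alpha) ** beta
    log_norm = np.log(2.0 * alpha) + gammaln(1.0 / beta) - np.log(beta)
    return np.mean(z + log_norm)

rng = np.random.default_rng(5)
td_errors = rng.standard_t(df=3, size=10_000)    # heavy-tailed stand-in for TD errors

# Crude grid search over the shape beta; heavy-tailed residuals tend to prefer beta < 2.
for beta in (1.0, 1.5, 2.0):
    best = min(gen_gaussian_nll(td_errors, a, beta) for a in np.linspace(0.5, 3.0, 50))
    print(f"beta={beta}: best NLL = {best:.4f}")
```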
arXiv Detail & Related papers (2024-08-05T08:12:25Z) - Effect of Random Learning Rate: Theoretical Analysis of SGD Dynamics in Non-Convex Optimization via Stationary Distribution [5.5165579223151795]
We consider a variant of stochastic gradient descent (SGD) with a random learning rate. We show that the distribution of a parameter updated by Poisson SGD converges to a stationary distribution under weak assumptions.
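One simple instantiation of a random learning rate is sketched below with an i.i.d. exponential stepsize (my assumption, echoing the exponential inter-arrival flavor of "Poisson SGD"; the paper's exact scheme may differ).

```python
import numpy as np

rng = np.random.default_rng(6)

def random_lr_sgd(x0=2.0, base_lr=0.05, steps=20_000):
    """SGD on the double-well f(x) = (x^2 - 1)^2 with a random learning rate.

    Each step draws an i.i.d. Exponential(mean=base_lr) stepsize, an assumed
    reading of 'SGD with a random learning rate'. Returns the iterate
    trajectory, whose tail approximates the stationary distribution.
    """
    x = x0
    traj = np.empty(steps)
    for t in range(steps):
        eta = rng.exponential(base_lr)                    # random stepsize
        g = 4.0 * x * (x * x - 1.0) + 0.5 * rng.normal()  # noisy gradient
        x -= eta * np.clip(g, -10.0, 10.0)                # clip for a stable toy demo
        traj[t] = x
    return traj

tail = random_lr_sgd()[10_000:]
print(f"stationary iterate: mean {tail.mean():+.3f}, std {tail.std():.3f}")
```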
arXiv Detail & Related papers (2024-06-23T06:52:33Z) - Stochastic Approximation with Delayed Updates: Finite-Time Rates under Markovian Sampling [73.5602474095954]
We study the non-asymptotic performance of stochastic approximation schemes with delayed updates under Markovian sampling.
Our theoretical findings shed light on the finite-time effects of delays for a broad class of algorithms.
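A minimal sketch of a delayed stochastic approximation update on a Markovian (AR(1)) data stream, with a constant staleness as an illustrative stand-in for the paper's delay model:

```python
import numpy as np
from collections import deque

rng = np.random.default_rng(7)

def delayed_sa(steps=50_000, delay=20, lr=0.01, rho=0.9):
    """Stochastic approximation with stale iterates: x <- x - lr * g(x_{t-delay}, s_t).

    The iterate tracks the stationary mean (zero) of an AR(1) chain s_t, i.e. a
    Markovian data stream; the gradient of 0.5*(x - s_t)^2 is evaluated at an
    iterate that is `delay` steps old, mimicking asynchronous updates.
    """
    x, s = 0.0, 0.0
    stale = deque([x] * (delay + 1), maxlen=delay + 1)  # stale[0] == x_{t-delay}
    tail = []
    for t in range(steps):
        s = rho * s + rng.normal()        # Markovian sample (AR(1), mean 0)
        x -= lr * (stale[0] - s)          # update with a delayed gradient
        stale.append(x)
        if t >= steps // 2:
            tail.append(x)
    return np.mean(tail), np.std(tail)

for d in (0, 20, 100):
    m, sd = delayed_sa(delay=d)
    print(f"delay={d:>3}: tail mean {m:+.3f}, tail std {sd:.3f}")
```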
arXiv Detail & Related papers (2024-02-19T03:08:02Z) - Adaptive Annealed Importance Sampling with Constant Rate Progress [68.8204255655161]
Annealed Importance Sampling (AIS) synthesizes weighted samples from an intractable distribution.
We propose the Constant Rate AIS algorithm and its efficient implementation for $\alpha$-divergences.
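For context, here is plain AIS with a geometric path and a fixed linear temperature schedule; Constant Rate AIS replaces the fixed schedule with an adaptively placed one, which this sketch does not implement.

```python
import numpy as np

rng = np.random.default_rng(8)

def ais_log_weight(n_steps=200, mh_steps=5, step_size=0.5):
    """One AIS run from N(0,1) to an unnormalized Gaussian target.

    Uses the geometric path log p_beta = (1-beta) log p0 + beta log p1 with a
    fixed linear beta schedule. Returns a log importance weight; averaging
    exp(weights) over runs estimates the normalizing-constant ratio Z1/Z0.
    """
    log_p0 = lambda x: -0.5 * x**2                     # N(0,1), up to a constant
    log_p1 = lambda x: -0.5 * ((x - 3.0) / 0.5) ** 2   # N(3, 0.25), unnormalized
    betas = np.linspace(0.0, 1.0, n_steps + 1)
    x = rng.normal()                                   # exact draw from p0
    log_w = 0.0
    for b0, b1 in zip(betas[:-1], betas[1:]):
        log_w += (b1 - b0) * (log_p1(x) - log_p0(x))   # incremental weight
        for _ in range(mh_steps):                      # MH moves targeting p_{b1}
            prop = x + step_size * rng.normal()
            log_acc = ((1 - b1) * (log_p0(prop) - log_p0(x))
                       + b1 * (log_p1(prop) - log_p1(x)))
            if np.log(rng.uniform()) < log_acc:
                x = prop
    return log_w

log_ws = np.array([ais_log_weight() for _ in range(200)])
print("log(Z1/Z0) estimate:", np.log(np.mean(np.exp(log_ws))), "(true value: log 0.5)")
```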
arXiv Detail & Related papers (2023-06-27T08:15:28Z) - E-detectors: a nonparametric framework for sequential change detection [86.15115654324488]
We develop a fundamentally new and general framework for sequential change detection.
Our procedures come with clean, nonasymptotic bounds on the average run length.
We show how to design their mixtures in order to achieve both statistical and computational efficiency.
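As a concrete instance of the framework, the sketch below mixes Shiryaev-Roberts-style e-detectors over a grid of candidate post-change means; the uniform mixture and Gaussian model are illustrative choices, not the paper's full construction.

```python
import numpy as np

rng = np.random.default_rng(9)

def mixture_e_detector(stream, alpha=1e-4, mus=(0.25, 0.5, 1.0, 2.0)):
    """Uniform mixture of Shiryaev-Roberts-style e-detectors for a Gaussian mean shift.

    For each candidate post-change mean mu, R_t = (1 + R_{t-1}) * LR_t(mu), where
    LR_t is the likelihood ratio of N(mu,1) vs N(0,1); each R_t is an e-detector.
    Alarming when the mixture average reaches 1/alpha keeps the average run
    length to false alarm at least 1/alpha, while the mixture removes the need
    to know the true post-change mean.
    """
    mus = np.asarray(mus, dtype=float)
    r = np.zeros_like(mus)
    for t, x in enumerate(stream):
        r = (1.0 + r) * np.exp(mus * x - 0.5 * mus**2)
        if r.mean() >= 1.0 / alpha:
            return t
    return None

stream = np.concatenate([rng.normal(0, 1, 400), rng.normal(0.75, 1, 600)])
print("alarm at t =", mixture_e_detector(stream))  # shortly after the change at t=400
```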
arXiv Detail & Related papers (2022-03-07T17:25:02Z) - Benign Overfitting of Constant-Stepsize SGD for Linear Regression [122.70478935214128]
Inductive biases are central in preventing overfitting empirically.
This work considers this issue in arguably the most basic setting: constant-stepsize SGD for linear regression.
We reflect on a number of notable differences between the algorithmic regularization afforded by (unregularized) SGD in comparison to ordinary least squares.
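A small experiment in this spirit, comparing tail-averaged constant-stepsize SGD with the minimum-norm OLS interpolator on an overparameterized Gaussian design (dimensions and stepsize are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(10)

# Overparameterized linear regression: n samples, d > n features.
n, d = 100, 500
X = rng.normal(size=(n, d)) / np.sqrt(d)
w_star = np.zeros(d)
w_star[:10] = 1.0                                # sparse ground-truth signal
y = X @ w_star + 0.1 * rng.normal(size=n)

def constant_step_sgd(X, y, lr=0.5, epochs=200):
    """Single-sample SGD with a constant stepsize plus tail iterate averaging.

    Averaging the second half of the iterates is the standard device for taming
    the stationary noise induced by a constant stepsize.
    """
    w = np.zeros(X.shape[1])
    avg, count = np.zeros_like(w), 0
    for epoch in range(epochs):
        for i in rng.permutation(len(y)):
            w -= lr * (X[i] @ w - y[i]) * X[i]   # squared-loss gradient step
            if epoch >= epochs // 2:
                avg += w
                count += 1
    return avg / count

w_sgd = constant_step_sgd(X, y)
w_ols = np.linalg.pinv(X) @ y                    # minimum-norm OLS interpolator
X_test = rng.normal(size=(2000, d)) / np.sqrt(d)
for name, w in (("SGD", w_sgd), ("OLS", w_ols)):
    excess = np.mean((X_test @ (w - w_star)) ** 2)
    print(f"{name}: test excess risk = {excess:.4f}")
```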
arXiv Detail & Related papers (2021-03-23T17:15:53Z) - Super fast rates in structured prediction [88.99819200562784]
We show how to leverage the fact that discrete problems essentially predict a discrete output, whereas continuous problems predict a continuous value.
We first illustrate it for predictors based on nearest neighbors, generalizing rates known for binary classification to any discrete problem within the framework of structured prediction.
We then consider kernel ridge regression, where we improve known rates of $n^{-1/4}$ to arbitrarily fast rates, depending on a parameter characterizing the hardness of the problem.
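A self-contained kernel ridge regression sketch on a discrete (binary) problem, illustrating the plug-in mechanism behind such fast rates; the Gaussian kernel and parameter choices are assumptions of this demo.

```python
import numpy as np

rng = np.random.default_rng(11)

def krr_fit_predict(X, y, X_test, lam=1e-3, bandwidth=0.5):
    """Kernel ridge regression with a Gaussian kernel, solved in closed form.

    Fits alpha = (K + n*lam*I)^{-1} y and predicts k(x_test, X) @ alpha. For a
    discrete problem encoded as regression on {-1,+1} labels, the plug-in rule
    sign(f(x)) can converge much faster than f itself, which is the mechanism
    behind the fast rates discussed above.
    """
    def kernel(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)  # pairwise sq. dists
        return np.exp(-d2 / (2.0 * bandwidth**2))
    n = len(y)
    alpha = np.linalg.solve(kernel(X, X) + n * lam * np.eye(n), y)
    return kernel(X_test, X) @ alpha

X = rng.uniform(-1, 1, size=(300, 2))
y = np.sign(X[:, 0])                              # well-separated discrete labels
X_test = rng.uniform(-1, 1, size=(1000, 2))
pred = np.sign(krr_fit_predict(X, y, X_test))
print("plug-in test accuracy:", np.mean(pred == np.sign(X_test[:, 0])))
```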
arXiv Detail & Related papers (2021-02-01T10:50:04Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences arising from its use.