Optimal network online change point localisation
- URL: http://arxiv.org/abs/2101.05477v1
- Date: Thu, 14 Jan 2021 07:24:39 GMT
- Title: Optimal network online change point localisation
- Authors: Yi Yu, Oscar Hernan Madrid Padilla, Daren Wang and Alessandro Rinaldo
- Abstract summary: We study the problem of online network change point detection.
In this setting, a collection of independent Bernoulli networks is collected sequentially, and the underlying distributions change when a change point occurs.
The goal is to detect the change point as quickly as possible, if it exists, subject to a constraint on the number or probability of false alarms.
- Score: 73.93301212629231
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study the problem of online network change point detection. In this
setting, a collection of independent Bernoulli networks is collected
sequentially, and the underlying distributions change when a change point
occurs. The goal is to detect the change point as quickly as possible, if it
exists, subject to a constraint on the number or probability of false alarms.
In this paper, on the detection delay, we establish a minimax lower bound and
two upper bounds based on NP-hard algorithms and polynomial-time algorithms,
i.e.,
\[
\mbox{detection delay} \begin{cases}
\gtrsim \log(1/\alpha) \frac{\max\{r^2/n, \, 1\}}{\kappa_0^2 n \rho}, \\
\lesssim \log(\Delta/\alpha) \frac{\max\{r^2/n, \, \log(r)\}}{\kappa_0^2 n \rho}, & \mbox{with NP-hard algorithms}, \\
\lesssim \log(\Delta/\alpha) \frac{r}{\kappa_0^2 n \rho}, & \mbox{with polynomial-time algorithms},
\end{cases}
\]
where $\kappa_0, n,
\rho, r$ and $\alpha$ are the normalised jump size, network size, entrywise
sparsity, rank sparsity and the overall Type-I error upper bound. All the model
parameters are allowed to vary as $\Delta$, the location of the change point,
diverges. The polynomial-time algorithms are novel procedures that we propose
in this paper, designed for quick detection under two different forms of Type-I
error control. The first is based on controlling the overall probability of a
false alarm when there are no change points, and the second is based on
specifying a lower bound on the expected time of the first false alarm.
Extensive experiments show that, under different scenarios and the
aforementioned forms of Type-I error control, our proposed approaches
outperform state-of-the-art methods.
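As an illustration of the setting only, here is a minimal, hypothetical sketch (not the authors' actual procedure) of an online scan over a stream of Bernoulli networks: at each time step it considers every candidate split of the data seen so far and raises an alarm when the size-scaled Frobenius gap between the pre- and post-split mean adjacency matrices exceeds a threshold. The edge probabilities, threshold, and minimum segment length below are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_network(P):
    """Draw one symmetric Bernoulli network with edge-probability matrix P."""
    n = P.shape[0]
    upper = np.triu(rng.random((n, n)) < P, k=1).astype(float)
    return upper + upper.T  # symmetric adjacency matrix, zero diagonal

def online_detect(stream, threshold, min_seg=5):
    """Scan every candidate split s of the networks seen so far; declare a
    change at time t if the scaled Frobenius gap between the pre- and
    post-split mean adjacency matrices exceeds `threshold`."""
    seen = []
    for t, A in enumerate(stream, start=1):
        seen.append(A)
        for s in range(min_seg, t - min_seg + 1):
            pre = np.mean(seen[:s], axis=0)   # mean adjacency before split
            post = np.mean(seen[s:], axis=0)  # mean adjacency after split
            stat = np.sqrt(s * (t - s) / t) * np.linalg.norm(pre - post)
            if stat > threshold:
                return t  # detection time (alarm raised)
    return None  # no alarm raised

# Toy stream: true change point at Delta = 30 out of T = 60 networks.
n, Delta, T = 30, 30, 60
P0 = np.full((n, n), 0.10)  # pre-change edge probability
P1 = np.full((n, n), 0.30)  # post-change edge probability
stream = [sample_network(P0 if t < Delta else P1) for t in range(T)]
alarm_time = online_detect(stream, threshold=12.0)
print(alarm_time)
```

With a jump of this size, the alarm typically fires within a few networks after the change; the threshold trades off detection delay against the false-alarm rate, mirroring the $\alpha$-dependence in the bounds above.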
Related papers
- Gradient Norm Regularization Second-Order Algorithms for Solving Nonconvex-Strongly Concave Minimax Problems [2.3721580767877257]
We propose an algorithm for solving nonconvex-strongly concave minimax problems.
We show that the proposed algorithm attains an $\epsilon$-accurate solution, with the second-order stationarity measure proved to be bounded from above.
arXiv Detail & Related papers (2024-11-24T09:46:36Z) - Finite-Time Error Bounds for Greedy-GQ [20.51105692499517]
We show that the Greedy-GQ algorithm converges fast, with finite-time error bounds.
Our analysis provides guidance on choosing step-sizes for faster convergence.
arXiv Detail & Related papers (2022-09-06T15:04:57Z) - Best Policy Identification in Linear MDPs [70.57916977441262]
We investigate the problem of best policy identification in discounted linear Markov Decision Processes in the fixed-confidence setting under a generative model.
The lower bound, obtained as the solution of an intricate non-convex optimization program, can be used as the starting point to devise such algorithms.
arXiv Detail & Related papers (2022-08-11T04:12:50Z) - Sharper Convergence Guarantees for Asynchronous SGD for Distributed and
Federated Learning [77.22019100456595]
We analyze a training algorithm for distributed computation workers with varying communication frequency.
In this work, we obtain a tighter convergence rate for asynchronous SGD.
We also show that the heterogeneity term in the rate is affected by the average delay within each worker.
arXiv Detail & Related papers (2022-06-16T17:10:57Z) - Minimax Optimal Quantization of Linear Models: Information-Theoretic
Limits and Efficient Algorithms [59.724977092582535]
We consider the problem of quantizing a linear model learned from measurements.
We derive an information-theoretic lower bound for the minimax risk under this setting.
We show that our method and upper-bounds can be extended for two-layer ReLU neural networks.
arXiv Detail & Related papers (2022-02-23T02:39:04Z) - Inferring Hidden Structures in Random Graphs [13.031167737538881]
We study the two inference problems of detecting and recovering an isolated community of general structure planted in a random graph.
We derive lower bounds for detecting/recovering the structure $\Gamma_k$ in terms of the parameters $(n, k, q)$, as well as certain properties of $\Gamma_k$, and exhibit computationally optimal algorithms that achieve these lower bounds.
arXiv Detail & Related papers (2021-10-05T09:39:51Z) - Distributed Saddle-Point Problems Under Similarity [173.19083235638104]
We establish lower complexity bounds for achieving a given suboptimality $\epsilon > 0$ on master/workers networks.
We then propose algorithms matching the lower bounds on either type of network (up to logarithmic factors).
We assess effectiveness of the proposed algorithms on a robust logistic regression problem.
arXiv Detail & Related papers (2021-07-22T14:25:16Z) - A Momentum-Assisted Single-Timescale Stochastic Approximation Algorithm
for Bilevel Optimization [112.59170319105971]
We propose a new algorithm -- the Momentum-assisted Single-Timescale Stochastic Approximation (MSTSA) -- for tackling bilevel optimization problems.
MSTSA allows us to control the error in the iterations caused by inaccurate solutions to the lower-level subproblem.
arXiv Detail & Related papers (2021-02-15T07:10:33Z) - Optimal $\delta$-Correct Best-Arm Selection for Heavy-Tailed
Distributions [2.2940141855172036]
We consider the problem of identifying the distribution with the maximum mean using a $\delta$-correct algorithm.
Lower bounds for $\delta$-correct algorithms are well known.
We propose a $\delta$-correct algorithm that matches the lower bound as $\delta$ reduces to zero.
arXiv Detail & Related papers (2019-08-24T05:31:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.