Inference for Change Points in High Dimensional Mean Shift Models
- URL: http://arxiv.org/abs/2107.09150v1
- Date: Mon, 19 Jul 2021 20:56:15 GMT
- Title: Inference for Change Points in High Dimensional Mean Shift Models
- Authors: Abhishek Kaul and George Michailidis
- Abstract summary: We consider the problem of constructing confidence intervals for the locations of change points in a high-dimensional mean shift model.
We develop a locally refitted least squares estimator and obtain component-wise and simultaneous rates of estimation of the underlying change points.
The results are established under a high dimensional scaling, allowing for diminishing jump sizes, a diverging number of change points, and subexponential errors.
- Score: 10.307668909650449
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We consider the problem of constructing confidence intervals for the
locations of change points in a high-dimensional mean shift model. To that end,
we develop a locally refitted least squares estimator and obtain component-wise
and simultaneous rates of estimation of the underlying change points. The
simultaneous rate is the sharpest available in the literature by at least a
factor of $\log p$, while the component-wise one is optimal. These results
enable the derivation of limiting distributions. Component-wise distributions are
characterized under both vanishing and non-vanishing jump size regimes, while
joint distributions for any finite subset of change point estimates are
characterized under the latter regime, which also yields asymptotic
independence of these estimates. The combined results are used to construct
asymptotically valid component-wise and simultaneous confidence intervals for
the change point parameters. The results are established under a high
dimensional scaling, allowing for diminishing jump sizes, a diverging number of
change points, and subexponential errors. They are
illustrated on synthetic data and on sensor measurements from smartphones for
activity recognition.
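To make the setting concrete, below is a minimal Python sketch of plain least-squares change-point localization in a simulated high-dimensional mean shift model. It is not the paper's locally refitted estimator or its confidence-interval construction; the function names, dimensions, and jump size are illustrative assumptions only.

```python
import numpy as np

def simulate_mean_shift(T=200, p=500, tau=120, jump=0.8, s=10, seed=0):
    """Simulate x_t = mu_t + e_t with a single mean shift at time tau:
    the first s coordinates of the mean increase by `jump` after tau."""
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((T, p))   # illustrative Gaussian noise
    X[tau:, :s] += jump               # sparse mean shift after the change point
    return X

def ls_change_point(X, trim=10):
    """Least-squares localization of a single change point: choose the split
    minimizing the total within-segment sum of squares."""
    T, _ = X.shape
    best_tau, best_rss = None, np.inf
    for t in range(trim, T - trim):
        rss = (np.sum((X[:t] - X[:t].mean(axis=0)) ** 2)
               + np.sum((X[t:] - X[t:].mean(axis=0)) ** 2))
        if rss < best_rss:
            best_tau, best_rss = t, rss
    return best_tau

X = simulate_mean_shift()
print("estimated change point:", ls_change_point(X))  # expected to be near 120
```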
Related papers
- Semiparametric conformal prediction [79.6147286161434]
Risk-sensitive applications require well-calibrated prediction sets over multiple, potentially correlated target variables.
We treat the scores as random vectors and aim to construct the prediction set accounting for their joint correlation structure.
We report desired coverage and competitive efficiency on a range of real-world regression problems.
arXiv Detail & Related papers (2024-11-04T14:29:02Z) - Adaptive Annealed Importance Sampling with Constant Rate Progress [68.8204255655161]
Annealed Importance Sampling (AIS) synthesizes weighted samples from an intractable distribution.
We propose the Constant Rate AIS algorithm and its efficient implementation for $\alpha$-divergences.
arXiv Detail & Related papers (2023-06-27T08:15:28Z) - Deep Neural Networks for Nonparametric Interaction Models with Diverging
Dimension [6.939768185086753]
We analyze a $k$th-order nonparametric interaction model in both the growing-dimension scenario ($d$ grows with $n$ but at a slower rate) and the high-dimensional regime ($d \gtrsim n$).
We show that, under certain standard assumptions, debiased deep neural networks achieve a minimax optimal rate in terms of $(n, d)$.
arXiv Detail & Related papers (2023-02-12T04:19:39Z) - Density Ratio Estimation via Infinitesimal Classification [85.08255198145304]
We propose DRE-infty, a divide-and-conquer approach to reduce Density ratio estimation (DRE) to a series of easier subproblems.
Inspired by Monte Carlo methods, we smoothly interpolate between the two distributions via an infinite continuum of intermediate bridge distributions.
We show that our approach performs well on downstream tasks such as mutual information estimation and energy-based modeling on complex, high-dimensional datasets.
arXiv Detail & Related papers (2021-11-22T06:26:29Z) - Meta Learning Low Rank Covariance Factors for Energy-Based Deterministic
Uncertainty [58.144520501201995]
Bi-Lipschitz regularization of neural network layers preserves relative distances between data instances in the feature spaces of each layer.
With the use of an attentive set encoder, we propose to meta learn either diagonal or diagonal plus low-rank factors to efficiently construct task specific covariance matrices.
We also propose an inference procedure which utilizes scaled energy to achieve a final predictive distribution.
arXiv Detail & Related papers (2021-10-12T22:04:19Z) - Segmentation of high dimensional means over multi-dimensional change
points and connections to regression trees [1.0660480034605242]
This article provides a new analytically tractable and fully frequentist framework to characterize and implement regression trees.
The connection to regression trees is made by a high dimensional model with dynamic mean vectors over multi-dimensional change axes.
Results are obtained under a high dimensional scaling $s\log^2 p = o(T_wT_h)$, where $p$ is the response dimension, $s$ is a sparsity parameter, and $T_w, T_h$ are sampling periods along the change axes.
arXiv Detail & Related papers (2021-05-20T20:29:48Z) - Multinomial Sampling for Hierarchical Change-Point Detection [0.0]
We propose a multinomial sampling methodology that improves the detection rate and reduces the delay.
Our experiments show results that outperform the baseline method and we also provide an example oriented to a human behavior study.
arXiv Detail & Related papers (2020-07-24T09:18:17Z) - Interpolation and Learning with Scale Dependent Kernels [91.41836461193488]
We study the learning properties of nonparametric ridge-less least squares.
We consider the common case of estimators defined by scale dependent kernels.
arXiv Detail & Related papers (2020-06-17T16:43:37Z) - Inference on the Change Point for High Dimensional Dynamic Graphical
Models [9.74000189600846]
We develop an estimator for the change point parameter for a dynamically evolving graphical model.
It retains sufficient adaptivity against plug-in estimates of the graphical model parameters.
It is illustrated on RNA-sequenced data and their changes between young and older individuals.
arXiv Detail & Related papers (2020-05-19T19:15:32Z) - Optimal Change-Point Detection with Training Sequences in the Large and
Moderate Deviations Regimes [72.68201611113673]
This paper investigates a novel offline change-point detection problem from an information-theoretic perspective.
We assume that the underlying pre- and post-change distributions are not known and can only be learned from the available training sequences.
arXiv Detail & Related papers (2020-03-13T23:39:40Z)