Goodness-of-Fit Test for Mismatched Self-Exciting Processes
- URL: http://arxiv.org/abs/2006.09439v3
- Date: Fri, 12 Feb 2021 16:33:13 GMT
- Title: Goodness-of-Fit Test for Mismatched Self-Exciting Processes
- Authors: Song Wei, Shixiang Zhu, Minghe Zhang, Yao Xie
- Abstract summary: We develop a GOF test for generative models of self-exciting processes by making a new connection between this problem and the classical statistical theory of the quasi-maximum-likelihood estimator (QMLE).
We present a non-parametric, self-normalizing statistic for the GOF test, the Generalized Score (GS) statistic, and explicitly capture the model misspecification when establishing the asymptotic distribution of the GS statistic.
- Score: 18.892845399295254
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently there have been many research efforts in developing generative
models for self-exciting point processes, partly due to their broad
applicability to real-world problems. However, we can rarely quantify how
well a generative model captures the ground-truth process, since the ground
truth is usually unknown. The challenge lies in the fact that generative
models provide, at most, good approximations to the ground truth (e.g.,
through the rich representational power of neural networks) but cannot
coincide with it exactly. We therefore cannot use the classic
goodness-of-fit (GOF) test framework to evaluate their performance. In this
paper, we develop a GOF test for generative models of self-exciting processes
by making a new connection between this problem and the classical statistical
theory of the quasi-maximum-likelihood estimator (QMLE). We present a
non-parametric, self-normalizing statistic for the GOF test, the Generalized
Score (GS) statistic, and explicitly capture the model misspecification when
establishing the asymptotic distribution of the GS statistic. Numerical
simulations and real-data experiments validate our theory and demonstrate the
proposed GS test's good performance.
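To make the setup concrete, below is a minimal, hypothetical sketch of the kind of pipeline the abstract describes: fit an exponential-kernel Hawkes model by maximizing its (quasi-)log-likelihood, then inspect the score (the gradient of the log-likelihood) at the fitted parameters. The model choice, the numerical-gradient score, and all names are illustrative assumptions rather than the paper's implementation; in particular, the paper's GS statistic is self-normalizing, which this toy statistic is not.

```python
# A minimal sketch, assuming an exponential-kernel Hawkes model; not the
# paper's implementation. QMLE maximizes the model log-likelihood even when
# the model is misspecified; the score at the QMLE is then a natural
# building block for a GOF statistic.
import numpy as np
from scipy.optimize import minimize

def hawkes_loglik(params, events, T):
    """Log-likelihood of an exponential Hawkes model on [0, T].

    Intensity: lambda(t) = mu + alpha * beta * sum_{t_i < t} exp(-beta (t - t_i)).
    """
    mu, alpha, beta = params
    if mu <= 0 or alpha < 0 or beta <= 0:
        return -np.inf
    loglik, r, prev = 0.0, 0.0, None  # r tracks sum_j exp(-beta (t_i - t_j))
    for t in events:
        if prev is not None:
            r = np.exp(-beta * (t - prev)) * (r + 1.0)
        loglik += np.log(mu + alpha * beta * r)
        prev = t
    # Compensator (integrated intensity) over [0, T]
    compensator = mu * T + alpha * np.sum(1.0 - np.exp(-beta * (T - events)))
    return loglik - compensator

def fit_qmle(events, T, x0=(0.5, 0.3, 1.0)):
    """QMLE: maximize the (possibly misspecified) model log-likelihood."""
    res = minimize(lambda p: -hawkes_loglik(p, events, T), x0=np.array(x0),
                   method="Nelder-Mead")
    return res.x

def score_statistic(params, events, T, eps=1e-5):
    """Numerical score (gradient of the log-likelihood) at given parameters.
    On the training data at the QMLE this is ~0 by construction; a GOF test
    must probe directions beyond the fitted model."""
    grad = np.zeros(len(params))
    for k in range(len(params)):
        e = np.zeros(len(params)); e[k] = eps
        grad[k] = (hawkes_loglik(params + e, events, T)
                   - hawkes_loglik(params - e, events, T)) / (2 * eps)
    return grad

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy data: a homogeneous Poisson process stands in for observed events.
    T = 100.0
    events = np.sort(rng.uniform(0, T, size=80))
    theta_hat = fit_qmle(events, T)
    print("QMLE parameters (mu, alpha, beta):", theta_hat)
    print("score at the QMLE:", score_statistic(theta_hat, events, T))
```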
Related papers
- On Least Square Estimation in Softmax Gating Mixture of Experts [78.3687645289918]
We investigate the performance of the least squares estimators (LSE) under a deterministic MoE model.
We establish a condition called strong identifiability to characterize the convergence behavior of various types of expert functions.
Our findings have important practical implications for expert selection.
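As a rough illustration of this setting (not the paper's estimator), a softmax-gating mixture-of-experts with linear experts can be fit by least squares via gradient descent; every name and dimension below is a toy assumption.

```python
# A toy sketch: least-squares fitting of a softmax-gating mixture-of-experts
# with linear experts, via gradient descent. Illustrative only.
import torch

torch.manual_seed(0)
n, d, K = 512, 3, 4                              # samples, input dim, experts
X = torch.randn(n, d)
y = torch.sin(X.sum(dim=1))                      # arbitrary regression target

W_gate = torch.randn(d, K, requires_grad=True)   # gating weights
W_exp = torch.randn(d, K, requires_grad=True)    # one linear expert per column
b_exp = torch.zeros(K, requires_grad=True)

opt = torch.optim.Adam([W_gate, W_exp, b_exp], lr=1e-2)
for step in range(2000):
    gate = torch.softmax(X @ W_gate, dim=1)      # (n, K) mixture weights
    experts = X @ W_exp + b_exp                  # (n, K) expert outputs
    pred = (gate * experts).sum(dim=1)           # gated prediction
    loss = ((pred - y) ** 2).mean()              # least-squares objective
    opt.zero_grad(); loss.backward(); opt.step()
print("final mean squared error:", loss.item())
```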
arXiv Detail & Related papers (2024-02-05T12:31:18Z)
- Testing for Overfitting [0.0]
We discuss the overfitting problem and explain why standard asymptotic and concentration results do not hold for evaluation with training data.
We introduce and argue for a hypothesis test by means of which model performance may be evaluated using training data.
arXiv Detail & Related papers (2023-05-09T22:49:55Z)
- Learning to Increase the Power of Conditional Randomization Tests [8.883733362171032]
The model-X conditional randomization test is a generic framework for conditional independence testing.
We introduce novel model-fitting schemes that are designed to explicitly improve the power of model-X tests.
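For orientation, here is a minimal sketch of the model-X CRT mechanics, under the assumption that the conditional law of X given Z is known exactly; the statistic and all names are illustrative, and the paper's learned model-fitting schemes are not shown.

```python
# A minimal model-X conditional randomization test (CRT) sketch, assuming
# X | Z ~ N(0.8 Z, 0.6^2) is known: resample X given Z under the null of
# conditional independence, recompute a simple statistic, report a p-value.
import numpy as np

rng = np.random.default_rng(0)
n = 300
Z = rng.normal(size=n)
X = 0.8 * Z + rng.normal(scale=0.6, size=n)   # draws from the known X | Z model
Y = 0.5 * Z + 0.3 * X + rng.normal(size=n)    # Y truly depends on X given Z

def statistic(x, y, z):
    # Correlation between x and the residual of y after regressing out z.
    resid = y - np.polyval(np.polyfit(z, y, 1), z)
    return abs(np.corrcoef(x, resid)[0, 1])

t_obs = statistic(X, Y, Z)
null = [statistic(0.8 * Z + rng.normal(scale=0.6, size=n), Y, Z)
        for _ in range(500)]                   # resample X | Z under the null
p_value = (1 + sum(t >= t_obs for t in null)) / (1 + len(null))
print("CRT p-value:", p_value)
```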
arXiv Detail & Related papers (2022-07-03T12:29:25Z)
- Model-agnostic out-of-distribution detection using combined statistical tests [15.27980070479021]
We present simple methods for out-of-distribution detection using a trained generative model.
We combine a classical parametric test (Rao's score test) with the recently introduced typicality test.
Despite their simplicity and generality, these methods can be competitive with model-specific out-of-distribution detection algorithms.
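A toy illustration of combining the two signals for a one-dimensional Gaussian "generative model" follows; the statistics below are simplified stand-ins for Rao's score test and the typicality test, with all names assumed for illustration.

```python
# A toy sketch: two OOD signals for a fitted Gaussian model -- a Rao-style
# score statistic (gradient of the log-likelihood w.r.t. a parameter) and a
# typicality statistic (deviation of the batch's average log-likelihood from
# its training-set value). Simplified stand-ins, not the paper's method.
import numpy as np

rng = np.random.default_rng(0)
train = rng.normal(loc=0.0, scale=1.0, size=5000)
mu, sigma = train.mean(), train.std()          # "trained" generative model

def log_lik(x):
    return -0.5 * np.log(2 * np.pi * sigma**2) - (x - mu) ** 2 / (2 * sigma**2)

def score_stat(batch):
    # Rao-style score: mean gradient of log-likelihood w.r.t. mu; ~0 in-dist.
    return abs(np.mean((batch - mu) / sigma**2))

def typicality_stat(batch):
    # |batch average log-likelihood - training average|; small if typical.
    return abs(log_lik(batch).mean() - log_lik(train).mean())

in_dist = rng.normal(0.0, 1.0, size=200)
out_dist = rng.normal(1.5, 1.0, size=200)
for name, batch in [("in-distribution", in_dist), ("shifted", out_dist)]:
    print(name, "score:", round(score_stat(batch), 3),
          "typicality:", round(typicality_stat(batch), 3))
```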
arXiv Detail & Related papers (2022-03-02T13:32:09Z)
- General Greedy De-bias Learning [163.65789778416172]
We propose a General Greedy De-bias learning framework (GGD), which greedily trains the biased models and the base model, in the manner of gradient descent in functional space.
GGD can learn a more robust base model both in the setting of task-specific biased models with prior knowledge and in that of a self-ensemble biased model without prior knowledge.
arXiv Detail & Related papers (2021-12-20T14:47:32Z)
- Optimal regularizations for data generation with probabilistic graphical models [0.0]
Empirically, well-chosen regularization schemes dramatically improve the quality of the inferred models.
We consider the particular case of $L_2$ and $L_1$ regularizations in the Maximum A Posteriori (MAP) inference of generative pairwise graphical models.
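For intuition, a small sketch contrasting $L_1$- and $L_2$-style regularization when inferring the precision matrix of a pairwise Gaussian graphical model; scikit-learn's GraphicalLasso supplies the $L_1$ case, and the ridge shrinkage is a simple stand-in for an $L_2$-type prior, not the paper's procedure.

```python
# A brief sketch (assumptions throughout): regularized inference of a
# pairwise Gaussian graphical model, comparing an L1-regularized estimate
# (graphical lasso) with a simple L2-style shrinkage of the precision matrix.
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
# Ground-truth sparse precision matrix for a 5-node pairwise model.
prec = np.eye(5)
prec[0, 1] = prec[1, 0] = 0.4
prec[2, 3] = prec[3, 2] = -0.3
cov = np.linalg.inv(prec)
X = rng.multivariate_normal(np.zeros(5), cov, size=400)

# L1 regularization (sparsity-inducing prior): graphical lasso.
gl = GraphicalLasso(alpha=0.05).fit(X)

# L2-style regularization: ridge shrinkage of the empirical covariance
# before inverting, a crude stand-in for a Gaussian prior on the parameters.
emp_cov = np.cov(X, rowvar=False)
ridge_prec = np.linalg.inv(emp_cov + 0.05 * np.eye(5))

print("graphical-lasso precision:\n", np.round(gl.precision_, 2))
print("ridge-regularized precision:\n", np.round(ridge_prec, 2))
```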
arXiv Detail & Related papers (2021-12-02T14:45:16Z)
- MEMO: Test Time Robustness via Adaptation and Augmentation [131.28104376280197]
We study the problem of test time robustification, i.e., using the test input to improve model robustness.
Recent prior works have proposed methods for test-time adaptation; however, each of them introduces additional assumptions.
We propose a simple approach that can be used in any test setting where the model is probabilistic and adaptable.
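A condensed sketch of the marginal-entropy-minimization idea at test time follows, with a toy linear "classifier" and Gaussian-noise "augmentations" standing in for a real trained model and augmentation pipeline.

```python
# A condensed sketch under simplifying assumptions: given one test input,
# minimize the entropy of the prediction averaged over augmentations, then
# predict. The model and augmentations here are toy stand-ins.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
model = torch.nn.Linear(16, 3)            # stand-in for a trained classifier
x = torch.randn(16)                       # a single test input

def augment(x, n=8):
    # Toy augmentation: small Gaussian perturbations of the input.
    return x + 0.1 * torch.randn(n, x.shape[0])

opt = torch.optim.SGD(model.parameters(), lr=1e-2)
for _ in range(10):                       # a few adaptation steps at test time
    probs = F.softmax(model(augment(x)), dim=1)
    marginal = probs.mean(dim=0)          # marginal prediction over augmentations
    entropy = -(marginal * marginal.log()).sum()
    opt.zero_grad(); entropy.backward(); opt.step()

print("adapted prediction:", model(x).argmax().item())
```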
arXiv Detail & Related papers (2021-10-18T17:55:11Z)
- Latent Gaussian Model Boosting [0.0]
Tree-boosting shows excellent predictive accuracy on many data sets.
We obtain increased predictive accuracy compared to existing approaches in both simulated and real-world data experiments.
arXiv Detail & Related papers (2021-05-19T07:36:30Z)
- Goal-directed Generation of Discrete Structures with Conditional Generative Models [85.51463588099556]
We introduce a novel approach to directly optimize a reinforcement learning objective, maximizing an expected reward.
We test our methodology on two tasks: generating molecules with user-defined properties and identifying short Python expressions which evaluate to a given target value.
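Below is a schematic sketch of directly maximizing an expected reward with the score-function (REINFORCE) gradient on a toy categorical generator; the reward, baseline, and generator are illustrative assumptions, not the paper's architecture.

```python
# A schematic sketch: maximize an expected reward for a toy discrete
# generator with the score-function (REINFORCE) gradient estimator.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
logits = torch.zeros(5, requires_grad=True)   # categorical generator, 5 tokens
target = 3                                    # reward: 1 if sample matches

opt = torch.optim.Adam([logits], lr=0.1)
for step in range(300):
    dist = torch.distributions.Categorical(logits=logits)
    sample = dist.sample((64,))               # batch of sampled "structures"
    reward = (sample == target).float()       # user-defined property check
    baseline = reward.mean()                  # variance-reducing baseline
    loss = -((reward - baseline) * dist.log_prob(sample)).mean()
    opt.zero_grad(); loss.backward(); opt.step()

print("learned token probabilities:",
      F.softmax(logits, dim=0).detach().numpy().round(2))
```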
arXiv Detail & Related papers (2020-10-05T20:03:13Z)
- Improving Maximum Likelihood Training for Text Generation with Density Ratio Estimation [51.091890311312085]
We propose a new training scheme for auto-regressive sequence generative models which is effective and stable when operating in the large sample spaces encountered in text generation.
Our method stably outperforms Maximum Likelihood Estimation and other state-of-the-art sequence generative models in terms of both quality and diversity.
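For intuition, a minimal density-ratio-estimation sketch: a probabilistic classifier trained to separate data samples from model samples yields an estimate of p_data/p_model. This is the classical classifier-based estimator, assumed here for illustration, not necessarily the paper's training scheme.

```python
# A minimal sketch: density-ratio estimation via a probabilistic classifier.
# A logistic model separating data from model samples yields
# r(x) ~= D(x) / (1 - D(x)) ~= p_data(x) / p_model(x), which can then
# reweight a maximum-likelihood objective. Illustrative assumptions only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
data = rng.normal(0.0, 1.0, size=(2000, 1))     # samples from p_data
model = rng.normal(0.5, 1.2, size=(2000, 1))    # samples from p_model

X = np.vstack([data, model])
y = np.concatenate([np.ones(2000), np.zeros(2000)])
clf = LogisticRegression().fit(np.hstack([X, X**2]), y)   # quadratic features

def density_ratio(x):
    feats = np.hstack([x, x**2])
    d = clf.predict_proba(feats)[:, 1]
    return d / (1.0 - d)

grid = np.linspace(-2, 2, 5).reshape(-1, 1)
print("estimated p_data/p_model on a grid:", np.round(density_ratio(grid), 2))
```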
arXiv Detail & Related papers (2020-07-12T15:31:24Z)
- Good Classifiers are Abundant in the Interpolating Regime [64.72044662855612]
We develop a methodology to compute precisely the full distribution of test errors among interpolating classifiers.
We find that test errors tend to concentrate around a small typical value $\varepsilon^*$, which deviates substantially from the test error of the worst-case interpolating model.
Our results show that the usual style of analysis in statistical learning theory may not be fine-grained enough to capture the good generalization performance observed in practice.
arXiv Detail & Related papers (2020-06-22T21:12:31Z)