Detecting Signs of Model Change with Continuous Model Selection Based on
Descriptive Dimensionality
- URL: http://arxiv.org/abs/2302.12127v1
- Date: Thu, 23 Feb 2023 16:10:06 GMT
- Authors: Kenji Yamanishi and So Hirai
- Abstract summary: We address the issue of detecting changes of models that lie behind a data stream.
We propose a novel methodology for detecting signs of model changes by tracking the rise-up of Ddim in a data stream.
- Score: 21.86268650362205
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We address the issue of detecting changes of the models that lie
behind a data stream. Here a model refers to integer-valued structural
information, such as the number of free parameters in a parametric model.
Specifically, we are concerned with how to detect signs of model changes
earlier than they are actualized. To this end, we employ {\em continuous model
selection} on the basis of the notion of {\em descriptive
dimensionality}~(Ddim), a real-valued model dimensionality designed to
quantify model dimensionality during the model transition period. Continuous
model selection determines the real-valued model dimensionality, in terms of
Ddim, from given data. We propose a novel methodology for detecting signs of
model changes by tracking the rise-up of Ddim in a data stream. We apply this
methodology to detecting signs of changes in the number of clusters in a
Gaussian mixture model and in the order of an autoregressive model. With
synthetic and real data sets, we empirically demonstrate its effectiveness by
showing that it visualizes well how rapidly the model dimensionality moves in
the transition period, and that it raises early warning signals of model
changes earlier than existing methods detect them.
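The abstract's pipeline (fit candidate models on a window of the stream, reduce them to one real-valued dimensionality, and watch that value rise) can be sketched concretely. Note that the paper defines Ddim information-theoretically; the snippet below is only a loose proxy under a stated assumption: it weights candidate autoregressive orders by their BIC scores and averages them into a "soft" real-valued order, so a rise of this value between windows mimics the rise-up signal. The names `ar_bic` and `effective_order`, the candidate range, and the BIC weighting are illustrative choices, not the authors' construction.

```python
import numpy as np

def ar_bic(x, k):
    """Least-squares AR(k) fit on series x; return the model's BIC."""
    x = np.asarray(x, dtype=float)
    n = len(x) - k                                   # number of usable targets
    X = np.column_stack([x[k - j : len(x) - j] for j in range(1, k + 1)])
    y = x[k:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    sigma2 = max(np.mean((y - X @ coef) ** 2), 1e-12)
    loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    return -2.0 * loglik + (k + 1) * np.log(n)       # k coefficients + noise variance

def effective_order(x, max_k=4):
    """BIC-weighted average of candidate AR orders: a real-valued 'soft' order."""
    orders = np.arange(1, max_k + 1)
    bics = np.array([ar_bic(x, k) for k in orders])
    w = np.exp(-0.5 * (bics - bics.min()))           # Schwarz-type model weights
    w /= w.sum()
    return float(w @ orders)

# Stream demo: AR(1) data drifting into AR(2); the soft order rises with it.
rng = np.random.default_rng(0)
x = [0.0, 0.0]
for t in range(600):
    a2 = 0.7 if t >= 300 else 0.0                    # structure changes mid-stream
    x.append(0.2 * x[-1] + a2 * x[-2] + rng.normal())
early = effective_order(x[50:250])                   # window before the change
late = effective_order(x[400:600])                   # window after the change
print(f"soft order before: {early:.2f}, after: {late:.2f}")
```

In the paper's Gaussian-mixture experiments the analogous quantity would weight candidate numbers of clusters rather than AR orders, and the rule for raising an alarm (e.g. flagging a sustained rise of the soft order) is likewise a design choice left open here.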
Related papers
- Latent diffusion models for parameterization and data assimilation of facies-based geomodels [0.0] (2024-06-21)
  Diffusion models are trained to generate new geological realizations from input fields characterized by random noise.
  Latent diffusion models are shown to provide realizations that are visually consistent with samples from geomodeling software.
- Is Model Collapse Inevitable? Breaking the Curse of Recursion by Accumulating Real and Synthetic Data [49.73114504515852] (2024-04-01)
  We show that replacing the original real data with each generation's synthetic data does indeed tend towards model collapse.
  We demonstrate that accumulating the successive generations of synthetic data alongside the original real data avoids model collapse.
- Fascinating Supervisory Signals and Where to Find Them: Deep Anomaly Detection with Scale Learning [11.245813423781415] (2023-05-25)
  We devise novel data-driven supervision for data by introducing a characteristic, scale, as data labels.
  Scales serve as labels attached to transformed representations, thus offering ample labeled data for neural network training.
  This paper further proposes a scale-learning-based anomaly detection method.
- Neural Superstatistics for Bayesian Estimation of Dynamic Cognitive Models [2.7391842773173334] (2022-11-23)
  We develop a simulation-based deep learning method for Bayesian inference, which can recover both time-varying and time-invariant parameters.
  Our results show that the deep learning approach is very efficient in capturing the temporal dynamics of the model.
- Improved Modeling of Persistence Diagram [0.0] (2022-05-22)
  We suggest a modification of the RST (Replicating Statistical Topology) model.
  Using a simulation study, we show that the modified RST improves the performance of the RST in terms of goodness of fit.
- Inverting brain grey matter models with likelihood-free inference: a tool for trustable cytoarchitecture measurements [62.997667081978825] (2021-11-15)
  Characterisation of the brain grey matter cytoarchitecture with quantitative sensitivity to soma density and volume remains an unsolved challenge in dMRI.
  We propose a new forward model, specifically a new system of equations, requiring a few relatively sparse b-shells.
  We then apply modern tools from Bayesian analysis known as likelihood-free inference (LFI) to invert our proposed model.
- Equivalence of Segmental and Neural Transducer Modeling: A Proof of Concept [56.46135010588918] (2021-04-13)
  We prove that the widely used class of RNN-Transducer models and segmental models (direct HMM) are equivalent.
  It is shown that blank probabilities translate into segment length probabilities and vice versa.
- Design of Dynamic Experiments for Black-Box Model Discrimination [72.2414939419588] (2021-02-07)
  Consider a dynamic model discrimination setting where we wish to choose: (i) the best mechanistic, time-varying model and (ii) the best model parameter estimates.
  For rival mechanistic models where we have access to gradient information, we extend existing methods to incorporate a wider range of problem uncertainty.
  We replace these black-box models with Gaussian process surrogate models and thereby extend the model discrimination setting to additionally incorporate rival black-box models.
- Anomaly Detection of Time Series with Smoothness-Inducing Sequential Variational Auto-Encoder [59.69303945834122] (2021-02-02)
  We present a Smoothness-Inducing Sequential Variational Auto-Encoder (SISVAE) model for robust estimation and anomaly detection of time series.
  Our model parameterizes the mean and variance for each time-stamp with flexible neural networks.
  We show the effectiveness of our model on both synthetic datasets and public real-world benchmarks.
- Data from Model: Extracting Data from Non-robust and Robust Models [83.60161052867534] (2020-07-13)
  This work explores the reverse process of generating data from a model, attempting to reveal the relationship between the data and the model.
  We repeat the process of Data to Model (DtM) and Data from Model (DfM) in sequence and explore the loss of feature mapping information.
  Our results show that the accuracy drop is limited even after multiple sequences of DtM and DfM, especially for robust models.
- Predicting Multidimensional Data via Tensor Learning [0.0] (2020-02-11)
  We develop a model that retains the intrinsic multidimensional structure of the dataset.
  To estimate the model parameters, an Alternating Least Squares algorithm is developed.
  The proposed model is able to outperform benchmark models present in the forecasting literature.
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.