Negativity as a resource for memory reduction in stochastic process modeling
- URL: http://arxiv.org/abs/2406.17292v1
- Date: Tue, 25 Jun 2024 05:42:15 GMT
- Title: Negativity as a resource for memory reduction in stochastic process modeling
- Authors: Kelvin Onggadinata, Andrew Tanggara, Mile Gu, Dagomir Kaszlikowski
- Abstract summary: We consider a hypothetical generalization of hidden Markov models that allow for negative quasi-probabilities.
We show that under the collision entropy measure of information, the minimal memory of such models can equal the excess entropy.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In stochastic modeling, the excess entropy -- the mutual information shared between a process's past and future -- represents the fundamental lower bound of the memory needed to simulate its dynamics. However, this bound cannot be saturated by either classical machines or their enhanced quantum counterparts. Simulating a process fundamentally requires us to store more information in the present than what is shared between past and future. Here we consider a hypothetical generalization of hidden Markov models beyond classical and quantum models -- n-machines -- that allow for negative quasi-probabilities. We show that under the collision entropy measure of information, the minimal memory of such models can equal the excess entropy. Our results hint at negativity as a necessary resource for memory-advantaged stochastic simulation -- mirroring similar interpretations in various other quantum information tasks.
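To make the quantities above concrete, here is a minimal numerical sketch (not the paper's construction): a toy hidden-Markov-style generator whose transition weights include a negative quasi-probability, yet which assigns a valid, non-negative probability to every output word, with memory scored by the collision (Renyi-2) entropy of its internal state weights. All matrices and numbers below are illustrative assumptions.

```python
import numpy as np
from itertools import product

# One matrix per output symbol; T[x][j, i] is the quasi-weight of emitting x
# while moving from internal state i to internal state j. Note the negative entry.
T = {
    0: np.array([[0.6, 0.3], [0.1, -0.1]]),
    1: np.array([[0.2, 0.5], [0.1,  0.3]]),
}
pi = np.array([0.8, 0.2])          # stationary vector of T[0] + T[1], sums to 1
ones = np.ones(2)

def word_probability(word):
    """P(x_1 ... x_n) = 1^T T[x_n] ... T[x_1] pi."""
    v = pi.copy()
    for x in word:
        v = T[x] @ v
    return float(ones @ v)

# Despite the negative weight, every short word receives a valid probability.
for n in range(1, 4):
    probs = [word_probability(w) for w in product((0, 1), repeat=n)]
    assert all(p >= -1e-12 for p in probs) and abs(sum(probs) - 1.0) < 1e-9

def collision_entropy(q):
    """Renyi-2 entropy H_2(q) = -log2(sum_i q_i^2), the memory measure used above."""
    return -np.log2(np.sum(np.asarray(q) ** 2))

shannon = -np.sum(pi * np.log2(pi))
print(f"Shannon memory of internal weights: {shannon:.3f} bits")
print(f"collision-entropy memory (H_2):     {collision_entropy(pi):.3f} bits")
```

Since the collision entropy never exceeds the Shannon entropy, this memory measure is the one under which, as the abstract states, an optimal quasi-probabilistic model can reach the excess entropy; the toy above only illustrates the bookkeeping, not that result.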
Related papers
- Quantum Latent Diffusion Models [65.16624577812436]
We propose a potential version of a quantum diffusion model that leverages the established idea of classical latent diffusion models.
This involves using a traditional autoencoder to compress images, followed by operations with variational circuits in the latent space.
The results demonstrate an advantage of the quantum version, as evidenced by better metrics for the images it generates.
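For intuition, here is a toy sketch of that pipeline, assuming PyTorch and PennyLane; the layer sizes, circuit, and single training step are placeholders rather than the authors' architecture.

```python
import torch
import torch.nn as nn
import pennylane as qml

LATENT = 4  # latent dimension = number of qubits (illustrative choice)

encoder = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 64), nn.ReLU(), nn.Linear(64, LATENT))
decoder = nn.Sequential(nn.Linear(LATENT, 64), nn.ReLU(), nn.Linear(64, 28 * 28))

dev = qml.device("default.qubit", wires=LATENT)

@qml.qnode(dev, interface="torch")
def variational_circuit(latent, weights):
    # Angle-encode the classical latent vector, then apply trainable rotations
    # and an entangling layer; read out one expectation value per qubit.
    for i in range(LATENT):
        qml.RY(latent[i], wires=i)
    for i in range(LATENT):
        qml.RY(weights[i], wires=i)
    for i in range(LATENT - 1):
        qml.CNOT(wires=[i, i + 1])
    return [qml.expval(qml.PauliZ(i)) for i in range(LATENT)]

weights = torch.randn(LATENT, requires_grad=True)
image = torch.rand(1, 1, 28, 28)                   # stand-in for one training image

z = encoder(image)                                 # classical compression
z_q = torch.stack(list(variational_circuit(z[0], weights))).float()
reconstruction = decoder(z_q.unsqueeze(0))         # decode the circuit outputs
loss = nn.functional.mse_loss(reconstruction, image.flatten(1))
loss.backward()                                    # gradients reach both networks and the circuit
```

Whether the quantum latent step helps, as the paper reports, depends on the circuit and data; the sketch only shows how the two stages compose and stay end-to-end differentiable.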
arXiv Detail & Related papers (2025-01-19T21:24:02Z) - Memory-minimal quantum generation of stochastic processes: spectral invariants of quantum hidden Markov models [0.0]
We identify spectral invariants of a process that can be calculated from any model that generates it.
We show that the bound is raised quadratically when we restrict to classical operations.
We demonstrate that the classical bound can be violated by quantum models.
arXiv Detail & Related papers (2024-12-17T11:30:51Z) - Causal Estimation of Memorisation Profiles [58.20086589761273]
Understanding memorisation in language models has practical and societal implications.
Memorisation is the causal effect of training with an instance on the model's ability to predict that instance.
This paper proposes a new, principled, and efficient method to estimate memorisation based on the difference-in-differences design from econometrics.
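As a hedged illustration of the difference-in-differences idea (with entirely synthetic numbers and hypothetical variable names), memorisation of an instance can be estimated by comparing the loss change on instances included in a training step against the loss change on held-out instances:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic per-instance losses measured at two checkpoints: before and after a
# training step that includes the "treated" instances but not the "control" ones.
pre_treated, post_treated = rng.normal(2.0, 0.1, 500), rng.normal(1.2, 0.1, 500)
pre_control, post_control = rng.normal(2.0, 0.1, 500), rng.normal(1.8, 0.1, 500)

treated_drop = pre_treated.mean() - post_treated.mean()   # includes general learning
control_drop = pre_control.mean() - post_control.mean()   # general learning only
memorisation = treated_drop - control_drop                # difference-in-differences
print(f"estimated memorisation effect: {memorisation:.2f} nats of extra loss reduction")
```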
arXiv Detail & Related papers (2024-06-06T17:59:09Z) - Neural Likelihood Approximation for Integer Valued Time Series Data [0.0]
We construct a neural likelihood approximation that can be trained using unconditional simulation of the underlying model.
We demonstrate our method by performing inference on a number of ecological and epidemiological models.
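A rough sketch of that idea, assuming PyTorch and a made-up integer-valued simulator: a network is fitted to the conditional distribution of the next count using only unconditionally simulated trajectories, and the trained surrogate then serves as an approximate likelihood.

```python
import torch
import torch.nn as nn

def simulate(beta, steps=20):
    """Toy integer-valued process (a stand-in for an ecological/epidemic model)."""
    x = torch.tensor(5.0)
    path = []
    for _ in range(steps):
        x = torch.poisson(beta * x + 1.0)
        path.append(x)
    return torch.stack(path)

# Surrogate: (parameter, previous count) -> Poisson rate for the next count, so the
# product of these conditionals approximates the trajectory likelihood.
net = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 1), nn.Softplus())
opt = torch.optim.Adam(net.parameters(), lr=1e-2)

for _ in range(200):
    beta = torch.rand(()) * 0.9 + 0.1                    # draw a parameter from a prior
    path = simulate(beta)
    prev, nxt = path[:-1], path[1:]
    inp = torch.stack([beta * torch.ones_like(prev), prev], dim=-1)
    rate = net(inp).squeeze(-1) + 1e-6
    nll = (rate - nxt * torch.log(rate)).mean()          # Poisson NLL up to a constant
    opt.zero_grad(); nll.backward(); opt.step()
# The trained net can now score observed count data at any candidate parameter value.
```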
arXiv Detail & Related papers (2023-10-19T07:51:39Z) - Capturing dynamical correlations using implicit neural representations [85.66456606776552]
We develop an artificial intelligence framework which combines a neural network trained to mimic simulated data from a model Hamiltonian with automatic differentiation to recover unknown parameters from experimental data.
In doing so, we illustrate the ability to build and train a differentiable model only once, which then can be applied in real-time to multi-dimensional scattering data.
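A minimal two-stage sketch of that workflow, with a made-up one-parameter "simulator" standing in for the model-Hamiltonian calculation (names and shapes are assumptions, not the paper's framework):

```python
import torch
import torch.nn as nn

q = torch.linspace(0.1, 3.0, 64)                    # measurement grid

def toy_simulator(J):
    """Stand-in for an expensive simulation mapping a coupling J to a scattering curve."""
    return torch.exp(-((q - J) ** 2))

# Stage 1 (done once): train a differentiable surrogate on simulated data.
surrogate = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 64))
opt = torch.optim.Adam(surrogate.parameters(), lr=1e-2)
for _ in range(500):
    J = torch.rand(32, 1) * 2.5 + 0.2
    target = torch.stack([toy_simulator(j[0]) for j in J])
    loss = ((surrogate(J) - target) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# Stage 2 (real-time): recover the unknown parameter from "experimental" data by
# automatic differentiation through the frozen surrogate.
data = toy_simulator(torch.tensor(1.7)) + 0.01 * torch.randn(64)   # true J = 1.7
J_hat = torch.tensor([1.0], requires_grad=True)
fit = torch.optim.Adam([J_hat], lr=0.05)
for _ in range(300):
    loss = ((surrogate(J_hat) - data) ** 2).mean()
    fit.zero_grad(); loss.backward(); fit.step()
print("recovered parameter estimate:", round(J_hat.item(), 2))
```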
arXiv Detail & Related papers (2023-04-08T07:55:36Z) - Implementing quantum dimensionality reduction for non-Markovian stochastic simulation [0.5269923665485903]
We implement memory-efficient quantum models for a family of non-Markovian processes using a photonic setup.
We show that with a single qubit of memory our implemented quantum models can attain higher precision than possible with any classical model of the same memory dimension.
arXiv Detail & Related papers (2022-08-26T15:54:47Z) - Stochastic Parameterizations: Better Modelling of Temporal Correlations using Probabilistic Machine Learning [1.5293427903448025]
We show that by using a physically-informed recurrent neural network within a probabilistic framework, our model for the Lorenz 96 atmospheric simulation is competitive.
This is due to a superior ability to model temporal correlations compared to standard first-order autoregressive schemes.
We evaluate across a number of metrics from the literature, but also discuss how the probabilistic metric of likelihood may be a unifying choice for future climate models.
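The key ingredients can be sketched as follows, assuming PyTorch and with a simple autoregressive toy series standing in for Lorenz 96 subgrid tendencies: a recurrent network predicts a distribution (mean and variance) for the next value and is trained by maximising the likelihood that the summary above highlights as an evaluation metric.

```python
import torch
import torch.nn as nn

def toy_series(n=64, length=100, phi=0.9):
    """Temporally correlated AR(1) data standing in for subgrid tendencies."""
    x = torch.zeros(n, length, 1)
    for t in range(1, length):
        x[:, t] = phi * x[:, t - 1] + 0.3 * torch.randn(n, 1)
    return x

rnn = nn.GRU(input_size=1, hidden_size=32, batch_first=True)
head = nn.Linear(32, 2)                      # predicts (mean, log-variance)
opt = torch.optim.Adam(list(rnn.parameters()) + list(head.parameters()), lr=1e-2)

for _ in range(200):
    x = toy_series()
    h, _ = rnn(x[:, :-1])                    # condition on the past
    mu, logvar = head(h).chunk(2, dim=-1)
    target = x[:, 1:]
    # Negative Gaussian log-likelihood: the probabilistic metric discussed above.
    nll = 0.5 * (logvar + (target - mu) ** 2 / logvar.exp()).mean()
    opt.zero_grad(); nll.backward(); opt.step()
```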
arXiv Detail & Related papers (2022-03-28T14:51:42Z) - MINIMALIST: Mutual INformatIon Maximization for Amortized Likelihood Inference from Sampled Trajectories [61.3299263929289]
Simulation-based inference enables learning the parameters of a model even when its likelihood cannot be computed in practice.
One class of methods uses data simulated with different parameters to infer an amortized estimator for the likelihood-to-evidence ratio.
We show that this approach can be formulated in terms of mutual information between model parameters and simulated data.
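A hedged sketch of amortized likelihood-to-evidence ratio estimation (the simulator, prior, and network are toy assumptions): a classifier is trained to distinguish jointly sampled parameter-data pairs from shuffled ones, its logit approximates the log ratio, and averaging that logit over joint samples estimates the mutual information referred to above.

```python
import torch
import torch.nn as nn

def simulate(theta):
    return theta + 0.5 * torch.randn_like(theta)     # toy simulator: x ~ N(theta, 0.25)

net = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for _ in range(1000):
    theta = torch.randn(256, 1)                      # prior draws
    x = simulate(theta)
    joint = torch.cat([theta, x], dim=1)             # dependent pairs, label 1
    marginal = torch.cat([theta, x[torch.randperm(256)]], dim=1)  # shuffled, label 0
    logits = net(torch.cat([joint, marginal]))
    labels = torch.cat([torch.ones(256, 1), torch.zeros(256, 1)])
    loss = bce(logits, labels)
    opt.zero_grad(); loss.backward(); opt.step()

# The classifier logit approximates log p(x|theta)/p(x); its mean over joint
# samples is an estimate of the parameter-data mutual information.
theta = torch.randn(4096, 1); x = simulate(theta)
log_r = net(torch.cat([theta, x], dim=1))
print("estimated mutual information (nats):", log_r.mean().item())
```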
arXiv Detail & Related papers (2021-06-03T12:59:16Z) - Quantum coarse-graining for extreme dimension reduction in modelling stochastic temporal dynamics [0.0]
Coarse-graining in quantum state space drastically reduces the requisite memory dimension for modelling temporal dynamics.
In contrast to classical coarse-graining, this compression is not based on temporal resolution, and brings memory-efficient modelling within reach of present quantum technologies.
arXiv Detail & Related papers (2021-05-14T13:47:21Z) - Improving the Reconstruction of Disentangled Representation Learners via Multi-Stage Modeling [54.94763543386523]
Current autoencoder-based disentangled representation learning methods achieve disentanglement by penalizing the (aggregate) posterior to encourage statistical independence of the latent factors.
We present a novel multi-stage modeling approach where the disentangled factors are first learned using a penalty-based disentangled representation learning method.
Then, the low-quality reconstruction is improved with another deep generative model that is trained to model the missing correlated latent variables.
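A compressed sketch of the two stages, assuming PyTorch; all sizes, penalties, and the choice of a conditional VAE for the second stage are illustrative assumptions rather than the paper's exact models.

```python
import torch
import torch.nn as nn

D, Z1, Z2, BETA = 784, 8, 16, 8.0
data = torch.rand(128, D)                      # stand-in batch of flattened images

# Stage 1: penalty-based disentangled representation (a large BETA encourages
# independent factors at the cost of blurry reconstructions).
enc1 = nn.Sequential(nn.Linear(D, 256), nn.ReLU(), nn.Linear(256, 2 * Z1))
dec1 = nn.Sequential(nn.Linear(Z1, 256), nn.ReLU(), nn.Linear(256, D))
opt1 = torch.optim.Adam(list(enc1.parameters()) + list(dec1.parameters()), lr=1e-3)
for _ in range(100):
    mu, logvar = enc1(data).chunk(2, dim=-1)
    z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
    kl = -0.5 * (1 + logvar - mu ** 2 - logvar.exp()).sum(-1).mean()
    loss = ((dec1(z) - data) ** 2).sum(-1).mean() + BETA * kl
    opt1.zero_grad(); loss.backward(); opt1.step()

# Stage 2: freeze stage 1 and train a second latent-variable model (here a small
# conditional VAE) to supply the correlated detail the disentangled code misses.
enc2 = nn.Sequential(nn.Linear(2 * D, 256), nn.ReLU(), nn.Linear(256, 2 * Z2))
dec2 = nn.Sequential(nn.Linear(D + Z2, 256), nn.ReLU(), nn.Linear(256, D))
opt2 = torch.optim.Adam(list(enc2.parameters()) + list(dec2.parameters()), lr=1e-3)
for _ in range(100):
    with torch.no_grad():
        mu1, _ = enc1(data).chunk(2, dim=-1)
        coarse = dec1(mu1)                     # blurry stage-1 reconstruction
    mu2, logvar2 = enc2(torch.cat([data, coarse], dim=-1)).chunk(2, dim=-1)
    z2 = mu2 + torch.randn_like(mu2) * (0.5 * logvar2).exp()
    refined = dec2(torch.cat([coarse, z2], dim=-1))
    kl2 = -0.5 * (1 + logvar2 - mu2 ** 2 - logvar2.exp()).sum(-1).mean()
    loss = ((refined - data) ** 2).sum(-1).mean() + kl2
    opt2.zero_grad(); loss.backward(); opt2.step()
```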
arXiv Detail & Related papers (2020-10-25T18:51:15Z) - Multiplicative noise and heavy tails in stochastic optimization [62.993432503309485]
Stochastic optimization is central to modern machine learning, but its precise role in this success remains unclear.
We show that multiplicative noise commonly arises in the parameters from the variance of discrete stochastic updates.
A detailed analysis examines the key factors involved, including step size and data, and similar behaviour is observed across state-of-the-art neural network models.
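As a rough illustration of multiplicative parameter noise (a toy Kesten-type recursion, not the paper's analysis): single-sample SGD on a one-dimensional least-squares problem multiplies the parameter by the random factor (1 - eta * a^2) at every step, and the resulting iterates come out markedly heavier-tailed than a Gaussian.

```python
import numpy as np

rng = np.random.default_rng(1)
eta, steps = 0.5, 200_000
a = rng.normal(size=steps)                 # random inputs
b = rng.normal(size=steps)                 # random targets
w, trace = 0.0, np.empty(steps)
for t in range(steps):
    grad = a[t] * (a[t] * w - b[t])        # gradient of 0.5 * (a*w - b)^2
    w -= eta * grad                        # i.e. w <- (1 - eta*a^2) * w + eta*a*b
    trace[t] = w

tail = trace[steps // 2:]                  # discard burn-in
excess_kurtosis = np.mean((tail - tail.mean()) ** 4) / np.var(tail) ** 2 - 3
print(f"excess kurtosis of SGD iterates: {excess_kurtosis:.1f} (a Gaussian gives ~0)")
```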
arXiv Detail & Related papers (2020-06-11T09:58:01Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.