Neural Inference of Gaussian Processes for Time Series Data of Quasars
- URL: http://arxiv.org/abs/2211.10305v1
- Date: Thu, 17 Nov 2022 13:01:26 GMT
- Title: Neural Inference of Gaussian Processes for Time Series Data of Quasars
- Authors: Egor Danilov, Aleksandra Ćiprijanović and Brian Nord
- Abstract summary: We introduce a new stochastic model, the Convolved Damped Random Walk (CDRW), which adds smoothness to a DRW and enables it to describe quasar spectra completely.
We also introduce a new method of inference of Gaussian process parameters, which we call $\textit{Neural Inference}$.
The combination of both the CDRW model and Neural Inference significantly outperforms the baseline DRW and MLE.
- Score: 72.79083473275742
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The study of quasar light curves poses two problems: inference of the power
spectrum and interpolation of an irregularly sampled time series. A baseline
approach to these tasks is to interpolate a time series with a Damped Random
Walk (DRW) model, in which the spectrum is inferred using Maximum Likelihood
Estimation (MLE). However, the DRW model does not describe the smoothness of
the time series, and MLE faces many problems in terms of optimization and
numerical precision. In this work, we introduce a new stochastic model that we
call $\textit{Convolved Damped Random Walk}$ (CDRW). This model introduces a
concept of smoothness to a DRW, which enables it to describe quasar spectra
completely. We also introduce a new method of inference of Gaussian process
parameters, which we call $\textit{Neural Inference}$. This method uses the
powers of state-of-the-art neural networks to improve the conventional MLE
inference technique. In our experiments, the Neural Inference method results in
significant improvement over the baseline MLE (RMSE: $0.318 \rightarrow 0.205$,
$0.464 \rightarrow 0.444$). Moreover, the combination of both the CDRW model
and Neural Inference significantly outperforms the baseline DRW and MLE in
interpolating a typical quasar light curve ($\chi^2$: $0.333 \rightarrow
0.998$, $2.695 \rightarrow 0.981$). The code is published on GitHub.
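To make the pipeline the abstract describes more concrete, here is a minimal sketch of the baseline: a Gaussian process with a DRW (Ornstein-Uhlenbeck) kernel, fit by MLE and used to interpolate an irregularly sampled light curve. The `smoothed_drw_kernel` shown alongside it is only an illustrative stand-in for the CDRW idea of adding smoothness to a DRW; that kernel, and all variable names and starting values here, are assumptions for illustration rather than the authors' implementation.

```python
# Minimal sketch (not the authors' code) of the baseline the abstract describes:
# fit a Damped Random Walk (DRW) Gaussian-process kernel to an irregularly
# sampled light curve by maximum likelihood, then interpolate.
import numpy as np
from scipy.optimize import minimize


def drw_kernel(t1, t2, sigma, tau):
    """DRW (Ornstein-Uhlenbeck) covariance: sigma^2 * exp(-|t1 - t2| / tau)."""
    return sigma**2 * np.exp(-np.abs(t1[:, None] - t2[None, :]) / tau)


def smoothed_drw_kernel(t1, t2, sigma, tau, ell):
    """Illustrative smoothed DRW: the DRW covariance damped by a squared-
    exponential factor with smoothness scale ell.  An assumption standing in
    for the paper's CDRW kernel, not its actual definition."""
    d = np.abs(t1[:, None] - t2[None, :])
    return sigma**2 * np.exp(-d / tau) * np.exp(-0.5 * (d / ell) ** 2)


def neg_log_marginal_likelihood(log_params, t, y, yerr):
    """Negative GP log marginal likelihood (up to a constant), log-space params."""
    sigma, tau = np.exp(log_params)
    K = drw_kernel(t, t, sigma, tau) + np.diag(yerr**2)
    L = np.linalg.cholesky(K + 1e-10 * np.eye(len(t)))
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return 0.5 * y @ alpha + np.sum(np.log(np.diag(L)))


def fit_and_interpolate(t, y, yerr, t_new):
    """MLE of (sigma, tau), then the GP posterior mean at the new epochs."""
    res = minimize(neg_log_marginal_likelihood, x0=np.log([0.2, 100.0]),
                   args=(t, y, yerr), method="Nelder-Mead")
    sigma, tau = np.exp(res.x)
    K = drw_kernel(t, t, sigma, tau) + np.diag(yerr**2)
    K_star = drw_kernel(t_new, t, sigma, tau)
    return K_star @ np.linalg.solve(K, y), (sigma, tau)


# Toy usage with a fake, irregularly sampled light curve (days / magnitudes).
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 1000.0, 60))
y = np.sin(t / 200.0) + 0.1 * rng.normal(size=t.size)
yerr = np.full_like(t, 0.1)
y_interp, (sigma_hat, tau_hat) = fit_and_interpolate(t, y, yerr,
                                                     np.linspace(0.0, 1000.0, 500))
print(f"MLE estimates: sigma = {sigma_hat:.3f}, tau = {tau_hat:.1f} days")
```

Swapping `smoothed_drw_kernel` in for `drw_kernel` (with one extra log-scale parameter for `ell`) would give the corresponding smooth-kernel baseline; the paper's actual CDRW definition should be taken from the published code.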
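The abstract's $\textit{Neural Inference}$ improves on per-object MLE with a neural network; a hedged sketch of that amortised-inference idea follows. The architecture, input encoding, and training loop are illustrative assumptions, not the paper's setup: a small regressor is trained on light curves simulated with known parameters and then predicts $(\log\sigma, \log\tau)$ for new curves.

```python
# Hedged sketch of the amortised "Neural Inference" idea: a network trained on
# simulated light curves with known parameters learns to map a light curve
# directly to the Gaussian-process parameters, avoiding per-object MLE.
# Architecture, input encoding, and training setup are illustrative assumptions.
import torch
import torch.nn as nn


class ParamRegressor(nn.Module):
    """Maps a fixed-length (times, fluxes) encoding to (log sigma, log tau)."""

    def __init__(self, n_points: int = 200):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * n_points, 256), nn.ReLU(),
            nn.Linear(256, 128), nn.ReLU(),
            nn.Linear(128, 2),  # outputs: log sigma, log tau
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


def train_step(model, optimiser, curves, true_log_params):
    """One supervised step on a batch of simulated curves and their true params."""
    optimiser.zero_grad()
    loss = nn.functional.mse_loss(model(curves), true_log_params)
    loss.backward()
    optimiser.step()
    return loss.item()
```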
Related papers
- Learning with Norm Constrained, Over-parameterized, Two-layer Neural Networks [54.177130905659155]
Recent studies show that a reproducing kernel Hilbert space (RKHS) is not a suitable space to model functions by neural networks.
In this paper, we study a suitable function space for over-parameterized two-layer neural networks with bounded norms.
arXiv Detail & Related papers (2024-04-29T15:04:07Z)
- Kernel-, mean- and noise-marginalised Gaussian processes for exoplanet transits and $H_0$ inference [0.0]
Kernel recovery and mean function inference were explored on synthetic data from exoplanet transit light curve simulations.
The method was extended to marginalisation over mean functions and noise models.
The kernel posterior of the cosmic chronometers dataset prefers a non-stationary linear kernel.
arXiv Detail & Related papers (2023-11-07T17:31:01Z)
- Towards Faster Non-Asymptotic Convergence for Diffusion-Based Generative Models [49.81937966106691]
We develop a suite of non-asymptotic theory towards understanding the data generation process of diffusion models.
In contrast to prior works, our theory is developed based on an elementary yet versatile non-asymptotic approach.
arXiv Detail & Related papers (2023-06-15T16:30:08Z)
- Minimax Optimal Quantization of Linear Models: Information-Theoretic Limits and Efficient Algorithms [59.724977092582535]
We consider the problem of quantizing a linear model learned from measurements.
We derive an information-theoretic lower bound for the minimax risk under this setting.
We show that our method and upper-bounds can be extended for two-layer ReLU neural networks.
arXiv Detail & Related papers (2022-02-23T02:39:04Z)
- Inverting brain grey matter models with likelihood-free inference: a tool for trustable cytoarchitecture measurements [62.997667081978825]
Characterisation of the brain grey matter cytoarchitecture with quantitative sensitivity to soma density and volume remains an unsolved challenge in dMRI.
We propose a new forward model, specifically a new system of equations, requiring a few relatively sparse b-shells.
We then apply modern tools from Bayesian analysis known as likelihood-free inference (LFI) to invert our proposed model.
arXiv Detail & Related papers (2021-11-15T09:08:27Z)
- Analysis of One-Hidden-Layer Neural Networks via the Resolvent Method [0.0]
Motivated by random neural networks, we consider the random matrix $M = YY^\ast$ with $Y = f(WX)$.
We prove that the Stieltjes transform of the limiting spectral distribution satisfies a quartic self-consistent equation up to some error terms.
In addition, we extend the previous results to the case of additive bias $Y=f(WX+B)$ with $B$ being an independent rank-one Gaussian random matrix.
arXiv Detail & Related papers (2021-05-11T15:17:39Z)
- Estimating Stochastic Linear Combination of Non-linear Regressions Efficiently and Scalably [23.372021234032363]
We show that when the sub-sample sizes are large, too much estimation accuracy is sacrificed.
To the best of our knowledge, this is the first work that provides such guarantees for the stochastic linear combination of non-linear regressions model.
arXiv Detail & Related papers (2020-10-19T07:15:38Z)
- Optimal Robust Linear Regression in Nearly Linear Time [97.11565882347772]
We study the problem of high-dimensional robust linear regression where a learner is given access to $n$ samples from the generative model $Y = \langle X, w^* \rangle + \epsilon$.
We propose estimators for this problem under two settings: (i) $X$ is L4-L2 hypercontractive, $\mathbb{E}[XX^\top]$ has bounded condition number and $\epsilon$ has bounded variance and (ii) $X$ is sub-Gaussian with identity second moment and $\epsilon$ is
arXiv Detail & Related papers (2020-07-16T06:44:44Z)
- Tight Nonparametric Convergence Rates for Stochastic Gradient Descent under the Noiseless Linear Model [0.0]
We analyze the convergence of single-pass, fixed step-size gradient descent on the least-square risk under this model.
As a special case, we analyze an online algorithm for estimating a real function on the unit interval from the noiseless observation of its value at randomly sampled points.
arXiv Detail & Related papers (2020-06-15T08:25:50Z)
- Gravitational-wave parameter estimation with autoregressive neural network flows [0.0]
We introduce the use of autoregressive normalizing flows for rapid likelihood-free inference of binary black hole system parameters from gravitational-wave data with deep neural networks.
A normalizing flow is an invertible mapping on a sample space that can be used to induce a transformation from a simple probability distribution to a more complex one.
We build a more powerful latent variable model by incorporating autoregressive flows within the variational autoencoder framework.
arXiv Detail & Related papers (2020-02-18T15:44:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.