Adaptive Conditional Quantile Neural Processes
- URL: http://arxiv.org/abs/2305.18777v3
- Date: Mon, 3 Jul 2023 07:26:57 GMT
- Title: Adaptive Conditional Quantile Neural Processes
- Authors: Peiman Mohseni, Nick Duffield, Bani Mallick, Arman Hasanzadeh
- Abstract summary: Conditional Quantile Neural Processes (CQNPs) are a new member of the neural processes family.
We introduce an extension of quantile regression where the model learns to focus on estimating informative quantiles.
Experiments with real and synthetic datasets demonstrate substantial improvements in predictive performance.
- Score: 9.066817971329899
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural processes are a family of probabilistic models that inherit the
flexibility of neural networks to parameterize stochastic processes. Although
they provide well-calibrated predictions, especially in regression problems, and
adapt quickly to new tasks, the Gaussian assumption commonly used to
represent the predictive likelihood fails to capture more complicated
distributions such as multimodal ones. To overcome this limitation, we propose
Conditional Quantile Neural Processes (CQNPs), a new member of the neural
processes family, which exploits the attractive properties of quantile
regression in modeling the distributions irrespective of their form. By
introducing an extension of quantile regression where the model learns to focus
on estimating informative quantiles, we show that the sampling efficiency and
prediction accuracy can be further enhanced. Our experiments with real and
synthetic datasets demonstrate substantial improvements in predictive
performance compared to the baselines, and better modeling of heterogeneous
distributions' characteristics such as multimodality.
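As a concrete illustration of the ingredients described above, the sketch below shows the standard pinball (quantile) loss used in quantile regression, together with a decoder head whose quantile levels are themselves learnable. It is a minimal PyTorch example under assumed names (`pinball_loss`, `AdaptiveQuantileHead`, `rep_dim`, and the cumulative-softmax parameterization of the levels); it is not the authors' implementation, only an example of replacing a Gaussian likelihood head with quantile outputs.

```python
import torch

def pinball_loss(y_true, y_pred, tau):
    """Standard quantile-regression (pinball) loss.

    y_true: (batch, 1) targets; y_pred: (batch, K) predicted quantiles;
    tau: (K,) quantile levels in (0, 1).
    """
    diff = y_true - y_pred  # broadcasts over the K quantile levels
    return torch.maximum(tau * diff, (tau - 1.0) * diff).mean()

class AdaptiveQuantileHead(torch.nn.Module):
    """Illustrative decoder head: one output per learnable quantile level.

    A hypothetical stand-in for a CQNP-style decoder, not the paper's code.
    """
    def __init__(self, rep_dim, num_quantiles=9):
        super().__init__()
        # Unconstrained parameters mapped to ordered levels in (0, 1), so
        # training can concentrate levels where they are most informative.
        self.level_logits = torch.nn.Parameter(torch.zeros(num_quantiles))
        self.decoder = torch.nn.Linear(rep_dim, num_quantiles)

    def quantile_levels(self):
        # Cumulative softmax (shifted by half a step) gives strictly
        # increasing levels 0 < tau_1 < ... < tau_K < 1.
        p = torch.softmax(self.level_logits, dim=0)
        return torch.cumsum(p, dim=0) - 0.5 * p

    def forward(self, rep):
        # rep: (batch, rep_dim) representation produced by the NP encoder
        return self.decoder(rep), self.quantile_levels()
```

Usage sketch: `preds, taus = head(rep)` followed by `loss = pinball_loss(y, preds, taus)`. The objective that actually adapts the levels toward "informative quantiles" is specific to the paper and is not reproduced here; the ordered parameterization above only shows one way adaptive levels could be represented.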
Related papers
- Structured Radial Basis Function Network: Modelling Diversity for
Multiple Hypotheses Prediction [51.82628081279621]
Multi-modal regression is important in forecasting nonstationary processes or processes with a complex mixture of distributions.
A Structured Radial Basis Function Network is presented as an ensemble of multiple hypotheses predictors for regression problems.
It is proved that this structured model can efficiently interpolate the resulting tessellation and approximate the multiple-hypotheses target distribution.
arXiv Detail & Related papers (2023-09-02T01:27:53Z) - Learning minimal representations of stochastic processes with
variational autoencoders [52.99137594502433]
We introduce an unsupervised machine learning approach to determine the minimal set of parameters required to describe a process.
Our approach enables the autonomous discovery of unknown parameters describing such processes.
arXiv Detail & Related papers (2023-07-21T14:25:06Z) - Neural Spline Search for Quantile Probabilistic Modeling [35.914279831992964]
We propose a non-parametric and data-driven approach, Neural Spline Search (NSS), to represent the observed data distribution without parametric assumptions.
We demonstrate that NSS outperforms previous methods on synthetic, real-world regression and time-series forecasting tasks.
arXiv Detail & Related papers (2023-01-12T07:45:28Z) - Correcting Model Bias with Sparse Implicit Processes [0.9187159782788579]
We show that Sparse Implicit Processes (SIP) is capable of correcting model bias when the data generating mechanism differs strongly from the one implied by the model.
We use synthetic datasets to show that SIP is capable of providing predictive distributions that reflect the data better than the exact predictions of the initial, but wrongly assumed model.
arXiv Detail & Related papers (2022-07-21T18:00:01Z) - Multi-fidelity Hierarchical Neural Processes [79.0284780825048]
Multi-fidelity surrogate modeling reduces the computational cost by fusing different simulation outputs.
We propose Multi-fidelity Hierarchical Neural Processes (MF-HNP), a unified neural latent variable model for multi-fidelity surrogate modeling.
We evaluate MF-HNP on epidemiology and climate modeling tasks, achieving competitive performance in terms of accuracy and uncertainty estimation.
arXiv Detail & Related papers (2022-06-10T04:54:13Z) - Autoregressive Quantile Flows for Predictive Uncertainty Estimation [7.184701179854522]
We propose Autoregressive Quantile Flows, a flexible class of probabilistic models over high-dimensional variables.
These models are instances of autoregressive flows trained using a novel objective based on proper scoring rules.
arXiv Detail & Related papers (2021-12-09T01:11:26Z) - Improving Uncertainty Calibration via Prior Augmented Data [56.88185136509654]
Neural networks have proven successful at learning from complex data distributions by acting as universal function approximators.
However, they are often overconfident in their predictions, which leads to inaccurate and miscalibrated probabilistic predictions.
We propose a solution by seeking out regions of feature space where the model is unjustifiably overconfident, and conditionally raising the entropy of those predictions towards that of the prior distribution of the labels.
arXiv Detail & Related papers (2021-02-22T07:02:37Z) - The Neural Coding Framework for Learning Generative Models [91.0357317238509]
We propose a novel neural generative model inspired by the theory of predictive processing in the brain.
In a similar way, artificial neurons in our generative model predict what neighboring neurons will do, and adjust their parameters based on how well the predictions matched reality.
arXiv Detail & Related papers (2020-12-07T01:20:38Z) - Doubly Stochastic Variational Inference for Neural Processes with
Hierarchical Latent Variables [37.43541345780632]
We present a new variant of the Neural Process (NP) model that we call the Doubly Stochastic Variational Neural Process (DSVNP).
This model combines the global latent variable and local latent variables for prediction. We evaluate this model in several experiments, and our results demonstrate competitive prediction performance in multi-output regression and uncertainty estimation in classification.
arXiv Detail & Related papers (2020-08-21T13:32:12Z) - Multiplicative noise and heavy tails in stochastic optimization [62.993432503309485]
Stochastic optimization is central to modern machine learning, but the precise role of the stochasticity in its success remains unclear.
We show that multiplicative noise commonly arises in the discrete-time parameter dynamics due to variance.
A detailed analysis is conducted describing how key factors, including step size and data, affect this behavior, with similar results exhibited by state-of-the-art neural network models.
arXiv Detail & Related papers (2020-06-11T09:58:01Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.