Global Convolutional Neural Processes
- URL: http://arxiv.org/abs/2109.00691v1
- Date: Thu, 2 Sep 2021 03:32:50 GMT
- Title: Global Convolutional Neural Processes
- Authors: Xuesong Wang, Lina Yao, Xianzhi Wang, Hye-young Paik, and Sen Wang
- Abstract summary: We build a new member of the family, the GloBal Convolutional Neural Process (GBCoNP), which achieves the SOTA log-likelihood among latent NPFs.
It designs a global uncertainty representation p(z), an aggregation over a discretized input space.
The learnt prior is analyzed in a variety of scenarios, including 1D, 2D, and a newly proposed spatio-temporal COVID dataset.
- Score: 52.85558458799716
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: The ability to deal with uncertainty in machine learning models has
become as crucial as, if not more crucial than, their predictive ability
itself. During the pandemic, for instance, governmental policies and personal
decisions were constantly made under uncertainty. Targeting this, Neural
Process Families (NPFs) have recently shone a light on prediction with
uncertainty by bridging Gaussian processes and neural networks. The latent
neural process, a member of the NPF, is believed to be capable of modelling
the uncertainty at particular points (local uncertainty) as well as over the
general function priors (global uncertainty). Nonetheless, some critical
questions remain unresolved, such as a formal definition of global
uncertainty, the causality behind it, and its manipulation in generative
models. To address these, we build a new member, the GloBal Convolutional
Neural Process (GBCoNP), which achieves the SOTA log-likelihood among latent
NPFs. It designs a global uncertainty representation p(z), an aggregation
over a discretized input space. The causal effect between the degree of
global uncertainty and intra-task diversity is discussed. The learnt prior is
analyzed in a variety of scenarios, including 1D, 2D, and a newly proposed
spatio-temporal COVID dataset. Our manipulation of the global uncertainty not
only generates the desired samples to tackle few-shot learning, but also
enables probability evaluation of the functional priors.
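As a rough illustration of what an aggregated global latent can look like, the sketch below discretizes a 1D input space, smooths the context set onto the grid with an RBF kernel (in the spirit of a SetConv layer), and mean-pools the grid representation into the parameters of a Gaussian p(z). The function names, grid range, and pooling/head choices are illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

def rbf(x, y, ls=0.2):
    # RBF kernel weights between 1D locations x and y
    return np.exp(-0.5 * ((x[:, None] - y[None, :]) / ls) ** 2)

def global_latent(xc, yc, grid_size=64):
    """Sketch: project context (xc, yc) onto a discretized input space,
    then pool the grid into one global Gaussian latent p(z) = N(mu, sigma^2)."""
    grid = np.linspace(0.0, 1.0, grid_size)        # discretized input space
    w = rbf(grid, xc)                              # (grid_size, n_context)
    density = w.sum(axis=1)                        # context mass near each grid point
    signal = w @ yc / np.maximum(density, 1e-8)    # kernel-smoothed values on the grid
    h = np.stack([density, signal], axis=1)        # functional representation on the grid
    pooled = h.mean(axis=0)                        # global aggregation over the whole grid
    mu, log_sigma = pooled[0], pooled[1]           # stand-in two-unit "encoder head"
    sigma = np.logaddexp(0.0, log_sigma)           # softplus keeps sigma positive
    return mu, sigma

mu, sigma = global_latent(np.array([0.1, 0.5, 0.9]), np.array([0.0, 1.0, 0.0]))
```

Because the pooling runs over the entire grid rather than over context points alone, the latent summarizes the whole function, which is what distinguishes a global from a local uncertainty representation.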
Related papers
- Decomposing Uncertainty for Large Language Models through Input Clarification Ensembling [69.83976050879318]
In large language models (LLMs), identifying sources of uncertainty is an important step toward improving reliability, trustworthiness, and interpretability.
In this paper, we introduce an uncertainty decomposition framework for LLMs, called input clarification ensembling.
Our approach generates a set of clarifications for the input, feeds them into an LLM, and ensembles the corresponding predictions.
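The ensembling step can be caricatured with a standard entropy decomposition: total predictive entropy splits into the average entropy of the per-clarification predictions (aleatoric) and the disagreement between them (epistemic). This generic decomposition is an illustrative stand-in, not the paper's exact estimator.

```python
import numpy as np

def decompose_uncertainty(pred_dists):
    """pred_dists: (n_clarifications, n_classes) predictive distributions,
    one per clarified version of the input."""
    mean_p = pred_dists.mean(axis=0)
    total = -(mean_p * np.log(mean_p + 1e-12)).sum()               # total uncertainty
    aleatoric = -(pred_dists * np.log(pred_dists + 1e-12)).sum(axis=1).mean()
    epistemic = total - aleatoric                                  # disagreement across clarifications
    return total, aleatoric, epistemic

# Toy ensemble: three clarifications that disagree about a binary label
preds = np.array([[0.9, 0.1], [0.2, 0.8], [0.5, 0.5]])
t, a, e = decompose_uncertainty(preds)
```

By Jensen's inequality the total entropy is at least the average per-member entropy, so the epistemic term is always non-negative.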
arXiv Detail & Related papers (2023-11-15T05:58:35Z)
- Uncertainty in Natural Language Processing: Sources, Quantification, and Applications [56.130945359053776]
We provide a comprehensive review of uncertainty-relevant works in the NLP field.
We first categorize the sources of uncertainty in natural language into three types, including input, system, and output.
We discuss the challenges of uncertainty estimation in NLP and discuss potential future directions.
arXiv Detail & Related papers (2023-06-05T06:46:53Z)
- Uncertainty Propagation in Node Classification [9.03984964980373]
We focus on measuring uncertainty of graph neural networks (GNNs) for the task of node classification.
We propose a Bayesian uncertainty propagation (BUP) method, which embeds GNNs in a Bayesian modeling framework.
We present an uncertainty-oriented loss for node classification that allows GNNs to explicitly integrate predictive uncertainty into the learning procedure.
arXiv Detail & Related papers (2023-04-03T12:18:23Z)
- A Benchmark on Uncertainty Quantification for Deep Learning Prognostics [0.0]
We assess some of the latest developments in the field of uncertainty quantification for prognostics deep learning.
This includes the state-of-the-art variational inference algorithms for Bayesian neural networks (BNN) as well as popular alternatives such as Monte Carlo Dropout (MCD), deep ensembles (DE), and heteroscedastic neural networks (HNN).
The performance of the methods is evaluated on a subset of the large NASA NCMAPSS dataset for aircraft engines.
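Of the baselines listed, Monte Carlo Dropout is the simplest to sketch: dropout is kept active at prediction time, and the spread across repeated stochastic forward passes serves as the uncertainty estimate. The tiny randomly initialized network below is purely illustrative, not the benchmark's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(1, 32)), np.zeros(32)
W2, b2 = rng.normal(size=(32, 1)), np.zeros(1)

def forward(x, p_drop=0.5):
    # One stochastic pass: dropout stays ON at prediction time
    h = np.maximum(x @ W1 + b1, 0.0)
    mask = rng.random(h.shape) > p_drop
    h = h * mask / (1.0 - p_drop)          # inverted dropout scaling
    return (h @ W2 + b2).ravel()

def mc_dropout_predict(x, T=100):
    # T stochastic passes; their mean/std give prediction and uncertainty
    samples = np.stack([forward(x) for _ in range(T)])
    return samples.mean(axis=0), samples.std(axis=0)

mean, std = mc_dropout_predict(np.array([[0.3]]))
```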
arXiv Detail & Related papers (2023-02-09T16:12:47Z)
- The Unreasonable Effectiveness of Deep Evidential Regression [72.30888739450343]
A new approach with uncertainty-aware regression-based neural networks (NNs) shows promise over traditional deterministic methods and typical Bayesian NNs.
We detail the theoretical shortcomings and analyze performance on synthetic and real-world datasets, showing that Deep Evidential Regression yields an approximate rather than an exact uncertainty quantification.
arXiv Detail & Related papers (2022-05-20T10:10:32Z)
- A General Framework for quantifying Aleatoric and Epistemic uncertainty in Graph Neural Networks [0.29494468099506893]
Graph Neural Networks (GNN) provide a powerful framework that elegantly integrates Graph theory with Machine learning.
We consider the problem of quantifying the uncertainty in predictions of GNN stemming from modeling errors and measurement uncertainty.
We propose a unified approach to treat both sources of uncertainty in a Bayesian framework.
arXiv Detail & Related papers (2022-05-20T05:25:40Z)
- NUQ: Nonparametric Uncertainty Quantification for Deterministic Neural Networks [151.03112356092575]
We show the principled way to measure the uncertainty of predictions for a classifier based on Nadaraya-Watson's nonparametric estimate of the conditional label distribution.
We demonstrate the strong performance of the method in uncertainty estimation tasks on a variety of real-world image datasets.
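A minimal version of the Nadaraya-Watson idea: the conditional label distribution at a query point is estimated as kernel-weighted class frequencies over the training points, and its entropy serves as the uncertainty score. The toy data and bandwidth below are assumptions for illustration, not the paper's estimator in full.

```python
import numpy as np

def nw_class_probs(x_query, X, Y, n_classes, bandwidth=0.5):
    """Nadaraya-Watson estimate of p(y | x): kernel-weighted class frequencies."""
    d2 = ((X - x_query) ** 2).sum(axis=1)
    w = np.exp(-0.5 * d2 / bandwidth ** 2)          # Gaussian kernel weights
    probs = np.array([w[Y == c].sum() for c in range(n_classes)])
    return probs / probs.sum()

# Toy binary dataset: class 0 near x=0, class 1 near x=1
X = np.array([[0.0], [0.1], [1.0], [1.1]])
Y = np.array([0, 0, 1, 1])
p = nw_class_probs(np.array([0.05]), X, Y, 2)
uncertainty = -(p * np.log(p + 1e-12)).sum()        # predictive entropy as the score
```

Queries far from all training points receive near-uniform estimates and hence high entropy, which is what makes the construction useful for out-of-distribution detection.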
arXiv Detail & Related papers (2022-02-07T12:30:45Z)
- Dense Uncertainty Estimation via an Ensemble-based Conditional Latent Variable Model [68.34559610536614]
We argue that the aleatoric uncertainty is an inherent attribute of the data and can only be correctly estimated with an unbiased oracle model.
We propose a new sampling and selection strategy at train time to approximate the oracle model for aleatoric uncertainty estimation.
Our results show that our solution achieves both accurate deterministic results and reliable uncertainty estimation.
arXiv Detail & Related papers (2021-11-22T08:54:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.