Fast, Distribution-free Predictive Inference for Neural Networks with
Coverage Guarantees
- URL: http://arxiv.org/abs/2306.06582v1
- Date: Sun, 11 Jun 2023 04:03:58 GMT
- Title: Fast, Distribution-free Predictive Inference for Neural Networks with
Coverage Guarantees
- Authors: Yue Gao, Garvesh Raskutti, Rebecca Willett
- Abstract summary: This paper introduces a novel, computationally-efficient algorithm for predictive inference (PI).
It requires no distributional assumptions on the data and can be computed faster than existing bootstrap-type methods for neural networks.
- Score: 25.798057062452443
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper introduces a novel, computationally-efficient algorithm for
predictive inference (PI) that requires no distributional assumptions on the
data and can be computed faster than existing bootstrap-type methods for neural
networks. Specifically, if there are $n$ training samples, bootstrap methods
require training a model on each of the $n$ subsamples of size $n-1$; for large
models like neural networks, this process can be computationally prohibitive.
In contrast, our proposed method trains one neural network on the full dataset
with $(\epsilon, \delta)$-differential privacy (DP) and then approximates each
leave-one-out model efficiently using a linear approximation around the
differentially-private neural network estimate. With exchangeable data, we
prove that our approach has a rigorous coverage guarantee that depends on the
preset privacy parameters and the stability of the neural network, regardless
of the data distribution. Simulations and experiments on real data demonstrate
that our method satisfies the coverage guarantees with substantially reduced
computation compared to bootstrap methods.
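To make the computational recipe concrete, here is a minimal NumPy sketch of the linearized leave-one-out construction, with ridge regression standing in for the network so the influence step has a closed form. The $(\epsilon, \delta)$-DP training step and the paper's exact interval construction are omitted, so this illustrates the general recipe rather than the authors' algorithm.

```python
# Sketch only: ridge regression stands in for the DP-trained neural network,
# and a first-order influence step stands in for the paper's linear
# approximation around the private estimate.
import numpy as np

def linearized_loo_intervals(X, y, X_test, alpha=0.1, lam=1e-2):
    n, d = X.shape
    # One "full-data" fit (in the paper: a single (eps, delta)-DP network).
    H = X.T @ X + lam * np.eye(d)        # Hessian of the ridge objective
    H_inv = np.linalg.inv(H)
    theta = H_inv @ (X.T @ y)

    # Approximate each leave-one-out model with one Newton/influence step
    # around the full-data estimate instead of retraining n times.
    resid = np.empty(n)
    for i in range(n):
        grad_i = X[i] * (X[i] @ theta - y[i])    # gradient of sample i's loss
        theta_loo = theta + H_inv @ grad_i       # linearized LOO estimate
        resid[i] = abs(y[i] - X[i] @ theta_loo)  # leave-one-out residual

    # Jackknife-style interval from the (1 - alpha) quantile of LOO residuals.
    k = int(np.ceil((1 - alpha) * (n + 1))) - 1
    q = np.sort(resid)[min(k, n - 1)]
    preds = X_test @ theta
    return preds - q, preds + q

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = X @ rng.normal(size=5) + 0.1 * rng.normal(size=200)
lo, hi = linearized_loo_intervals(X, y, X[:5])
```

The point of the construction is cost: one model fit plus $n$ cheap one-step corrections replaces $n$ retrainings. For a neural network, the ridge Hessian above would be replaced by (an approximation of) the loss Hessian at the DP-trained weights.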
Related papers
- Neural-g: A Deep Learning Framework for Mixing Density Estimation [16.464806944964003]
Mixing (or prior) density estimation is an important problem in machine learning and statistics.
We propose neural-$g$, a new neural network-based estimator for $g$-modeling.
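As a toy illustration of $g$-modeling, the sketch below fits a softmax-parameterized mixing density on a fixed grid by gradient ascent on the marginal log-likelihood; the grid, the Gaussian noise model, and the optimizer are assumptions for illustration, not neural-$g$'s published architecture.

```python
# Sketch only: softmax-parameterized mixing weights on a fixed grid; a full
# neural network would produce the logits instead of optimizing them directly.
import numpy as np

def fit_mixing_density(z, grid, sigma=1.0, steps=500, lr=0.5):
    # Likelihood matrix: L[i, k] = p(z_i | theta_k) under Gaussian noise.
    L = np.exp(-0.5 * ((z[:, None] - grid[None, :]) / sigma) ** 2)
    logits = np.zeros(len(grid))                   # unconstrained weights
    for _ in range(steps):
        g = np.exp(logits - logits.max())
        g /= g.sum()                               # softmax -> mixing weights
        marg = L @ g                               # marginal likelihood per z_i
        grad_g = (L / marg[:, None]).mean(axis=0)  # d/dg of mean log-likelihood
        logits += lr * g * (grad_g - g @ grad_g)   # chain rule through softmax
    g = np.exp(logits - logits.max())
    return g / g.sum()

# Example: fit_mixing_density(np.random.default_rng(0).normal(size=500),
#                             np.linspace(-3, 3, 41))
```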
arXiv Detail & Related papers (2024-06-10T03:00:28Z)
- Generalized Differentiable RANSAC [95.95627475224231]
$\nabla$-RANSAC is a differentiable RANSAC that allows learning the entire randomized robust estimation pipeline.
$\nabla$-RANSAC is superior to the state-of-the-art in terms of accuracy while running at a similar speed to its less accurate alternatives.
arXiv Detail & Related papers (2022-12-26T15:13:13Z)
- An unfolding method based on conditional Invertible Neural Networks (cINN) using iterative training [0.0]
Generative networks like invertible neural networks (INN) enable a probabilistic unfolding.
We introduce the iterative conditional INN (IcINN) for unfolding that adjusts for deviations between simulated training samples and data.
arXiv Detail & Related papers (2022-12-16T19:00:05Z)
- Efficient Dataset Distillation Using Random Feature Approximation [109.07737733329019]
We propose a novel algorithm that uses a random feature approximation (RFA) of the Neural Network Gaussian Process (NNGP) kernel.
Our algorithm provides at least a 100-fold speedup over KIP and can run on a single GPU.
Our new method, termed an RFA Distillation (RFAD), performs competitively with KIP and other dataset condensation algorithms in accuracy over a range of large-scale datasets.
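As a sketch of the underlying idea, a Monte-Carlo random-feature map for a one-hidden-layer ReLU network approximates the NNGP kernel as an average over random first-layer weights; the feature count and scaling below are illustrative assumptions, and RFAD's actual estimator and its use inside distillation involve more structure.

```python
# Sketch only: Monte-Carlo random features for the one-hidden-layer ReLU
# NNGP kernel; feature count and scaling are illustrative assumptions.
import numpy as np

def rfa_nngp_kernel(X1, X2, n_features=4096, seed=0):
    d = X1.shape[1]
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(d, n_features)) / np.sqrt(d)  # random first layer
    phi1 = np.maximum(X1 @ W, 0.0)                     # ReLU random features
    phi2 = np.maximum(X2 @ W, 0.0)
    # Averaging over random features approximates E_w[relu(w.x) relu(w.x')];
    # the 2 / n_features factor follows He-style initialization.
    return (phi1 @ phi2.T) * (2.0 / n_features)
```

Because the feature map is explicit, kernel computations reduce to linear algebra in the feature space, which is where speedups of this kind over exact kernel evaluation come from.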
arXiv Detail & Related papers (2022-10-21T15:56:13Z)
- NUQ: Nonparametric Uncertainty Quantification for Deterministic Neural Networks [151.03112356092575]
We present a principled way to measure the uncertainty of a classifier's predictions, based on the Nadaraya-Watson nonparametric estimate of the conditional label distribution.
We demonstrate the strong performance of the method in uncertainty estimation tasks on a variety of real-world image datasets.
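The core estimator is compact: a kernel-weighted vote over training labels. Below is a generic Nadaraya-Watson estimate with an RBF kernel over feature embeddings, standing in for NUQ's specific kernel and bandwidth choices.

```python
# Sketch only: generic Nadaraya-Watson estimate of p(y | x) with an RBF
# kernel over embeddings; kernel and bandwidth choices are assumptions.
import numpy as np

def nw_label_distribution(query, embeddings, labels, n_classes, bandwidth=1.0):
    # Kernel weight between the query embedding and each training embedding.
    d2 = ((embeddings - query) ** 2).sum(axis=1)
    w = np.exp(-0.5 * d2 / bandwidth ** 2)
    # Kernel-weighted vote per class, normalized into a distribution.
    p = np.bincount(labels, weights=w, minlength=n_classes)
    return p / p.sum()
```

The entropy of the returned distribution (or one minus its maximum) can then serve as the uncertainty score.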
arXiv Detail & Related papers (2022-02-07T12:30:45Z)
- Neural Capacitance: A New Perspective of Neural Network Selection via Edge Dynamics [85.31710759801705]
Current practice incurs expensive computation because performance prediction requires training each candidate model.
We propose a novel framework for neural network selection by analyzing the governing dynamics over synaptic connections (edges) during training.
Our framework is built on the fact that back-propagation during neural network training is equivalent to the dynamical evolution of synaptic connections.
arXiv Detail & Related papers (2022-01-11T20:53:15Z)
- Improving Uncertainty Calibration via Prior Augmented Data [56.88185136509654]
Neural networks have proven successful at learning from complex data distributions by acting as universal function approximators.
They are often overconfident in their predictions, which leads to inaccurate and miscalibrated probabilistic predictions.
We propose a solution by seeking out regions of feature space where the model is unjustifiably overconfident, and conditionally raising the entropy of those predictions towards that of the prior distribution of the labels.
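A toy version of that mechanism is sketched below: alongside the usual cross-entropy, a KL term on augmented inputs pulls the predictive distribution toward the label prior, raising its entropy where confidence is unjustified. How the augmented points are found is left outside the sketch, and the loss weighting is an assumption.

```python
# Sketch only: cross-entropy on real data plus a KL-to-prior penalty on
# augmented inputs; the weighting lam and the augmentation are assumptions.
import numpy as np

def log_softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    return z - np.log(np.exp(z).sum(axis=1, keepdims=True))

def prior_regularized_loss(logits, labels, aug_logits, prior, lam=1.0):
    # Standard cross-entropy on the real training batch.
    ce = -log_softmax(logits)[np.arange(len(labels)), labels].mean()
    # KL(prior || model) on augmented points pulls those predictions toward
    # the label prior, i.e. raises their entropy.
    kl = (prior * (np.log(prior) - log_softmax(aug_logits))).sum(axis=1).mean()
    return ce + lam * kl
```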
arXiv Detail & Related papers (2021-02-22T07:02:37Z)
- Robust and integrative Bayesian neural networks for likelihood-free parameter inference [0.0]
State-of-the-art neural network-based methods for learning summary statistics have delivered promising results for simulation-based likelihood-free parameter inference.
This work proposes a robust integrated approach that learns summary statistics using Bayesian neural networks, and directly estimates the posterior density using categorical distributions.
arXiv Detail & Related papers (2021-02-12T13:45:23Z)
- POSEIDON: Privacy-Preserving Federated Neural Network Learning [8.103262600715864]
POSEIDON is the first of its kind in the regime of privacy-preserving neural network training.
It employs multiparty lattice-based cryptography to preserve the confidentiality of the training data, the model, and the evaluation data.
It trains a 3-layer neural network on the MNIST dataset with 784 features and 60K samples distributed among 10 parties in less than 2 hours.
arXiv Detail & Related papers (2020-09-01T11:06:31Z)
- Measurement error models: from nonparametric methods to deep neural networks [3.1798318618973362]
We propose an efficient neural network design for estimating measurement error models.
We use a fully connected feed-forward neural network to approximate the regression function $f(x)$.
We conduct an extensive numerical study to compare the neural network approach with classical nonparametric methods.
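For concreteness, a minimal fully connected feed-forward regressor of the kind described is sketched below; the width, activation, and training loop are illustrative choices, not the paper's design.

```python
# Sketch only: a small fully connected feed-forward regressor; width,
# activation, and training loop are illustrative, not the paper's design.
import numpy as np

def fit_mlp_regressor(X, y, hidden=64, steps=2000, lr=1e-2, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W1 = rng.normal(size=(d, hidden)) / np.sqrt(d)
    b1 = np.zeros(hidden)
    w2 = rng.normal(size=hidden) / np.sqrt(hidden)
    b2 = 0.0
    for _ in range(steps):                     # full-batch gradient descent
        h = np.tanh(X @ W1 + b1)               # hidden layer
        err = h @ w2 + b2 - y                  # residuals under squared loss
        gh = np.outer(err, w2) * (1 - h ** 2)  # backprop through tanh
        W1 -= lr * (X.T @ gh) / n
        b1 -= lr * gh.mean(axis=0)
        w2 -= lr * (h.T @ err) / n
        b2 -= lr * err.mean()
    return lambda Xq: np.tanh(Xq @ W1 + b1) @ w2 + b2  # estimate of f(x)
```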
arXiv Detail & Related papers (2020-07-15T06:05:37Z)
- Communication-Efficient Distributed Stochastic AUC Maximization with Deep Neural Networks [50.42141893913188]
We study distributed algorithms for large-scale AUC maximization with a deep neural network as the predictive model.
Our method requires far fewer communication rounds than a naive parallel implementation while retaining the same order of theoretical guarantees.
Experiments on several datasets demonstrate the effectiveness of the method and corroborate the theory.
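For reference, the pairwise surrogate objective that AUC-maximization methods of this kind optimize is sketched below; the paper's actual contribution, the communication-efficient distributed optimization, is not shown.

```python
# Sketch only: the pairwise squared-hinge surrogate for AUC that such
# methods optimize; the distributed communication scheme is not shown.
import numpy as np

def auc_surrogate_loss(scores, labels, margin=1.0):
    pos = scores[labels == 1]                  # scores of positive examples
    neg = scores[labels == 0]                  # scores of negative examples
    # Penalize every positive/negative pair whose score gap is under margin.
    gaps = pos[:, None] - neg[None, :]
    return np.mean(np.maximum(margin - gaps, 0.0) ** 2)
```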
arXiv Detail & Related papers (2020-05-05T18:08:23Z)