Efficient Bayesian Learning Curve Extrapolation using Prior-Data Fitted Networks
- URL: http://arxiv.org/abs/2310.20447v1
- Date: Tue, 31 Oct 2023 13:30:30 GMT
- Title: Efficient Bayesian Learning Curve Extrapolation using Prior-Data Fitted Networks
- Authors: Steven Adriaensen, Herilalaina Rakotoarison, Samuel Müller, Frank Hutter
- Abstract summary: We describe the first application of prior-data fitted neural networks (PFNs) in this context.
We demonstrate that LC-PFN can approximate the posterior predictive distribution more accurately than MCMC.
We also show that the same LC-PFN achieves competitive performance extrapolating a total of 20 000 real learning curves.
- Score: 44.294078238444996
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning curve extrapolation aims to predict model performance in later
epochs of training, based on the performance in earlier epochs. In this work,
we argue that, while the inherent uncertainty in the extrapolation of learning
curves warrants a Bayesian approach, existing methods are (i) overly
restrictive, and/or (ii) computationally expensive. We describe the first
application of prior-data fitted neural networks (PFNs) in this context. A PFN
is a transformer, pre-trained on data generated from a prior, to perform
approximate Bayesian inference in a single forward pass. We propose LC-PFN, a
PFN trained to extrapolate 10 million artificial right-censored learning curves
generated from a parametric prior proposed in prior art using MCMC. We
demonstrate that LC-PFN can approximate the posterior predictive distribution
more accurately than MCMC, while being over 10 000 times faster. We also show
that the same LC-PFN achieves competitive performance extrapolating a total of
20 000 real learning curves from four learning curve benchmarks (LCBench,
NAS-Bench-201, Taskset, and PD1) that stem from training a wide range of model
architectures (MLPs, CNNs, RNNs, and Transformers) on 53 different datasets
with varying input modalities (tabular, image, text, and protein data).
Finally, we investigate its potential in the context of model selection and
find that a simple LC-PFN based predictive early stopping criterion obtains 2 -
6x speed-ups on 45 of these datasets, at virtually no overhead.
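A rough, hypothetical sketch of the data-generation step described above (not the authors' code): it draws learning curves from a simple pow3-style parametric family, used here only as a stand-in for the paper's full prior, and right-censors each one at a random cutoff. A PFN would be pre-trained on millions of such prefix/suffix pairs to predict the censored remainder of a curve in a single forward pass.
```python
import numpy as np

rng = np.random.default_rng(0)

def sample_curve(n_epochs: int = 100) -> np.ndarray:
    """Draw one curve y(t) = c - a * t**(-alpha) + noise (a pow3-style family)."""
    c = rng.uniform(0.5, 1.0)          # asymptotic performance
    a = rng.uniform(0.1, c)            # initial gap to the asymptote
    alpha = rng.uniform(0.3, 2.0)      # convergence rate
    sigma = rng.uniform(0.005, 0.05)   # observation noise level
    t = np.arange(1, n_epochs + 1)
    y = c - a * t ** (-alpha) + rng.normal(0.0, sigma, n_epochs)
    return np.clip(y, 0.0, 1.0)

def make_training_example(n_epochs: int = 100):
    """Right-censor a sampled curve: the observed prefix is the model input,
    the hidden suffix is the extrapolation target."""
    y = sample_curve(n_epochs)
    cutoff = int(rng.integers(5, n_epochs))
    return y[:cutoff], y[cutoff:]

# Repeating make_training_example() millions of times yields the kind of
# prior-data on which a PFN can be pre-trained. At deployment, a predictive
# early-stopping rule could terminate a run once a high quantile of the
# predicted final performance falls below the best score seen so far.
for prefix, target in (make_training_example() for _ in range(3)):
    print(f"observed {len(prefix):3d} epochs -> extrapolate {len(target):3d} more")
```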
Related papers
- Drift-Resilient TabPFN: In-Context Learning Temporal Distribution Shifts on Tabular Data [39.40116554523575]
We present Drift-Resilient TabPFN, a fresh approach based on In-Context Learning with a Prior-Data Fitted Network.
It learns to approximate Bayesian inference on synthetic datasets drawn from a prior.
It improves accuracy from 0.688 to 0.744 and ROC AUC from 0.786 to 0.832 while maintaining stronger calibration.
arXiv Detail & Related papers (2024-11-15T23:49:23Z) - Unrolled denoising networks provably learn optimal Bayesian inference [54.79172096306631]
We prove the first rigorous learning guarantees for neural networks based on unrolling approximate message passing (AMP).
For compressed sensing, we prove that when trained on data drawn from a product prior, the layers of the network converge to the same denoisers used in Bayes AMP.
arXiv Detail & Related papers (2024-09-19T17:56:16Z) - Inferring Data Preconditions from Deep Learning Models for Trustworthy
Prediction in Deployment [25.527665632625627]
It is important to reason about the trustworthiness of the model's predictions with unseen data during deployment.
Existing methods for specifying and verifying traditional software are insufficient for this task.
We propose a novel technique that uses rules derived from neural network computations to infer data preconditions.
arXiv Detail & Related papers (2024-01-26T03:47:18Z) - A Meta-Learning Approach to Predicting Performance and Data Requirements [163.4412093478316]
We propose an approach to estimate the number of samples required for a model to reach a target performance.
We find that the power law, the de facto principle to estimate model performance, leads to large error when using a small dataset.
We introduce a novel piecewise power law (PPL) that handles the two data regimes differently.
arXiv Detail & Related papers (2023-03-02T21:48:22Z) - Variational Linearized Laplace Approximation for Bayesian Deep Learning [11.22428369342346]
We propose a new method for approximating Linearized Laplace Approximation (LLA) using a variational sparse Gaussian Process (GP).
Our method is based on the dual RKHS formulation of GPs and retains, as the predictive mean, the output of the original DNN.
It allows for efficient optimization, which results in sub-linear training time in the size of the training dataset.
arXiv Detail & Related papers (2023-02-24T10:32:30Z) - An unfolding method based on conditional Invertible Neural Networks
(cINN) using iterative training [0.0]
Generative networks like invertible neural networks (INN) enable a probabilistic unfolding.
We introduce the iterative conditional INN (IcINN) for unfolding that adjusts for deviations between simulated training samples and data.
arXiv Detail & Related papers (2022-12-16T19:00:05Z) - Transformers Can Do Bayesian Inference [56.99390658880008]
We present Prior-Data Fitted Networks (PFNs).
PFNs leverage in-context learning in large-scale machine learning to approximate a large set of posteriors.
We demonstrate that PFNs can near-perfectly mimic Gaussian processes and also enable efficient Bayesian inference for intractable problems.
arXiv Detail & Related papers (2021-12-20T13:07:39Z) - Self-Supervised Pre-Training for Transformer-Based Person
Re-Identification [54.55281692768765]
Transformer-based supervised pre-training achieves great performance in person re-identification (ReID).
Due to the domain gap between ImageNet and ReID datasets, it usually needs a larger pre-training dataset to boost the performance.
This work aims to mitigate the gap between the pre-training and ReID datasets from the perspective of data and model structure.
arXiv Detail & Related papers (2021-11-23T18:59:08Z) - Efficient Nearest Neighbor Language Models [114.40866461741795]
Non-parametric neural language models (NLMs) learn predictive distributions of text utilizing an external datastore.
We show how to achieve up to a 6x speed-up in inference speed while retaining comparable performance.
arXiv Detail & Related papers (2021-09-09T12:32:28Z)