Quantum-Aided Meta-Learning for Bayesian Binary Neural Networks via Born Machines
- URL: http://arxiv.org/abs/2203.17089v1
- Date: Thu, 31 Mar 2022 15:09:04 GMT
- Title: Quantum-Aided Meta-Learning for Bayesian Binary Neural Networks via Born Machines
- Authors: Ivana Nikoloska and Osvaldo Simeone
- Abstract summary: This paper studies the use of Born machines for the problem of training binary Bayesian neural networks.
A Born machine is used to model the variational distribution of the binary weights of the neural network.
The method combines gradient-based meta-learning and variational inference via Born machines, and is shown to outperform conventional joint learning strategies.
- Score: 38.467834562966594
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Near-term noisy intermediate-scale quantum circuits can efficiently implement
implicit probabilistic models in discrete spaces, supporting distributions that
are practically infeasible to sample from using classical means. One of the
possible applications of such models, also known as Born machines, is
probabilistic inference, which is at the core of Bayesian methods. This paper
studies the use of Born machines for the problem of training binary Bayesian
neural networks. In the proposed approach, a Born machine is used to model the
variational distribution of the binary weights of the neural network, and data
from multiple tasks is used to reduce training data requirements on new tasks.
The method combines gradient-based meta-learning and variational inference via
Born machines, and is shown in a prototypical regression problem to outperform
conventional joint learning strategies.
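To make the pipeline concrete, below is a minimal classical simulation of the idea: bitstrings sampled from a parameterized quantum state via the Born rule serve as the binary weights of a tiny regression network, and the circuit parameters are trained with a REINFORCE (score-function) gradient. This is an illustration under stated assumptions, not the authors' implementation: the product-RY ansatz, the 8-weight network, the squared-error likelihood, and all names are hypothetical, and the KL-to-prior term of the full variational objective is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def born_state(theta):
    # Toy "Born machine": single-qubit RY(theta_i) rotations applied to
    # |0...0>. A real Born machine would add entangling gates; a product
    # state keeps this sketch classically trivial to simulate.
    psi = np.array([1.0])
    for t in theta:
        psi = np.kron(psi, np.array([np.cos(t / 2.0), np.sin(t / 2.0)]))
    return psi  # Born rule: p(bitstring) = |psi|^2

def sample_bits(psi, n_qubits, size):
    # Draw bitstrings with probability given by the squared amplitudes.
    probs = psi ** 2
    probs /= probs.sum()
    idx = rng.choice(probs.size, size=size, p=probs)
    # unpack integer indices into bit arrays (qubit 0 = most significant bit)
    return ((idx[:, None] >> np.arange(n_qubits)[::-1]) & 1).astype(float)

def neg_log_lik(bits, x, y):
    # Tiny regression net whose 8 binary weights come from one bitstring:
    # 4 input->hidden and 4 hidden->output weights, mapped {0,1} -> {-1,+1}.
    w = 2.0 * bits - 1.0
    pred = np.tanh(np.outer(x, w[:4])) @ w[4:]
    return 0.5 * np.mean((pred - y) ** 2)

# Variational inference: minimize E_{w ~ q_theta}[-log p(D | w)] with a
# score-function gradient estimate.
n_qubits, n_samples, lr = 8, 64, 0.1
theta = np.full(n_qubits, np.pi / 2) + rng.normal(0.0, 0.1, n_qubits)
x = np.linspace(-1.0, 1.0, 16)
y = np.sin(np.pi * x)

for step in range(300):
    psi = born_state(theta)
    bits = sample_bits(psi, n_qubits, n_samples)
    losses = np.array([neg_log_lik(b, x, y) for b in bits])
    # d/dtheta_i log q(bits): for this ansatz, p(b_i = 1) = sin^2(theta_i / 2)
    score = bits / np.tan(theta / 2.0) - (1.0 - bits) * np.tan(theta / 2.0)
    grad = ((losses - losses.mean())[:, None] * score).mean(axis=0)
    theta = np.clip(theta - lr * grad, 0.05, np.pi - 0.05)  # avoid tan poles
```

A gradient-based meta-learning outer loop in the spirit of the paper would treat theta as a shared initialization, adapt it with a few such inner steps on each task, and update the initialization from the per-task losses.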
Related papers
- Neural Lineage [56.34149480207817]
We introduce a novel task, neural lineage detection, which aims to discover lineage relationships between parent and child models.
For practical convenience, we introduce a learning-free approach, which integrates an approximation of the finetuning process into the neural network representation similarity metrics.
For the pursuit of accuracy, we introduce a learning-based lineage detector comprising encoders and a transformer detector.
arXiv Detail & Related papers (2024-06-17T01:11:53Z)
- BEND: Bagging Deep Learning Training Based on Efficient Neural Network Diffusion [56.9358325168226]
We propose a Bagging deep learning training algorithm based on Efficient Neural network Diffusion (BEND).
Our approach is simple but effective: it first uses the weights and biases of multiple trained models as inputs to train an autoencoder and a latent diffusion model.
Our proposed BEND algorithm can consistently outperform the mean and median accuracies of both the original trained model and the diffused model.
arXiv Detail & Related papers (2024-03-23T08:40:38Z)
- A Metalearned Neural Circuit for Nonparametric Bayesian Inference [4.767884267554628]
Most applications of machine learning to classification assume a closed set of balanced classes.
This is at odds with the real world, where class occurrence statistics often follow a long-tailed power-law distribution.
We present a method for extracting the inductive bias from a nonparametric Bayesian model and transferring it to an artificial neural network.
arXiv Detail & Related papers (2023-11-24T16:43:17Z)
- Classified as unknown: A novel Bayesian neural network [0.0]
We develop a new efficient Bayesian learning algorithm for fully connected neural networks.
We generalize the algorithm from a single perceptron for binary classification in [H] to multi-layer perceptrons for multi-class classification.
arXiv Detail & Related papers (2023-01-31T04:27:09Z)
- An unfolding method based on conditional Invertible Neural Networks (cINN) using iterative training [0.0]
Generative networks like invertible neural networks (INNs) enable a probabilistic unfolding.
We introduce the iterative conditional INN (IcINN) for unfolding that adjusts for deviations between simulated training samples and data.
arXiv Detail & Related papers (2022-12-16T19:00:05Z)
- Fully differentiable model discovery [0.0]
We propose an approach that combines neural-network-based surrogates with Sparse Bayesian Learning.
Our work expands PINNs to various types of neural network architectures, and connects neural network-based surrogates to the rich field of Bayesian parameter inference.
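Sparse Bayesian Learning, the selection mechanism named in this entry, can be sketched generically: each candidate term gets its own prior precision, and evidence maximization drives the precisions of irrelevant terms to infinity, pruning them. The snippet below is a standard ARD/SBL illustration (Tipping-style updates) on an invented polynomial library, not the paper's PINN-coupled implementation.

```python
import numpy as np

def sparse_bayesian_learning(Phi, y, n_iter=200):
    # Standard SBL / ARD: each weight w_j has prior precision alpha_j;
    # evidence maximization sends alpha_j -> infinity for irrelevant terms.
    n, d = Phi.shape
    alpha = np.ones(d)       # per-weight prior precisions
    beta = 1.0 / np.var(y)   # noise precision
    for _ in range(n_iter):
        Sigma = np.linalg.inv(np.diag(alpha) + beta * Phi.T @ Phi)
        m = beta * Sigma @ Phi.T @ y
        gamma = 1.0 - alpha * np.diag(Sigma)  # "well-determinedness" factors
        alpha = np.minimum(gamma / np.maximum(m ** 2, 1e-12), 1e12)
        beta = (n - gamma.sum()) / np.sum((y - Phi @ m) ** 2)
    return m, alpha

# invented example: recover a sparse polynomial from a 5-term library
rng = np.random.default_rng(2)
x = np.linspace(-1.0, 1.0, 50)
Phi = np.stack([np.ones_like(x), x, x**2, x**3, x**4], axis=1)
y = 2.0 * x - 0.5 * x**3 + 0.01 * rng.normal(size=x.size)
m, alpha = sparse_bayesian_learning(Phi, y)
print(np.round(m, 3))  # weights of pruned terms (huge alpha) collapse to ~0
```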
arXiv Detail & Related papers (2021-06-09T08:11:23Z)
- A Bayesian Perspective on Training Speed and Model Selection [51.15664724311443]
We show that a measure of a model's training speed can be used to estimate its marginal likelihood.
We verify our results in model selection tasks for linear models and for the infinite-width limit of deep neural networks.
Our results suggest a promising new direction towards explaining why neural networks trained with gradient descent are biased towards functions that generalize well.
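The stated link rests on the prequential identity log p(D) = sum_i log p(d_i | d_{<i}): the better a Bayesian learner predicts each point from the points seen before it, the higher its marginal likelihood. Below is a minimal sketch of that identity for Bayesian linear regression with known noise; it is a generic illustration of the decomposition, not the paper's code, and all names and constants are invented.

```python
import numpy as np
from scipy.stats import multivariate_normal, norm

rng = np.random.default_rng(1)
n, d, alpha, sigma2 = 20, 3, 1.0, 0.25
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d) + rng.normal(scale=np.sqrt(sigma2), size=n)

# Direct log marginal likelihood of the whole dataset:
# w ~ N(0, alpha I)  =>  y ~ N(0, alpha X X^T + sigma2 I).
lml = multivariate_normal(np.zeros(n),
                          alpha * X @ X.T + sigma2 * np.eye(n)).logpdf(y)

# "Training speed" view: accumulate one-step-ahead predictive log-likelihoods
# while updating the posterior one example at a time (prequential chain rule).
A = np.eye(d) / alpha   # posterior precision, initialized to the prior's
b = np.zeros(d)
preq = 0.0
for xi, yi in zip(X, y):
    m = np.linalg.solve(A, b)                    # current posterior mean
    var = sigma2 + xi @ np.linalg.solve(A, xi)   # predictive variance
    preq += norm(xi @ m, np.sqrt(var)).logpdf(yi)
    A += np.outer(xi, xi) / sigma2               # absorb the example
    b += xi * yi / sigma2

print(lml, preq)  # identical up to floating-point error
```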
arXiv Detail & Related papers (2020-10-27T17:56:14Z)
- Reservoir Memory Machines as Neural Computers [70.5993855765376]
Differentiable neural computers extend artificial neural networks with an explicit memory without interference.
We achieve some of the computational capabilities of differentiable neural computers with a model that can be trained very efficiently.
arXiv Detail & Related papers (2020-09-14T12:01:30Z)
- The Gaussian equivalence of generative models for learning with shallow neural networks [30.47878306277163]
We study the performance of neural networks trained on data drawn from pre-trained generative models.
We provide three strands of rigorous, analytical and numerical evidence corroborating this equivalence.
These results open a viable path to the theoretical study of machine learning models with realistic data.
arXiv Detail & Related papers (2020-06-25T21:20:09Z)
- Belief Propagation Reloaded: Learning BP-Layers for Labeling Problems [83.98774574197613]
We take one of the simplest inference methods, a truncated max-product Belief propagation, and add what is necessary to make it a proper component of a deep learning model.
This BP-Layer can be used as the final or an intermediate block in convolutional neural networks (CNNs).
The model is applicable to a range of dense prediction problems, is well-trainable and provides parameter-efficient and robust solutions in stereo, optical flow and semantic segmentation.
arXiv Detail & Related papers (2020-03-13T13:11:35Z)
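As context for what a BP-Layer wraps, here is a minimal sketch of its inference primitive: min-sum (max-product in the negative-log domain) belief propagation on a chain with unary and pairwise costs. This is a generic textbook version for illustration, not the paper's differentiable layer, and the toy costs are invented.

```python
import numpy as np

def chain_min_sum_bp(unary, pairwise):
    # One forward and one backward sweep of min-sum message passing on a
    # chain MRF. On a chain, two sweeps are exact; on loopy 2-D grids a
    # BP-Layer style model runs a fixed (truncated) number of such sweeps.
    # unary:    (T, K) per-node label costs
    # pairwise: (K, K) transition cost between neighboring labels
    T, K = unary.shape
    fwd = np.zeros((T, K))  # message into node t from the left
    bwd = np.zeros((T, K))  # message into node t from the right
    for t in range(1, T):
        fwd[t] = np.min(fwd[t-1][:, None] + unary[t-1][:, None] + pairwise,
                        axis=0)
    for t in range(T - 2, -1, -1):
        bwd[t] = np.min(bwd[t+1][None, :] + unary[t+1][None, :] + pairwise,
                        axis=1)
    return unary + fwd + bwd  # beliefs; argmin per row = MAP labels

# toy usage: a 6-pixel scanline, 3 labels, Potts smoothness cost
unary = np.array([[0.1, 1.0, 1.0],
                  [0.2, 0.9, 1.0],
                  [1.0, 0.2, 1.0],
                  [1.0, 0.1, 1.0],
                  [1.0, 1.0, 0.3],
                  [1.0, 1.0, 0.1]])
pairwise = 0.5 * (1.0 - np.eye(3))
beliefs = chain_min_sum_bp(unary, pairwise)
print(beliefs.argmin(axis=1))  # -> [0 0 1 1 2 2]
```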
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.