Bayesian Neural Networks: A Min-Max Game Framework
- URL: http://arxiv.org/abs/2311.11126v2
- Date: Wed, 29 May 2024 08:43:20 GMT
- Title: Bayesian Neural Networks: A Min-Max Game Framework
- Authors: Junping Hong, Ercan Engin Kuruoglu
- Abstract summary: We formulate the BNN via game theory between the deterministic neural network $f$ and the sampling network $f + \xi$ or $f + r*\xi$.
Compared with previous BNNs, the game-theoretic BNN learns a solution space within a certain gap between the center $f$ and the sampling point $f + r*\xi$.
The minimum points between $f$ and $f + r*\xi$ become stable when the subspace dimension is large enough and the model $f$ is well trained.
- Score: 1.8032347672439046
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper is a preliminary study of the robustness and noise analysis of deep neural networks via a game-theoretic formulation of Bayesian Neural Networks (BNN) and the maximal coding rate distortion loss. BNN has been shown to provide some robustness to deep learning, and the minimax method has long been a natural conservative way to assist the Bayesian method. Inspired by the recent closed-loop transcription neural network, we formulate the BNN via game theory between the deterministic neural network $f$ and the sampling network $f + \xi$ or $f + r*\xi$. Compared with previous BNNs, the game-theoretic BNN learns a solution space within a certain gap between the center $f$ and the sampling point $f + r*\xi$, and is a conservative choice with a meaningful prior setting. Furthermore, the minimum points between $f$ and $f + r*\xi$ become stable when the subspace dimension is large enough and the model $f$ is well trained. With these properties, the model $f$ has a high chance of recognizing out-of-distribution or noisy data at the subspace level rather than at the prediction level, even if $f$ is trained online for only a few iterations on true data. So far, our experiments are limited to the MNIST and Fashion MNIST data sets; further experiments with realistic data sets and more complex neural network models are needed to validate the above arguments.
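To make the min-max structure concrete, below is a minimal sketch of one possible training step, assuming the inner player ascends a task loss over a weight-space perturbation $\xi$ (scaled by a radius $r$) while the outer player descends the same loss at the sampled point $f + r*\xi$. The paper's actual objective (the maximal coding rate distortion loss) and its sampling scheme are not reproduced here; the cross-entropy loss, the hyperparameters (`r`, `inner_steps`, `inner_lr`), and the function names are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of a min-max step between the center network f and the
# sampled network f + r*xi. Cross-entropy stands in for the paper's
# maximal coding rate distortion loss; r, inner_steps, inner_lr are
# illustrative, not the authors' settings.
import torch
import torch.nn.functional as F
from torch.func import functional_call  # requires PyTorch >= 2.0

def min_max_step(f, x, y, optimizer, r=0.1, inner_steps=1, inner_lr=1e-2):
    params = dict(f.named_parameters())

    # Inner player (max): adjust the noise xi to maximize the loss of f + r*xi.
    xi = {k: 1e-3 * torch.randn_like(v) for k, v in params.items()}
    for _ in range(inner_steps):
        for v in xi.values():
            v.requires_grad_(True)
        perturbed = {k: params[k] + r * xi[k] for k in params}
        inner_loss = F.cross_entropy(functional_call(f, perturbed, (x,)), y)
        grads = torch.autograd.grad(inner_loss, list(xi.values()))
        xi = {k: (xi[k] + inner_lr * g).detach() for k, g in zip(xi, grads)}

    # Outer player (min): update the center f at the sampled point f + r*xi.
    perturbed = {k: params[k] + r * xi[k] for k in params}
    outer_loss = F.cross_entropy(functional_call(f, perturbed, (x,)), y)
    optimizer.zero_grad()
    outer_loss.backward()
    optimizer.step()
    return outer_loss.item()
```

In this reading, a larger $r$ makes the game more conservative, since the center $f$ must perform well over a wider neighborhood of sampled weights.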
Related papers
- Computable Lipschitz Bounds for Deep Neural Networks [0.0]
We analyse three existing upper bounds written for the $l^2$ norm.
We propose two novel bounds for both feed-forward fully-connected neural networks and convolutional neural networks.
arXiv Detail & Related papers (2024-10-28T14:09:46Z) - LinSATNet: The Positive Linear Satisfiability Neural Networks [116.65291739666303]
This paper studies how to introduce the popular positive linear satisfiability constraints into neural networks.
We propose the first differentiable satisfiability layer based on an extension of the classic Sinkhorn algorithm for jointly encoding multiple sets of marginal distributions (a brief sketch of the classic Sinkhorn step appears after this list).
arXiv Detail & Related papers (2024-07-18T22:05:21Z) - Masked Bayesian Neural Networks: Theoretical Guarantee and its Posterior Inference [1.2722697496405464]
We propose a new node-sparse BNN model which has good theoretical properties and is computationally feasible.
We prove that the posterior concentration rate to the true model is near minimax optimal and adaptive to the smoothness of the true model.
In addition, we develop a novel MCMC algorithm which makes the Bayesian inference of the node-sparse BNN model feasible in practice.
arXiv Detail & Related papers (2023-05-24T06:16:11Z) - Coin Flipping Neural Networks [8.009932864430901]
We show that neural networks with access to randomness can outperform deterministic networks by using amplification.
We conjecture that for most classification problems, there is a CFNN which solves them with higher accuracy or fewer neurons than any deterministic network.
arXiv Detail & Related papers (2022-06-18T11:19:44Z) - Neural Capacitance: A New Perspective of Neural Network Selection via Edge Dynamics [85.31710759801705]
Current practice requires expensive computational costs in model training for performance prediction.
We propose a novel framework for neural network selection by analyzing the governing dynamics over synaptic connections (edges) during training.
Our framework is built on the fact that back-propagation during neural network training is equivalent to the dynamical evolution of synaptic connections.
arXiv Detail & Related papers (2022-01-11T20:53:15Z) - Robustness of Bayesian Neural Networks to White-Box Adversarial Attacks [55.531896312724555]
Bayesian Neural Networks (BNNs) are robust and adept at handling adversarial attacks by incorporating randomness.
We create our BNN model, called BNN-DenseNet, by fusing Bayesian inference (i.e., variational Bayes) into the DenseNet architecture.
An adversarially-trained BNN outperforms its non-Bayesian, adversarially-trained counterpart in most experiments.
arXiv Detail & Related papers (2021-11-16T16:14:44Z) - EE-Net: Exploitation-Exploration Neural Networks in Contextual Bandits [52.98326168071513]
"EE-Net" is a neural-based bandit approach with a novel exploration strategy.
We show that EE-Net achieves $\mathcal{O}(\sqrt{T \log T})$ regret, which is tighter than existing state-of-the-art neural bandit algorithms.
arXiv Detail & Related papers (2021-10-07T04:12:36Z) - A Law of Robustness for Weight-bounded Neural Networks [37.54604146791085]
Recently, Bubeck et al. (2020) conjectured that when using two-layer networks with $k$ neurons to fit a generic dataset, the smallest Lipschitz constant is $\Omega(\sqrt{\frac{n}{k}})$.
In this work we derive a lower bound on the Lipschitz constant for any arbitrary model class with bounded Rademacher complexity.
Our result coincides with that conjectured in (Bubeck et al., 2020) for two-layer networks under the assumption of bounded weights.
arXiv Detail & Related papers (2021-02-16T11:28:59Z) - Towards Understanding Hierarchical Learning: Benefits of Neural Representations [160.33479656108926]
In this work, we demonstrate that intermediate neural representations add more flexibility to neural networks.
We show that neural representations can achieve improved sample complexity compared with the raw input.
Our results characterize when neural representations are beneficial, and may provide a new perspective on why depth is important in deep learning.
arXiv Detail & Related papers (2020-06-24T02:44:54Z) - Approximation and Non-parametric Estimation of ResNet-type Convolutional Neural Networks [52.972605601174955]
We show that a ResNet-type CNN can attain the minimax optimal error rates in important function classes.
We derive approximation and estimation error rates of the aforementioned type of CNNs for the Barron and Hölder classes.
arXiv Detail & Related papers (2019-03-24T19:42:39Z)
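As referenced in the LinSATNet entry above, that layer extends the classic Sinkhorn algorithm. The sketch below shows only the classic step (alternating row and column rescaling toward target marginals), not LinSATNet's differentiable satisfiability layer or its joint encoding of multiple marginal sets; the function name and defaults are illustrative.

```python
# Classic Sinkhorn normalization: the base algorithm that LinSATNet extends.
# This is NOT the LinSATNet layer itself; names and defaults are illustrative.
import torch

def sinkhorn(scores, row_marginals, col_marginals, n_iters=50, eps=1e-8):
    P = torch.exp(scores)  # strictly positive initialization
    for _ in range(n_iters):
        P = P * (row_marginals / (P.sum(dim=1) + eps)).unsqueeze(1)  # match row sums
        P = P * (col_marginals / (P.sum(dim=0) + eps)).unsqueeze(0)  # match column sums
    return P

# Example: push random scores toward a doubly stochastic matrix.
P = sinkhorn(torch.randn(4, 4), torch.ones(4), torch.ones(4))
print(P.sum(dim=0), P.sum(dim=1))  # both approach all-ones
```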