A Primer on Bayesian Neural Networks: Review and Debates
- URL: http://arxiv.org/abs/2309.16314v1
- Date: Thu, 28 Sep 2023 10:09:15 GMT
- Title: A Primer on Bayesian Neural Networks: Review and Debates
- Authors: Julyan Arbel, Konstantinos Pitas, Mariia Vladimirova, Vincent Fortuin
- Abstract summary: Bayesian neural networks (BNNs) have emerged as a compelling extension of conventional neural networks.
This primer presents a systematic introduction to the fundamental concepts of neural networks and Bayesian inference.
The target audience comprises statisticians with a potential background in Bayesian methods but lacking deep learning expertise.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neural networks have achieved remarkable performance across various problem
domains, but their widespread applicability is hindered by inherent limitations
such as overconfidence in predictions, lack of interpretability, and
vulnerability to adversarial attacks. To address these challenges, Bayesian
neural networks (BNNs) have emerged as a compelling extension of conventional
neural networks, integrating uncertainty estimation into their predictive
capabilities.
This comprehensive primer presents a systematic introduction to the
fundamental concepts of neural networks and Bayesian inference, elucidating
their synergistic integration for the development of BNNs. The target audience
comprises statisticians with a potential background in Bayesian methods but
lacking deep learning expertise, as well as machine learners proficient in deep
neural networks but with limited exposure to Bayesian statistics. We provide an
overview of commonly employed priors, examining their impact on model behavior
and performance. Additionally, we delve into the practical considerations
associated with training and inference in BNNs.
Furthermore, we explore advanced topics within the realm of BNN research,
acknowledging the existence of ongoing debates and controversies. By offering
insights into cutting-edge developments, this primer not only equips
researchers and practitioners with a solid foundation in BNNs, but also
illuminates the potential applications of this dynamic field. As a valuable
resource, it fosters an understanding of BNNs and their promising prospects,
facilitating further advancements in the pursuit of knowledge and innovation.
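The uncertainty-aware prediction the abstract describes can be sketched in a few lines. The toy below is purely illustrative and not from the paper: it assumes a Gaussian approximate posterior over the weights of a single linear layer (the means and standard deviations are made up), draws Monte Carlo weight samples, and reports the mean and spread of the resulting predictions — the spread being the model's epistemic uncertainty.

```python
# Illustrative sketch (assumptions, not the paper's method): Monte Carlo
# predictive uncertainty for a toy Bayesian linear layer. Weights are drawn
# from an assumed Gaussian approximate posterior q(w) = N(mu, sigma^2);
# averaging predictions over samples yields a mean and an uncertainty.
import numpy as np

rng = np.random.default_rng(0)

# Assumed approximate posterior over a 3-dimensional weight vector.
mu = np.array([0.5, -1.0, 2.0])     # posterior means
sigma = np.array([0.1, 0.2, 0.05])  # posterior standard deviations

x = np.array([1.0, 2.0, 0.5])       # one test input

# Draw S weight samples and compute the prediction for each.
S = 10_000
w_samples = rng.normal(mu, sigma, size=(S, 3))  # shape (S, 3)
preds = w_samples @ x                           # shape (S,)

pred_mean = preds.mean()
pred_std = preds.std()  # epistemic uncertainty in the prediction

print(f"predictive mean ~ {pred_mean:.3f}, std ~ {pred_std:.3f}")
```

For this linear toy the predictive mean and variance are also available in closed form (mu @ x and sum((sigma * x)**2)); the sampling route is shown because it is the one that generalizes to nonlinear networks.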
Related papers
- Attacking Bayes: On the Adversarial Robustness of Bayesian Neural Networks (2024-04-27)
  We investigate whether it is possible to successfully break state-of-the-art BNN inference methods and prediction pipelines. We find that BNNs trained with state-of-the-art approximate inference methods, and even BNNs trained with Hamiltonian Monte Carlo, are highly susceptible to adversarial attacks.
- Uncertainty in Graph Neural Networks: A Survey (2024-03-11)
  Graph Neural Networks (GNNs) have been extensively used in various real-world applications. However, the predictive uncertainty of GNNs, stemming from diverse sources, can lead to unstable and erroneous predictions. This survey aims to provide a comprehensive overview of GNNs from the perspective of uncertainty.
- Bayesian Neural Networks with Domain Knowledge Priors (2024-02-20)
  We propose a framework for integrating general forms of domain knowledge into a BNN prior. We show that BNNs using our proposed domain knowledge priors outperform those with standard priors.
- Posterior Regularized Bayesian Neural Network Incorporating Soft and Hard Knowledge Constraints (2022-10-16)
  We propose a novel Posterior-Regularized Bayesian Neural Network (PR-BNN) model that incorporates different types of knowledge constraints. Experiments in simulation and two case studies, on aviation landing prediction and solar energy output prediction, demonstrate the effect of the knowledge constraints and the performance improvement of the proposed model.
- Trustworthy Graph Neural Networks: Aspects, Methods and Trends (2022-05-16)
  Graph neural networks (GNNs) have emerged as competent graph learning methods for diverse real-world scenarios. However, performance-oriented GNNs have exhibited potential adverse effects, such as vulnerability to adversarial attacks. To avoid these unintentional harms, it is necessary to build competent GNNs characterised by trustworthiness.
- A Comprehensive Survey on Trustworthy Graph Neural Networks: Privacy, Robustness, Fairness, and Explainability (2022-04-18)
  Graph Neural Networks (GNNs) have made rapid developments in recent years. However, GNNs can leak private information, are vulnerable to adversarial attacks, and can inherit and magnify societal bias from training data. This paper gives a comprehensive survey of GNNs in the computational aspects of privacy, robustness, fairness, and explainability.
- Robustness of Bayesian Neural Networks to White-Box Adversarial Attacks (2021-11-16)
  Bayesian neural networks (BNNs) are robust and adept at handling adversarial attacks by incorporating randomness. We create our BNN model, called BNN-DenseNet, by fusing Bayesian inference (i.e., variational Bayes) into the DenseNet architecture. An adversarially trained BNN outperforms its non-Bayesian, adversarially trained counterpart in most experiments.
- A Survey of Uncertainty in Deep Neural Networks (2021-07-07)
  It is intended to give anyone interested in uncertainty estimation in neural networks a broad overview and introduction. A comprehensive introduction to the most crucial sources of uncertainty is given, along with their separation into reducible model uncertainty and irreducible data uncertainty. For practical application, we discuss different measures of uncertainty and approaches for the calibration of neural networks, and give an overview of existing baselines and implementations.
- S2-BNN: Bridging the Gap Between Self-Supervised Real and 1-bit Neural Networks via Guided Distribution Calibration (2021-02-17)
  We present a novel guided learning paradigm that distills knowledge from real-valued networks into binary networks on the final prediction distribution. Our proposed method boosts the simple contrastive learning baseline by an absolute gain of 5.515% on BNNs. It achieves substantial improvement over the simple contrastive learning baseline and is even comparable to many mainstream supervised BNN methods.
- Recent Advances in Understanding Adversarial Robustness of Deep Neural Networks (2020-11-03)
  It is increasingly important to obtain models with high robustness that are resistant to adversarial examples. We give preliminary definitions of what adversarial attacks and robustness are. We study frequently used benchmarks and mention theoretically proved bounds for adversarial robustness.
- Incorporating Interpretable Output Constraints in Bayesian Neural Networks (2020-10-21)
  Output-Constrained BNNs (OC-BNNs) are fully consistent with the Bayesian framework for uncertainty quantification. We demonstrate the efficacy of OC-BNNs on real-world datasets spanning multiple domains, such as healthcare, criminal justice, and credit scoring.
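Several of the papers above evaluate adversarial robustness. The attack style they study can be illustrated with a minimal fast gradient sign method (FGSM) sketch against a toy logistic-regression "network"; the weight vector, input, and step size below are assumptions for illustration, not taken from any listed paper.

```python
# Illustrative sketch (assumptions, not from any paper above): FGSM applied
# to a toy logistic-regression model with a hand-set weight vector. The
# attack steps the input in the sign of the loss gradient to reduce the
# model's confidence in the true label.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Assumed fixed model: p(y=1 | x) = sigmoid(w @ x).
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.2, -0.4, 1.0])
y = 1  # true label in {0, 1}

p = sigmoid(w @ x)

# Gradient of the cross-entropy loss w.r.t. the input is (p - y) * w.
grad_x = (p - y) * w

# FGSM perturbation: one step of size eps in the sign of the gradient.
eps = 0.3
x_adv = x + eps * np.sign(grad_x)

p_adv = sigmoid(w @ x_adv)
print(f"clean confidence {p:.3f} -> adversarial confidence {p_adv:.3f}")
```

The same one-step recipe applies to deep networks once the input gradient comes from backpropagation; the robustness papers above measure how much such perturbations degrade accuracy and, for BNNs, whether averaging over posterior weight samples blunts the attack.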
This list is automatically generated from the titles and abstracts of the papers on this site.