Posterior Regularized Bayesian Neural Network Incorporating Soft and
Hard Knowledge Constraints
- URL: http://arxiv.org/abs/2210.08608v1
- Date: Sun, 16 Oct 2022 18:58:50 GMT
- Title: Posterior Regularized Bayesian Neural Network Incorporating Soft and
Hard Knowledge Constraints
- Authors: Jiayu Huang, Yutian Pang, Yongming Liu, Hao Yan
- Abstract summary: We propose a novel Posterior-Regularized Bayesian Neural Network (PR-BNN) model by incorporating different types of knowledge constraints.
Experiments in simulation and two case studies, on aviation landing prediction and solar energy output prediction, demonstrate the effect of the knowledge constraints and the performance improvement of the proposed model.
- Score: 12.050265348673078
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neural Networks (NNs) have been widely used in supervised learning due to
their ability to model complex nonlinear patterns, often presented in
high-dimensional data such as images and text. However, traditional NNs often
lack the ability to quantify uncertainty. Bayesian NNs (BNNs) can help measure
the uncertainty by considering distributions over the NN model parameters. In
addition, domain knowledge is commonly available and could improve
the performance of BNNs if it can be appropriately incorporated. In this work,
we propose a novel Posterior-Regularized Bayesian Neural Network (PR-BNN) model
by incorporating different types of knowledge constraints, such as the soft and
hard constraints, as a posterior regularization term. Furthermore, we propose
to combine the augmented Lagrangian method and the existing BNN solvers for
efficient inference. Experiments in simulation and two case studies, on
aviation landing prediction and solar energy output prediction, demonstrate
the effect of the knowledge constraints and the performance improvement of the
proposed model over traditional BNNs without the constraints.
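The combination of posterior regularization and the augmented Lagrangian method can be sketched compactly. Below is a minimal, illustrative implementation (not the authors' code): a mean-field variational BNN is trained with an extra augmented-Lagrangian term that penalizes the expected violation of a knowledge constraint. The toy monotonicity constraint, architecture, and hyperparameters are assumptions made for the example.

```python
# Minimal sketch of posterior regularization with an augmented-Lagrangian
# outer loop on a mean-field variational BNN. Illustrative only: the toy
# monotonicity constraint, architecture, and hyperparameters are assumptions.
import torch

torch.manual_seed(0)

# Toy data: y = 2x + noise; domain knowledge says f should be non-decreasing.
x = torch.linspace(-1, 1, 64).unsqueeze(1)
y = 2 * x + 0.1 * torch.randn_like(x)

# Mean-field Gaussian posterior q over the weights of a small MLP.
shapes = [(1, 16), (16,), (16, 1), (1,)]
mus = [torch.zeros(s, requires_grad=True) for s in shapes]
rhos = [torch.full(s, -3.0, requires_grad=True) for s in shapes]

def forward(xb):
    # One reparameterized weight sample, then a two-layer tanh MLP.
    ws = [m + torch.nn.functional.softplus(r) * torch.randn_like(m)
          for m, r in zip(mus, rhos)]
    return torch.tanh(xb @ ws[0] + ws[1]) @ ws[2] + ws[3]

def kl_term():
    # KL(q || N(0, I)) for the diagonal Gaussian posterior.
    kl = 0.0
    for m, r in zip(mus, rhos):
        s = torch.nn.functional.softplus(r)
        kl = kl + 0.5 * (s**2 + m**2 - 1 - 2 * torch.log(s)).sum()
    return kl

def violation():
    # Expected hinge violation of monotonicity on a sampled function draw.
    xs, _ = torch.sort(torch.rand(32, 1) * 2 - 1, dim=0)
    f = forward(xs)
    return torch.relu(f[:-1] - f[1:]).mean()

lam, pen = 0.0, 10.0              # Lagrange multiplier and penalty weight
opt = torch.optim.Adam(mus + rhos, lr=1e-2)
for outer in range(5):            # augmented-Lagrangian outer loop
    for _ in range(300):          # inner loop: a standard BNN (ELBO) solver
        opt.zero_grad()
        nll = ((forward(x) - y) ** 2).mean() / (2 * 0.1**2)
        g = violation()
        loss = nll + kl_term() / x.shape[0] + lam * g + 0.5 * pen * g**2
        loss.backward()
        opt.step()
    with torch.no_grad():
        lam += pen * violation().item()   # multiplier update step
print("final constraint violation:", violation().item())
```

The structure mirrors the abstract: the inner loop is an ordinary BNN solver applied to a penalized objective, and the multiplier update tightens the constraint between inner runs. Driving the violation to zero corresponds to a hard constraint; keeping a fixed penalty weight corresponds to a soft one.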
Related papers
- Bayesian Entropy Neural Networks for Physics-Aware Prediction [14.705526856205454]
We introduce BENN, a framework designed to impose constraints on Bayesian Neural Network (BNN) predictions.
BENN is capable of constraining not only the predicted values but also their derivatives and variances, ensuring a more robust and reliable model output.
Results highlight significant improvements over traditional BNNs and showcase competitive performance relative to contemporary constrained deep learning methods.
arXiv Detail & Related papers (2024-07-01T07:00:44Z)
- Neural Network Verification with Branch-and-Bound for General Nonlinearities [63.39918329535165]
Branch-and-bound (BaB) is among the most effective techniques for neural network (NN) verification.
We develop a general framework, named GenBaB, to conduct BaB on general nonlinearities to verify NNs with general architectures.
We demonstrate the effectiveness of our GenBaB on verifying a wide range of NNs, including NNs with activation functions such as Sigmoid, Tanh, Sine and GeLU.
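For intuition, a toy sketch of the BaB idea on a non-piecewise-linear activation (this is illustrative only, not the GenBaB framework): interval arithmetic gives the "bound" step, and splitting the widest input dimension gives the "branch" step.

```python
# Toy branch-and-bound verifier (illustrative, not the GenBaB implementation):
# interval bounds give the "bound" step, splitting the widest input dimension
# gives the "branch" step; tanh stands in for a general nonlinearity.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 8)), rng.normal(size=8)
W2, b2 = rng.normal(size=(8, 1)), rng.normal(size=1)

def interval_affine(lo, hi, W, b):
    # Exact interval propagation through the affine map x @ W + b.
    c, r = (lo + hi) / 2, (hi - lo) / 2
    return c @ W - r @ np.abs(W) + b, c @ W + r @ np.abs(W) + b

def output_bounds(lo, hi):
    l1, u1 = interval_affine(lo, hi, W1, b1)
    l1, u1 = np.tanh(l1), np.tanh(u1)   # sound because tanh is monotone
    l2, u2 = interval_affine(l1, u1, W2, b2)
    return l2.item(), u2.item()

def verify_positive(lo, hi, depth=0, max_depth=14):
    # Property: f(x) > 0 for all x in the box. True / False / None (unknown).
    l, u = output_bounds(lo, hi)
    if l > 0: return True               # bounds prove the property here
    if u <= 0: return False             # bounds refute it on the whole box
    if depth == max_depth: return None  # give up: incomplete at this depth
    d = int(np.argmax(hi - lo))         # branch on the widest dimension
    mid = (lo[d] + hi[d]) / 2
    hi_left, lo_right = hi.copy(), lo.copy()
    hi_left[d], lo_right[d] = mid, mid
    left = verify_positive(lo, hi_left, depth + 1, max_depth)
    if left is False: return False
    right = verify_positive(lo_right, hi, depth + 1, max_depth)
    if right is False: return False
    return True if (left and right) else None

print(verify_positive(np.array([0.1, 0.1]), np.array([0.3, 0.3])))
```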
arXiv Detail & Related papers (2024-05-31T17:51:07Z)
- Uncertainty in Graph Neural Networks: A Survey [50.63474656037679]
Graph Neural Networks (GNNs) have been extensively used in various real-world applications.
However, the predictive uncertainty of GNNs stemming from diverse sources can lead to unstable and erroneous predictions.
This survey aims to provide a comprehensive overview of GNNs from the perspective of uncertainty.
arXiv Detail & Related papers (2024-03-11T21:54:52Z)
- Bayesian Neural Networks with Domain Knowledge Priors [52.80929437592308]
We propose a framework for integrating general forms of domain knowledge into a BNN prior.
We show that BNNs using our proposed domain knowledge priors outperform those with standard priors.
arXiv Detail & Related papers (2024-02-20T22:34:53Z)
- Make Me a BNN: A Simple Strategy for Estimating Bayesian Uncertainty from Pre-trained Models [40.38541033389344]
Deep Neural Networks (DNNs) are powerful tools for various computer vision tasks, yet they often struggle with reliable uncertainty quantification.
We introduce the Adaptable Bayesian Neural Network (ABNN), a simple and scalable strategy to seamlessly transform DNNs into BNNs.
We conduct extensive experiments across multiple datasets for image classification and semantic segmentation tasks, and our results demonstrate that ABNN achieves state-of-the-art performance.
arXiv Detail & Related papers (2023-12-23T16:39:24Z)
- Scaling Model Checking for DNN Analysis via State-Space Reduction and Input Segmentation (Extended Version) [12.272381003294026]
Existing frameworks provide robustness and/or safety guarantees for trained NNs.
We proposed FANNet, the first model-checking-based framework for analyzing a broader range of NN properties.
This work develops state-space reduction and input segmentation approaches to improve the scalability and timing efficiency of formal NN analysis.
arXiv Detail & Related papers (2023-06-29T22:18:07Z)
- Masked Bayesian Neural Networks: Theoretical Guarantee and its Posterior Inference [1.2722697496405464]
We propose a new node-sparse BNN model which has good theoretical properties and is computationally feasible.
We prove that the posterior concentration rate to the true model is near minimax optimal and adaptive to the smoothness of the true model.
In addition, we develop a novel MCMC algorithm which makes the Bayesian inference of the node-sparse BNN model feasible in practice.
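The node-sparse idea can be illustrated with a very small sampler (a generic Metropolis-Hastings sketch under assumed priors, not the paper's algorithm): binary masks switch hidden nodes off, and the chain explores weights and masks jointly.

```python
# Illustrative Metropolis-Hastings over weights and binary node masks of a
# node-sparse BNN for 1-D regression (generic sketch, not the paper's MCMC).
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(-1, 1, 40)[:, None]
y = np.sin(3 * x) + 0.1 * rng.normal(size=x.shape)

H, sigma, pi = 12, 0.1, 0.3   # hidden width, noise sd, node-inclusion prior

def unpack(w):
    # Flat parameter vector -> (W1, b1, W2, b2) of a one-hidden-layer MLP.
    return w[:H][None, :], w[H:2*H], w[2*H:3*H][:, None], w[3*H]

def log_post(w, m):
    W1, b1, W2, b2 = unpack(w)
    h = np.tanh(x @ W1 + b1) * m          # mask m switches nodes off
    resid = y - (h @ W2 + b2)
    loglik = -np.sum(resid**2) / (2 * sigma**2)
    logprior_w = -np.sum(w**2) / 2        # N(0, 1) prior on all weights
    logprior_m = np.sum(m * np.log(pi) + (1 - m) * np.log(1 - pi))
    return loglik + logprior_w + logprior_m

w, m = 0.5 * rng.normal(size=3 * H + 1), np.ones(H)
lp = log_post(w, m)
for it in range(20000):
    w2, m2 = w + 0.05 * rng.normal(size=w.shape), m.copy()
    if it % 5 == 0:                       # occasionally propose a mask flip
        j = rng.integers(H)
        m2[j] = 1 - m2[j]
    lp2 = log_post(w2, m2)
    if np.log(rng.uniform()) < lp2 - lp:  # symmetric proposal: plain MH test
        w, m, lp = w2, m2, lp2
print("active hidden nodes:", int(m.sum()), "of", H)
```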
arXiv Detail & Related papers (2023-05-24T06:16:11Z)
- Incorporating Unlabelled Data into Bayesian Neural Networks [48.25555899636015]
We introduce Self-Supervised Bayesian Neural Networks, which use unlabelled data to learn models with suitable prior predictive distributions.
We show that the prior predictive distributions of self-supervised BNNs capture problem semantics better than conventional BNN priors.
Our approach offers improved predictive performance over conventional BNNs, especially in low-budget regimes.
arXiv Detail & Related papers (2023-04-04T12:51:35Z)
- Comparative Analysis of Interval Reachability for Robust Implicit and Feedforward Neural Networks [64.23331120621118]
We use interval reachability analysis to obtain robustness guarantees for implicit neural networks (INNs).
INNs are a class of implicit learning models that use implicit equations as layers.
We show that our approach performs at least as well as, and generally better than, applying state-of-the-art interval bound propagation methods to INNs.
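For intuition (a hedged sketch under a contraction assumption, not the paper's method): an implicit layer z = tanh(Wz + Ux + b) can be bounded by iterating an interval map from a box that provably contains all solutions.

```python
# Sketch of interval reachability for an implicit layer z = tanh(W z + U x + b):
# iterate the interval map from a box that contains every solution. Each
# iterate remains a sound over-approximation of the reachable set of z.
import numpy as np

rng = np.random.default_rng(2)
n, m = 6, 2
W = rng.normal(size=(n, n))
W /= 1.1 * np.abs(W).sum(axis=1).max()   # enforce ||W||_inf < 1 (well-posed)
U, b = rng.normal(size=(n, m)), rng.normal(size=n)

def interval_affine(lo, hi, A):
    # Exact interval bounds of A @ v for v in [lo, hi].
    c, r = (lo + hi) / 2, (hi - lo) / 2
    return A @ c - np.abs(A) @ r, A @ c + np.abs(A) @ r

x_lo, x_hi = np.array([-0.1, 0.0]), np.array([0.1, 0.2])
u_lo, u_hi = interval_affine(x_lo, x_hi, U)

z_lo, z_hi = -np.ones(n), np.ones(n)     # tanh outputs always lie in (-1, 1)
for _ in range(100):
    a_lo, a_hi = interval_affine(z_lo, z_hi, W)
    new_lo, new_hi = np.tanh(a_lo + u_lo + b), np.tanh(a_hi + u_hi + b)
    if np.allclose(new_lo, z_lo) and np.allclose(new_hi, z_hi):
        break                            # interval fixed point reached
    z_lo, z_hi = new_lo, new_hi
print("reachable z bounds per coordinate:\n", np.stack([z_lo, z_hi]))
```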
arXiv Detail & Related papers (2022-04-01T03:31:27Z)
- Incorporating Interpretable Output Constraints in Bayesian Neural Networks [34.103445420814644]
Output-Constrained BNN (OC-BNN) is fully consistent with the Bayesian framework for uncertainty quantification.
We demonstrate the efficacy of OC-BNNs on real-world datasets, spanning multiple domains such as healthcare, criminal justice, and credit scoring.
arXiv Detail & Related papers (2020-10-21T13:00:05Z)
- Progressive Tandem Learning for Pattern Recognition with Deep Spiking Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
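The classic rate-coding conversion behind this line of work can be sketched as follows (a standard baseline for intuition, not the paper's progressive tandem scheme): the weights of a trained ReLU network are reused by integrate-and-fire neurons, and time-averaged spike counts approximate the ReLU activations.

```python
# Rate-coded ANN-to-SNN conversion sketch (a standard baseline, not this
# paper's method): integrate-and-fire neurons reuse the ANN weights, and
# time-averaged spikes approximate the ReLU activations.
import numpy as np

rng = np.random.default_rng(3)
W1, W2 = 0.5 * rng.normal(size=(4, 16)), 0.5 * rng.normal(size=(16, 3))

def ann(x):
    return np.maximum(x @ W1, 0) @ W2    # the trained ReLU network

def snn(x, T=1000, v_th=1.0):
    v1 = np.zeros(16)                    # hidden membrane potentials
    acc = np.zeros(3)                    # integrated output current
    current = x @ W1                     # constant rate-coded input current
    for _ in range(T):
        v1 += current
        s1 = (v1 >= v_th).astype(float)  # integrate-and-fire spikes
        v1 -= s1 * v_th                  # reset by subtraction
        acc += (s1 * v_th) @ W2
    return acc / T                       # time average ~ ReLU(x @ W1) @ W2

x = rng.uniform(0, 0.2, size=4)
print("ANN output:", ann(x))
print("SNN output:", snn(x))
```

Longer windows T give a closer match to the ANN at the cost of latency, which is the trade-off that motivates faster conversion and learning schemes such as the one summarized above.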
arXiv Detail & Related papers (2020-07-02T15:38:44Z)