Prediction and Uncertainty Quantification of SAFARI-1 Axial Neutron Flux
Profiles with Neural Networks
- URL: http://arxiv.org/abs/2211.08654v1
- Date: Wed, 16 Nov 2022 04:14:13 GMT
- Title: Prediction and Uncertainty Quantification of SAFARI-1 Axial Neutron Flux
Profiles with Neural Networks
- Authors: Lesego E. Moloko, Pavel M. Bokov, Xu Wu, Kostadin N. Ivanov
- Abstract summary: Deep Neural Networks (DNNs) are used to predict the assembly axial neutron flux profiles in the SAFARI-1 research reactor.
Uncertainty Quantification of the regular DNN models' predictions is performed using Monte Carlo Dropout (MCD) and Bayesian Neural Networks solved by Variational Inference (BNN VI).
- Score: 1.4528756508275622
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Artificial Neural Networks (ANNs) have been successfully used in various
nuclear engineering applications, such as predicting reactor physics parameters
within reasonable time and with a high level of accuracy. Despite this success,
they cannot provide information about the model prediction uncertainties,
making it difficult to assess ANN prediction credibility, especially in
extrapolated domains. In this study, Deep Neural Networks (DNNs) are used to
predict the assembly axial neutron flux profiles in the SAFARI-1 research
reactor, with quantified uncertainties in the ANN predictions and extrapolation
to cycles not used in the training process. The training dataset consists of
copper-wire activation measurements, the axial measurement locations and the
measured control bank positions obtained from the reactor's historical cycles.
Uncertainty Quantification of the regular DNN models' predictions is performed
using Monte Carlo Dropout (MCD) and Bayesian Neural Networks solved by
Variational Inference (BNN VI). Results from the regular DNNs, the DNNs with
MCD, and the BNN VI models agree very well with one another, as well as with a
new measured dataset that was not used in the training process, indicating good
prediction and generalization capability. The uncertainty bands produced by MCD
and BNN VI agree very well and, in general, fully envelop the noisy measurement
data points. The developed ANNs are useful in supporting the experimental
measurement campaign and the Verification and Validation (V&V) of neutronics
codes.
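As a rough illustration of the MCD technique named above (a minimal sketch with a hypothetical architecture and input layout, not the authors' model), the idea is to keep dropout active at inference and read the spread of repeated stochastic forward passes as the prediction uncertainty; BNN VI would instead learn a distribution over the weights themselves:

```python
# Minimal Monte Carlo Dropout sketch (illustrative; not the paper's exact model).
# Hypothetical inputs: axial position and control bank positions -> relative flux.
import torch
import torch.nn as nn


class FluxNet(nn.Module):
    """Hypothetical regressor: (axial position, bank positions) -> flux."""

    def __init__(self, n_inputs: int = 3, n_hidden: int = 64, p_drop: float = 0.1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_inputs, n_hidden), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(n_hidden, n_hidden), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(n_hidden, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


def mc_dropout_predict(model: nn.Module, x: torch.Tensor, n_samples: int = 100):
    """Run n_samples stochastic forward passes with dropout still enabled."""
    model.train()  # keep dropout active at inference time
    with torch.no_grad():
        draws = torch.stack([model(x) for _ in range(n_samples)])
    return draws.mean(dim=0), draws.std(dim=0)  # predictive mean and std


model = FluxNet()
x = torch.randn(8, 3)                    # dummy batch of input points
mean, std = mc_dropout_predict(model, x)
band = (mean - 2 * std, mean + 2 * std)  # ~95% uncertainty band around the mean
```

The width of `band` is the kind of envelope that, in the paper, is checked against the noisy activation measurements.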
Related papers
- An Investigation on Machine Learning Predictive Accuracy Improvement and Uncertainty Reduction using VAE-based Data Augmentation [2.517043342442487]
Deep generative learning uses certain ML models to learn the underlying distribution of existing data and generate synthetic samples that resemble the real data.
In this study, our objective is to evaluate the effectiveness of data augmentation using variational autoencoder (VAE)-based deep generative models.
We investigate whether data augmentation leads to improved accuracy in the predictions of a deep neural network (DNN) model trained using the augmented data.
arXiv Detail & Related papers (2024-10-24T18:15:48Z) - Uncertainty in Graph Neural Networks: A Survey [50.63474656037679]
- Uncertainty in Graph Neural Networks: A Survey [50.63474656037679]
Graph Neural Networks (GNNs) have been extensively used in various real-world applications.
However, the predictive uncertainty of GNNs stemming from diverse sources can lead to unstable and erroneous predictions.
This survey aims to provide a comprehensive overview of GNNs from the perspective of uncertainty.
arXiv Detail & Related papers (2024-03-11T21:54:52Z) - Uncertainty Quantification in Multivariable Regression for Material Property Prediction with Bayesian Neural Networks [37.69303106863453]
We introduce an approach for uncertainty quantification (UQ) within physics-informed BNNs.
We present case studies for predicting the creep rupture life of steel alloys.
The most promising framework for creep life prediction is found to be a BNN based on Markov Chain Monte Carlo approximation of the posterior distribution of the network parameters.
arXiv Detail & Related papers (2023-11-04T19:40:16Z) - Deep Neural Networks Tend To Extrapolate Predictably [51.303814412294514]
- Deep Neural Networks Tend To Extrapolate Predictably [51.303814412294514]
Neural network predictions tend to be unpredictable and overconfident when faced with out-of-distribution (OOD) inputs.
We observe that neural network predictions often tend towards a constant value as input data becomes increasingly OOD.
We show how one can leverage our insights in practice to enable risk-sensitive decision-making in the presence of OOD inputs.
arXiv Detail & Related papers (2023-10-02T03:25:32Z) - Learning from Predictions: Fusing Training and Autoregressive Inference
for Long-Term Spatiotemporal Forecasts [4.068387278512612]
We propose the Scheduled Autoregressive BPTT (BPTT-SA) algorithm for predicting complex systems.
Our results show that BPTT-SA effectively reduces iterative error propagation in Convolutional RNNs and Convolutional Autoencoder RNNs.
arXiv Detail & Related papers (2023-02-22T02:46:54Z) - Satellite Anomaly Detection Using Variance Based Genetic Ensemble of
- Satellite Anomaly Detection Using Variance Based Genetic Ensemble of Neural Networks [7.848121055546167]
We use an efficient ensemble of the predictions from multiple Recurrent Neural Networks (RNNs).
For prediction, each RNN is guided by a Genetic Algorithm (GA) which constructs the optimal structure for each RNN model.
This paper uses Monte Carlo (MC) dropout as an approximate version of BNNs.
arXiv Detail & Related papers (2023-02-10T22:09:00Z) - Enhanced physics-constrained deep neural networks for modeling vanadium
redox flow battery [62.997667081978825]
We propose an enhanced version of the physics-constrained deep neural network (PCDNN) approach to provide high-accuracy voltage predictions.
The ePCDNN can accurately capture the voltage response throughout the charge-discharge cycle, including the tail region of the voltage discharge curve.
arXiv Detail & Related papers (2022-03-03T19:56:24Z) - Statistical model-based evaluation of neural networks [74.10854783437351]
- Statistical model-based evaluation of neural networks [74.10854783437351]
We develop an experimental setup for the evaluation of neural networks (NNs).
The setup helps to benchmark a set of NNs vis-a-vis minimum-mean-square-error (MMSE) performance bounds.
This allows us to test the effects of training data size, data dimension, data geometry, noise, and mismatch between training and testing conditions.
arXiv Detail & Related papers (2020-11-18T00:33:24Z) - Unlabelled Data Improves Bayesian Uncertainty Calibration under
- Unlabelled Data Improves Bayesian Uncertainty Calibration under Covariate Shift [100.52588638477862]
We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
arXiv Detail & Related papers (2020-06-26T13:50:19Z) - Frequentist Uncertainty in Recurrent Neural Networks via Blockwise
Influence Functions [121.10450359856242]
Recurrent neural networks (RNNs) are instrumental in modelling sequential and time-series data.
Existing approaches for uncertainty quantification in RNNs are based predominantly on Bayesian methods.
We develop a frequentist alternative that: (a) does not interfere with model training or compromise its accuracy, (b) applies to any RNN architecture, and (c) provides theoretical coverage guarantees on the estimated uncertainty intervals.
arXiv Detail & Related papers (2020-06-20T22:45:32Z)