Prediction of the evolution of the nuclear reactor core parameters using
artificial neural network
- URL: http://arxiv.org/abs/2304.10337v2
- Date: Thu, 14 Dec 2023 10:10:48 GMT
- Title: Prediction of the evolution of the nuclear reactor core parameters using
artificial neural network
- Authors: Krzysztof Palmi, Wojciech Kubinski, Piotr Darnowski
- Abstract summary: A nuclear reactor model based on the MIT BEAVRS benchmark was used as a typical power-generating Pressurized Water Reactor (PWR).
The PARCS v3.2 nodal-diffusion core simulator was used as a full-core reactor physics solver to emulate the operation of a reactor.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A nuclear reactor model based on the MIT BEAVRS benchmark was used as a
typical power-generating Pressurized Water Reactor (PWR). The PARCS v3.2
nodal-diffusion core simulator was used as a full-core reactor physics solver to
emulate the operation of a reactor and to generate training and validation data
for the ANN. The ANN was implemented in dedicated Python 3.8 code with Google's
TensorFlow 2.0 library. The effort relied to a large extent on the appropriate
automatic transformation of the data generated by the PARCS simulator, which was
later used in the ANN development process. Various methods for improving the
accuracy of the ANN predictions were studied, such as testing different ANN
architectures to find the optimal number of neurons in the hidden layers of the
network. The results were then compared with the architectures proposed in the
literature. For the best-performing architecture, predictions were made for
different core parameters and their dependence on core loading patterns. In this
study, special focus was placed on predicting the fuel cycle length for a given
core loading pattern, as it can be considered one of the targets of economic
plant operation. For instance, the length of a single fuel cycle as a function
of the initial core loading pattern was predicted with very good accuracy
(>99%). This work contributes to the exploration of the usefulness of neural
networks in solving nuclear reactor design problems. Thanks to the application
of the ANN, designers can avoid an excessive number of core simulator runs and
more rapidly explore the space of possible solutions before performing more
detailed design considerations.
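The architecture study described in the abstract (varying the number of hidden-layer neurons and picking the best-performing regressor) can be illustrated with a small sketch. The abstract's ANN was built with TensorFlow 2.0 on PARCS-generated data; the code below is only a hypothetical NumPy stand-in on synthetic data, where each input row plays the role of an encoded core loading pattern and the target plays the role of a cycle-length-like quantity:

```python
import numpy as np

# Hypothetical stand-in for the PARCS-generated dataset: each input row is a
# flattened encoding of a core loading pattern, the target mimics a scalar
# core parameter such as cycle length (synthetic, for illustration only).
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(256, 8))
y = X @ rng.uniform(-1.0, 1.0, size=(8, 1)) + 0.1 * np.sin(3 * X[:, :1])

def train_mlp(X, y, hidden, lr=0.05, epochs=500):
    """Train a one-hidden-layer MLP regressor with full-batch gradient
    descent; `hidden` is the neuron count being searched over."""
    n_in = X.shape[1]
    W1 = rng.normal(0.0, 0.5, (n_in, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 0.5, (hidden, 1));    b2 = np.zeros(1)
    for _ in range(epochs):
        h = np.tanh(X @ W1 + b1)            # forward pass, hidden layer
        pred = h @ W2 + b2                   # linear output for regression
        err = pred - y                       # d(MSE)/d(pred), up to 2/N
        gW2 = h.T @ err / len(X); gb2 = err.mean(axis=0)
        dh = (err @ W2.T) * (1.0 - h ** 2)   # backprop through tanh
        gW1 = X.T @ dh / len(X); gb1 = dh.mean(axis=0)
        W1 -= lr * gW1; b1 -= lr * gb1
        W2 -= lr * gW2; b2 -= lr * gb2
    return float(np.mean((pred - y) ** 2))   # final training MSE

# Crude architecture search: try several hidden-layer widths and keep the
# one with the lowest MSE, mirroring the neuron-count study in the paper.
losses = {h: train_mlp(X, y, h) for h in (2, 8, 32)}
best = min(losses, key=losses.get)
```

In the paper this selection would be driven by validation error on held-out PARCS runs rather than training MSE, and the real model was a TensorFlow network; the sketch only conveys the shape of the search loop.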
Related papers
- Data-Driven Prediction and Uncertainty Quantification of PWR Crud-Induced Power Shift Using Convolutional Neural Networks [2.147634833794939]
Crud-Induced Power Shift (CIPS) is an operational challenge in Pressurized Water Reactors.
This work proposes a top-down approach to predict CIPS instances on an assembly level with reactor-specific calibration built-in.
arXiv Detail & Related papers (2024-06-27T15:04:24Z)
- FR-NAS: Forward-and-Reverse Graph Predictor for Efficient Neural Architecture Search [10.699485270006601]
We introduce a novel Graph Neural Network (GNN) predictor for Neural Architecture Search (NAS).
This predictor renders neural architectures into vector representations by combining both the conventional and inverse graph views.
The experimental results showcase a significant improvement in prediction accuracy, with a 3%--16% increase in Kendall-tau correlation.
arXiv Detail & Related papers (2024-04-24T03:22:49Z)
- Symbolic Regression on FPGAs for Fast Machine Learning Inference [2.0920303420933273]
The high-energy physics community is investigating the potential of deploying machine-learning-based solutions on Field-Programmable Gate Arrays (FPGAs).
We introduce a novel end-to-end procedure that utilizes a machine learning technique called symbolic regression (SR).
We show that our approach can approximate a 3-layer neural network using an inference model that achieves up to a 13-fold decrease in execution time, down to 5 ns, while still preserving more than 90% approximation accuracy.
arXiv Detail & Related papers (2023-05-06T17:04:02Z)
- Deep Learning Architectures for FSCV, a Comparison [0.0]
Suitability is determined by the predictive performance in the "out-of-probe" case, the response to artificially induced electrical noise, and the ability to predict when the model will be errant for a given probe.
The InceptionTime architecture, a deep convolutional neural network, has the best absolute predictive performance of the models tested but was more susceptible to noise.
A naive multilayer perceptron architecture had the second lowest prediction error and was less affected by the artificial noise, suggesting that convolutions may not be as important for this task as one might suspect.
arXiv Detail & Related papers (2022-12-05T00:20:10Z)
- Towards Theoretically Inspired Neural Initialization Optimization [66.04735385415427]
We propose a differentiable quantity, named GradCosine, with theoretical insights to evaluate the initial state of a neural network.
We show that both the training and test performance of a network can be improved by maximizing GradCosine under norm constraint.
Generalized from the sample-wise analysis into the real batch setting, NIO is able to automatically look for a better initialization with negligible cost.
arXiv Detail & Related papers (2022-10-12T06:49:16Z)
- FlowNAS: Neural Architecture Search for Optical Flow Estimation [65.44079917247369]
We propose a neural architecture search method named FlowNAS to automatically find a better encoder architecture for the flow estimation task.
Experimental results show that the discovered architecture with the weights inherited from the super-network achieves 4.67% F1-all error on KITTI.
arXiv Detail & Related papers (2022-07-04T09:05:25Z)
- Auto-PINN: Understanding and Optimizing Physics-Informed Neural Architecture [77.59766598165551]
Physics-informed neural networks (PINNs) are revolutionizing science and engineering practice by bringing together the power of deep learning to bear on scientific computation.
Here, we propose Auto-PINN, which applies Neural Architecture Search (NAS) techniques to PINN design.
A comprehensive set of pre-experiments using standard PDE benchmarks allows us to probe the structure-performance relationship in PINNs.
arXiv Detail & Related papers (2022-05-27T03:24:31Z)
- Empirical Models for Multidimensional Regression of Fission Systems [0.0]
We develop guidelines for constructing empirical models for multidimensional regression of neutron transport.
An assessment of the accuracy and precision finds that the SVR, followed closely by ANN, performs the best.
arXiv Detail & Related papers (2021-05-30T22:53:39Z)
- ANNETTE: Accurate Neural Network Execution Time Estimation with Stacked Models [56.21470608621633]
We propose a time estimation framework to decouple the architectural search from the target hardware.
The proposed methodology extracts a set of models from micro-kernel and multi-layer benchmarks and generates a stacked model for mapping and network execution time estimation.
We compare estimation accuracy and fidelity of the generated mixed models, statistical models with the roofline model, and a refined roofline model for evaluation.
arXiv Detail & Related papers (2021-05-07T11:39:05Z)
- Kernel Based Progressive Distillation for Adder Neural Networks [71.731127378807]
Adder Neural Networks (ANNs), which contain only additions, offer a new way of developing deep neural networks with low energy consumption.
However, there is an accuracy drop when all convolution filters are replaced with adder filters.
We present a novel method for further improving the performance of ANNs without increasing the trainable parameters.
arXiv Detail & Related papers (2020-09-28T03:29:19Z)
- FBNetV3: Joint Architecture-Recipe Search using Predictor Pretraining [65.39532971991778]
We present an accuracy predictor that scores architecture and training recipes jointly, guiding both sample selection and ranking.
We run fast evolutionary searches in just CPU minutes to generate architecture-recipe pairs for a variety of resource constraints.
FBNetV3 comprises a family of state-of-the-art compact neural networks that outperform both automatically and manually designed competitors.
arXiv Detail & Related papers (2020-06-03T05:20:21Z)
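Several of the predictor papers above report the Kendall-tau rank correlation as their quality metric (e.g. the 3%--16% increase in the FR-NAS entry). For illustration, a minimal tie-free implementation of that coefficient is sketched below; production code would typically use a library routine that also handles ties:

```python
def kendall_tau(a, b):
    """Kendall-tau rank correlation for two equal-length sequences,
    assuming no tied values: (concordant - discordant) / total pairs."""
    n = len(a)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (a[i] - a[j]) * (b[i] - b[j])
            if s > 0:
                concordant += 1   # pair ordered the same way in both
            elif s < 0:
                discordant += 1   # pair ordered oppositely
    return (concordant - discordant) / (n * (n - 1) / 2)
```

A value of 1.0 means the predictor ranks architectures exactly as their true performance does, and -1.0 means the ranking is fully reversed, which is why the metric suits NAS predictors better than raw regression error.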
This list is automatically generated from the titles and abstracts of the papers in this site.