Super Resolution for Turbulent Flows in 2D: Stabilized Physics Informed
Neural Networks
- URL: http://arxiv.org/abs/2204.07413v1
- Date: Fri, 15 Apr 2022 10:22:56 GMT
- Title: Super Resolution for Turbulent Flows in 2D: Stabilized Physics Informed
Neural Networks
- Authors: Mykhaylo Zayats, Małgorzata J. Zimoń, Kyongmin Yeo, Sergiy Zhuk
- Abstract summary: We propose a new design of a neural network for solving a zero-shot super-resolution problem for turbulent flows.
We embed a Luenberger-type observer into the network's architecture to inform the network of the physics of the process.
- Score: 0.05735035463793007
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose a new design of a neural network for solving a zero-shot
super-resolution problem for turbulent flows. We embed a Luenberger-type
observer into the network's architecture to inform the network of the physics
of the process, and to provide error correction and stabilization mechanisms.
In addition, to compensate for the decrease in the observer's performance due
to the presence of an unknown destabilizing forcing, the network is designed
to estimate the contribution of the unknown forcing implicitly from the data
over the course of training. By running a set of numerical experiments, we
demonstrate that the proposed network does recover the unknown forcing from
data and is capable of predicting turbulent flows in high resolution from
low-resolution noisy observations.
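The observer-based correction at the heart of the abstract can be illustrated with a toy sketch. The example below is a classical discrete-time Luenberger observer for a small linear system, not the paper's actual network architecture; the matrices `A`, `C`, and the gain `L` are illustrative choices.

```python
import numpy as np

# Toy discrete-time Luenberger observer (illustrative sketch, not the paper's
# architecture): the model prediction A @ x_hat is corrected by the mismatch
# between the observation y and the predicted output C @ x_hat:
#   x_hat[k+1] = A @ x_hat[k] + L @ (y[k] - C @ x_hat[k])
# In the paper this correction term is embedded into the network so that
# low-resolution observations stabilize the high-resolution estimate.

A = np.array([[0.9, 0.1], [0.0, 0.8]])   # toy state dynamics
C = np.array([[1.0, 0.0]])               # only the first state is observed
L = np.array([[0.5], [0.3]])             # observer (correction) gain

def observer_step(x_hat, y):
    """One observer update: predict with the model, correct with data."""
    innovation = y - C @ x_hat           # measurement mismatch
    return A @ x_hat + L @ innovation

x_true = np.array([1.0, -1.0])           # true state, unknown to the observer
x_hat = np.zeros(2)                      # observer starts from a wrong guess
for _ in range(50):
    y = C @ x_true                       # partial, low-dimensional observation
    x_hat = observer_step(x_hat, y)
    x_true = A @ x_true

# The estimation error contracts because A - L @ C is stable.
```

The design choice the paper exploits is that the correction gain injects measured data at every step, which counteracts model error; the paper additionally learns the unknown forcing term that a fixed observer cannot absorb.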
Related papers
- SING: Semantic Image Communications using Null-Space and INN-Guided Diffusion Models [52.40011613324083]
Deep joint source-channel coding (DeepJSCC) systems have recently demonstrated remarkable performance in wireless image transmission.
Existing methods focus on minimizing distortion between the transmitted image and the reconstructed version at the receiver, often overlooking perceptual quality.
We propose SING, a novel framework that formulates the recovery of high-quality images from corrupted reconstructions as an inverse problem.
arXiv Detail & Related papers (2025-03-16T12:32:11Z) - Improving hp-Variational Physics-Informed Neural Networks for Steady-State Convection-Dominated Problems [4.0974219394860505]
This paper studies two extensions of applying hp-variational physics-informed neural networks, more precisely the FastVPINNs framework, to convection-dominated convection-diffusion-reaction problems.
First, a term in the spirit of an SUPG stabilization is included in the loss functional and a network architecture is proposed that predicts spatially varying stabilization parameters.
The second novelty is the proposal of a network architecture that learns good parameters for a class of indicator functions.
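The idea of adding an SUPG-flavored term to a PINN loss can be sketched as follows. This is a hedged, simplified 1D illustration (finite differences instead of a variational formulation, and names of my own choosing), not the FastVPINNs API.

```python
import numpy as np

# Sketch: for a 1D convection-diffusion problem  b u' - eps u'' = f,
# an SUPG-style loss adds the strong-form residual weighted along the
# convection direction with a stabilization parameter tau; the paper
# proposes predicting spatially varying tau with a network rather than
# fixing it by hand.

def pde_residual(u, x, b=1.0, eps=1e-3, f=1.0):
    """Strong-form residual b u' - eps u'' - f via finite differences."""
    h = x[1] - x[0]
    du = np.gradient(u, h)
    d2u = np.gradient(du, h)
    return b * du - eps * d2u - f

def supg_loss(u, x, tau, b=1.0):
    """Mean-square residual plus an SUPG-style stabilized penalty."""
    r = pde_residual(u, x)
    # SUPG tests the residual against tau * b * v'; in this sketch that
    # reduces to an extra penalty proportional to tau * b * r.
    return np.mean(r**2) + np.mean((tau * b * r) ** 2)

x = np.linspace(0.0, 1.0, 101)
# u(x) = x solves b u' = f away from the boundary layer, so its residual
# (and hence the stabilized loss) vanishes on this grid.
loss = supg_loss(x.copy(), x, tau=0.1)
```

In the convection-dominated regime (small `eps`), the unstabilized loss alone tends to produce oscillatory solutions; the tau-weighted term penalizes the residual along the streamline direction, which is what the predicted stabilization parameters control.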
arXiv Detail & Related papers (2024-11-14T10:21:41Z) - Residual resampling-based physics-informed neural network for neutron diffusion equations [7.105073499157097]
The neutron diffusion equation plays a pivotal role in the analysis of nuclear reactors.
Traditional PINN approaches often utilize fully connected network (FCN) architecture.
The proposed R2-PINN effectively overcomes the limitations inherent in current methods, providing more accurate and robust solutions for neutron diffusion equations.
arXiv Detail & Related papers (2024-06-23T13:49:31Z) - Uncertainty Propagation through Trained Deep Neural Networks Using
Factor Graphs [4.704825771757308]
Uncertainty propagation seeks to estimate aleatoric uncertainty by propagating input uncertainties to network predictions.
Motivated by the complex information flows within deep neural networks, we developed a novel approach by posing uncertainty propagation as a non-linear optimization problem using factor graphs.
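For context on the underlying problem, the simplest baseline for propagating input uncertainty through a nonlinearity is first-order linearization. The sketch below shows that baseline, not the paper's factor-graph formulation; the function names are illustrative.

```python
import numpy as np

# Hedged sketch of the underlying problem (linearized moment propagation,
# not the factor-graph method itself): input uncertainty is pushed through
# a nonlinearity f via the Jacobian J at the input mean,
#   Sigma_out ~= J @ Sigma_in @ J.T

def propagate(mean, cov, f, jac):
    """First-order propagation of mean and covariance through f."""
    J = jac(mean)
    return f(mean), J @ cov @ J.T

f = np.tanh
jac = lambda x: np.diag(1.0 - np.tanh(x) ** 2)   # tanh has a diagonal Jacobian

mu_out, cov_out = propagate(np.zeros(2), 0.01 * np.eye(2), f, jac)
# At x = 0, tanh is locally the identity, so the covariance passes through.
```

Posing propagation as optimization over a factor graph, as the paper does, is a way to go beyond this layer-by-layer linearization when information flows through a network are more complex.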
arXiv Detail & Related papers (2023-12-10T17:26:27Z) - Accelerating Scalable Graph Neural Network Inference with Node-Adaptive
Propagation [80.227864832092]
Graph neural networks (GNNs) have exhibited exceptional efficacy in a diverse array of applications.
The sheer size of large-scale graphs presents a significant challenge to real-time inference with GNNs.
We propose an online propagation framework and two novel node-adaptive propagation methods.
arXiv Detail & Related papers (2023-10-17T05:03:00Z) - To be or not to be stable, that is the question: understanding neural
networks for inverse problems [0.0]
In this paper, we theoretically analyze the trade-off between stability and accuracy of neural networks.
We propose different supervised and unsupervised solutions to increase the network stability and maintain a good accuracy.
arXiv Detail & Related papers (2022-11-24T16:16:40Z) - Variational Neural Networks [88.24021148516319]
We propose a method for uncertainty estimation in neural networks called Variational Neural Network (VNN).
VNN generates parameters for the output distribution of a layer by transforming its inputs with learnable sub-layers.
In uncertainty quality estimation experiments, we show that VNNs achieve better uncertainty quality than Monte Carlo Dropout or Bayes By Backpropagation methods.
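The summary's description of a VNN layer can be sketched as follows. This is a minimal illustration under my own assumptions about shapes and parameterization (Gaussian output with a log-variance head), not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hedged sketch of the VNN idea: instead of keeping a distribution over
# weights, a VNN layer maps its input through two learnable sub-layers to
# the mean and log-variance of the layer's output distribution, then
# samples the activation from that distribution.

class VNNLayer:
    def __init__(self, d_in, d_out):
        self.W_mu = rng.normal(scale=0.1, size=(d_out, d_in))      # mean sub-layer
        self.W_logvar = rng.normal(scale=0.1, size=(d_out, d_in))  # variance sub-layer

    def __call__(self, x):
        mu = self.W_mu @ x                        # output-distribution mean
        sigma = np.exp(0.5 * self.W_logvar @ x)   # output-distribution std
        return mu + sigma * rng.normal(size=mu.shape)  # sample the activation

layer = VNNLayer(4, 3)
x = np.ones(4)
samples = np.stack([layer(x) for _ in range(2000)])
# Averaging many samples recovers the predicted mean layer.W_mu @ x.
```

Because the spread is produced per-input by the variance sub-layer, repeated forward passes give an input-dependent uncertainty estimate, which is what the paper compares against Monte Carlo Dropout and Bayes By Backprop.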
arXiv Detail & Related papers (2022-07-04T15:41:02Z) - Robust lEarned Shrinkage-Thresholding (REST): Robust unrolling for
sparse recovery [87.28082715343896]
We consider deep neural networks for solving inverse problems that are robust to forward model mis-specifications.
We design a new robust deep neural network architecture by applying algorithm unfolding techniques to a robust version of the underlying recovery problem.
The proposed REST network is shown to outperform state-of-the-art model-based and data-driven algorithms in both compressive sensing and radar imaging problems.
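Algorithm unfolding, the technique REST builds on, can be sketched with the classical ISTA iteration. The code below is that generic illustration, not the REST architecture; in a real unrolled network the step sizes and thresholds become learnable per-layer parameters.

```python
import numpy as np

# Hedged sketch of algorithm unfolding for sparse recovery: each "layer"
# of the unrolled network is one shrinkage-thresholding step, and unrolling
# a fixed number of steps yields a network whose step sizes and thresholds
# can then be trained from data.

def soft_threshold(v, t):
    """Proximal operator of the l1 norm: the shrinkage-thresholding step."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def unrolled_ista(y, A, lam=0.02, n_layers=200):
    step = 1.0 / np.linalg.norm(A, 2) ** 2       # 1 / Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(n_layers):                    # one iteration = one "layer"
        x = soft_threshold(x + step * A.T @ (y - A @ x), step * lam)
    return x

rng = np.random.default_rng(1)
A = rng.normal(size=(30, 50)) / np.sqrt(30)      # random sensing matrix
x_true = np.zeros(50)
x_true[[3, 17, 31]] = [1.0, -2.0, 1.5]           # 3-sparse signal
x_hat = unrolled_ista(A @ x_true, A)             # recover from 30 measurements
```

REST's contribution is to unroll a robust variant of this recovery problem so that the resulting network tolerates mis-specification of the forward model `A`.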
arXiv Detail & Related papers (2021-10-20T06:15:45Z) - Distribution-sensitive Information Retention for Accurate Binary Neural
Network [49.971345958676196]
We present a novel Distribution-sensitive Information Retention Network (DIR-Net) to retain the information of the forward activations and backward gradients.
Our DIR-Net consistently outperforms the SOTA binarization approaches under mainstream and compact architectures.
Deployed on real-world resource-limited devices, our DIR-Net achieves 11.1 times storage saving and 5.4 times speedup.
arXiv Detail & Related papers (2021-09-25T10:59:39Z) - Non-Singular Adversarial Robustness of Neural Networks [58.731070632586594]
Adversarial robustness has become an emerging challenge for neural networks owing to their over-sensitivity to small input perturbations.
We formalize the notion of non-singular adversarial robustness for neural networks through the lens of joint perturbations to data inputs as well as model weights.
arXiv Detail & Related papers (2021-02-23T20:59:30Z) - Towards Robust Neural Networks via Close-loop Control [12.71446168207573]
Deep neural networks are vulnerable to various perturbations due to their black-box nature.
Recent studies have shown that a deep neural network can misclassify the data even if the input data is perturbed by an imperceptible amount.
arXiv Detail & Related papers (2021-02-03T03:50:35Z) - Binary Neural Networks: A Survey [126.67799882857656]
The binary neural network serves as a promising technique for deploying deep models on resource-limited devices.
The binarization inevitably causes severe information loss, and even worse, its discontinuity brings difficulty to the optimization of the deep network.
We present a survey of these algorithms, mainly categorized into the native solutions directly conducting binarization, and the optimized ones using techniques like minimizing the quantization error, improving the network loss function, and reducing the gradient error.
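The "native" binarization and quantization-error minimization that the survey categorizes can be sketched in a few lines. This is the standard sign-plus-scale scheme, shown as a generic illustration.

```python
import numpy as np

# Hedged sketch of the basic binarization step surveyed here: weights are
# quantized to {-1, +1} with sign(), and a scalar scale alpha is chosen in
# closed form to minimize the quantization error ||W - alpha * sign(W)||.

def binarize(W):
    B = np.sign(W)
    B[B == 0] = 1.0                  # map sign(0) to +1 by convention
    alpha = np.mean(np.abs(W))       # least-squares optimal scale
    return alpha, B

W = np.array([0.3, -0.7, 0.1, -0.2])
alpha, B = binarize(W)
# alpha * B approximates W using 1-bit weights plus one scalar.
```

The discontinuity of `sign()` is exactly the optimization difficulty the survey discusses: gradients through it are zero almost everywhere, which is why the surveyed methods resort to techniques such as straight-through estimators and improved loss functions.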
arXiv Detail & Related papers (2020-03-31T16:47:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.