An Operator Learning Framework for Spatiotemporal Super-resolution of Scientific Simulations
- URL: http://arxiv.org/abs/2311.02328v2
- Date: Sun, 7 Apr 2024 01:36:29 GMT
- Title: An Operator Learning Framework for Spatiotemporal Super-resolution of Scientific Simulations
- Authors: Valentin Duruisseaux, Amit Chakraborty,
- Abstract summary: The Super Resolution Operator Network (SROpNet) frames super-resolution as an operator learning problem.
It draws inspiration from existing operator learning architectures to learn continuous representations of solutions to parametric differential equations from low-resolution approximations.
No restrictions are imposed on the locations of sensors at which the low-resolution approximations are provided.
- Score: 3.921076451326108
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In numerous contexts, high-resolution solutions to partial differential equations are required to capture faithfully essential dynamics which occur at small spatiotemporal scales, but these solutions can be very difficult and slow to obtain using traditional methods due to limited computational resources. A recent direction to circumvent these computational limitations is to use machine learning techniques for super-resolution, to reconstruct high-resolution numerical solutions from low-resolution simulations which can be obtained more efficiently. The proposed approach, the Super Resolution Operator Network (SROpNet), frames super-resolution as an operator learning problem and draws inspiration from existing architectures to learn continuous representations of solutions to parametric differential equations from low-resolution approximations, which can then be evaluated at any desired location. In addition, no restrictions are imposed on the locations of (the fixed number of) spatiotemporal sensors at which the low-resolution approximations are provided, thereby enabling the consideration of a broader spectrum of problems arising in practice, for which many existing super-resolution approaches are not well-suited.
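To make the operator-learning formulation concrete, the sketch below shows a DeepONet-style branch/trunk decomposition of the kind the abstract alludes to: a branch network encodes the low-resolution sensor values, a trunk network encodes an arbitrary spatiotemporal query point, and their inner product yields the solution value there. This is a minimal illustrative sketch, not the authors' implementation; all dimensions, names, and the random (untrained) weights are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_mlp(sizes, rng):
    # Random (untrained) weights for a fully connected network.
    return [(rng.normal(0.0, 0.5, (m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def mlp(params, x):
    # Forward pass with tanh hidden activations, linear output layer.
    for W, b in params[:-1]:
        x = np.tanh(x @ W + b)
    W, b = params[-1]
    return x @ W + b

# Hypothetical sizes: 16 fixed spatiotemporal sensors, latent width p = 32.
n_sensors, p = 16, 32
branch = init_mlp([n_sensors, 64, p], rng)  # encodes LR sensor values
trunk = init_mlp([2, 64, p], rng)           # encodes a query point (x, t)

def sropnet_forward(u_lr, coords):
    """Evaluate the learned continuous representation at arbitrary (x, t)."""
    b = mlp(branch, u_lr)       # shape (p,)
    t = mlp(trunk, coords)      # shape (n_query, p)
    return t @ b                # shape (n_query,)

u_lr = rng.normal(size=n_sensors)      # one low-resolution snapshot
coords = rng.uniform(size=(100, 2))    # 100 arbitrary (x, t) query points
u_hr = sropnet_forward(u_lr, coords)
print(u_hr.shape)  # (100,)
```

Because the trunk takes raw coordinates, the output can be queried at any resolution, which is the property the abstract emphasizes: the sensors are fixed in number but unconstrained in location, and evaluation points are unrestricted.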
Related papers
- Successive Refinement in Large-Scale Computation: Advancing Model Inference Applications [67.76749044675721]
We introduce solutions for layered-resolution computation.
These solutions allow lower-resolution results to be obtained at an earlier stage than the final result.
arXiv Detail & Related papers (2024-02-11T15:36:33Z) - A Block-Coordinate Approach of Multi-level Optimization with an Application to Physics-Informed Neural Networks [0.0]
We propose a multi-level algorithm for the solution of nonlinear optimization problems and analyze its evaluation complexity.
We apply it to the solution of partial differential equations using physics-informed neural networks (PINNs) and show on a few test problems that the approach results in better solutions and significant computational savings.
arXiv Detail & Related papers (2023-05-23T19:12:02Z) - NeuralStagger: Accelerating Physics-constrained Neural PDE Solver with Spatial-temporal Decomposition [67.46012350241969]
This paper proposes a general acceleration methodology called NeuralStagger.
It decomposes the original learning tasks into several coarser-resolution subtasks.
We demonstrate the successful application of NeuralStagger on 2D and 3D fluid dynamics simulations.
arXiv Detail & Related papers (2023-02-20T19:36:52Z) - Spatio-Temporal Super-Resolution of Dynamical Systems using Physics-Informed Deep-Learning [0.0]
We propose a physics-informed deep learning-based framework to enhance spatial and temporal resolution of PDE solutions.
The framework consists of two trainable modules that independently super-resolve PDE solutions in both space and time.
The proposed framework is well-suited for integration with traditional numerical methods to reduce computational complexity during engineering design.
arXiv Detail & Related papers (2022-12-08T18:30:18Z) - Neural Solvers for Fast and Accurate Numerical Optimal Control [12.80824586913772]
This paper provides techniques to improve the quality of optimized control policies given a fixed computational budget.
We achieve the above via a hypersolvers approach, which hybridizes a differential equation solver and a neural network.
arXiv Detail & Related papers (2022-03-13T10:46:50Z) - Learning Proximal Operators to Discover Multiple Optima [66.98045013486794]
We present an end-to-end method to learn the proximal operator across a family of non-convex problems.
We show that for weakly-convex objectives and under mild conditions, the method converges globally.
arXiv Detail & Related papers (2022-01-28T05:53:28Z) - Quadratic Unconstrained Binary Optimisation via Quantum-Inspired Annealing [58.720142291102135]
We present a classical algorithm to find approximate solutions to instances of quadratic unconstrained binary optimisation.
We benchmark our approach for large scale problem instances with tuneable hardness and planted solutions.
arXiv Detail & Related papers (2021-08-18T09:26:17Z) - Deep Learning for Efficient Reconstruction of High-Resolution Turbulent DNS Data [0.0]
Large Eddy Simulation (LES) presents a computationally efficient alternative to direct numerical simulation (DNS) by solving fluid flows on lower-resolution (LR) grids.
We introduce a novel deep learning framework, SR-DNS Net, which aims to mitigate this inherent trade-off between solution fidelity and computational complexity.
Our model efficiently reconstructs high-fidelity DNS data from LES-like low-resolution solutions while yielding good reconstruction metrics.
arXiv Detail & Related papers (2020-10-21T23:37:58Z) - Combining Deep Learning and Optimization for Security-Constrained Optimal Power Flow [94.24763814458686]
Security-constrained optimal power flow (SCOPF) is fundamental in power systems.
Modeling of automatic primary response (APR) within the SCOPF problem results in complex large-scale mixed-integer programs.
This paper proposes a novel approach that combines deep learning and robust optimization techniques.
arXiv Detail & Related papers (2020-07-14T12:38:21Z) - Optimizing Wireless Systems Using Unsupervised and Reinforced-Unsupervised Deep Learning [96.01176486957226]
Resource allocation and transceiver design in wireless networks are usually carried out by solving optimization problems.
In this article, we introduce unsupervised and reinforced-unsupervised learning frameworks for solving both variable and functional optimization problems.
arXiv Detail & Related papers (2020-01-03T11:01:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.