A Physics-Guided Bi-Fidelity Fourier-Featured Operator Learning
Framework for Predicting Time Evolution of Drag and Lift Coefficients
- URL: http://arxiv.org/abs/2311.03639v1
- Date: Tue, 7 Nov 2023 00:56:54 GMT
- Authors: Amirhossein Mollaali, Izzet Sahin, Iqrar Raza, Christian Moya,
Guillermo Paniagua, Guang Lin
- Abstract summary: This paper proposes a deep operator learning-based framework that requires a limited high-fidelity dataset for training.
We introduce a novel physics-guided, bi-fidelity, Fourier-featured Deep Operator Network (DeepONet) framework that effectively combines low and high-fidelity datasets.
We validate our approach using a well-known 2D benchmark cylinder problem, which aims to predict the time trajectories of lift and drag coefficients.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In the pursuit of accurate experimental and computational data while
minimizing effort, there is a constant need for high-fidelity results. However,
achieving such results often requires significant computational resources. To
address this challenge, this paper proposes a deep operator learning-based
framework that requires a limited high-fidelity dataset for training. We
introduce a novel physics-guided, bi-fidelity, Fourier-featured Deep Operator
Network (DeepONet) framework that effectively combines low and high-fidelity
datasets, leveraging the strengths of each. In our methodology, we begin
by designing a physics-guided Fourier-featured DeepONet, drawing
inspiration from the intrinsic physical behavior of the target solution.
We then train this network primarily to learn the low-fidelity solution,
using an extensive dataset; this ensures a comprehensive grasp of the
foundational solution patterns. Following this foundational learning, the
low-fidelity deep operator network's output is enhanced using a physics-guided
Fourier-featured residual deep operator network. This network refines the
initial low-fidelity output, achieving the high-fidelity solution by employing
a small high-fidelity dataset for training. Notably, in our framework, we
employ the Fourier feature network as the Trunk network for the DeepONets,
given its proficiency in capturing and learning the oscillatory nature of the
target solution with high precision. We validate our approach using a
well-known 2D benchmark cylinder problem, which aims to predict the time
trajectories of lift and drag coefficients. The results highlight that the
physics-guided Fourier-featured deep operator network, serving as a
foundational building block of our framework, possesses superior predictive
capability for the lift and drag coefficients compared to its data-driven
counterparts.
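The composition described above (a Fourier-feature trunk shared by two DeepONets: a low-fidelity operator trained on abundant data, plus a residual operator trained on sparse high-fidelity data) can be sketched as a forward pass. This is a minimal, untrained illustration with hypothetical layer sizes and randomly initialized weights, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def fourier_features(t, B):
    """Map query times t (n,) to Fourier features (n, 2m) via frequencies B (m,)."""
    proj = 2.0 * np.pi * np.outer(t, B)                # (n, m)
    return np.concatenate([np.cos(proj), np.sin(proj)], axis=1)

class DeepONet:
    """Minimal DeepONet: an MLP branch on input-function samples and a
    Fourier-feature trunk on time; the output is the branch-trunk inner product."""
    def __init__(self, n_sensors, m_freq, p, rng):
        self.B  = rng.normal(size=m_freq)                    # trunk frequencies (hypothetical init)
        self.W1 = rng.normal(scale=0.1, size=(n_sensors, 64))
        self.W2 = rng.normal(scale=0.1, size=(64, p))
        self.Wt = rng.normal(scale=0.1, size=(2 * m_freq, p))

    def __call__(self, u, t):
        b  = np.tanh(u @ self.W1) @ self.W2                  # branch coefficients (p,)
        tr = fourier_features(t, self.B) @ self.Wt           # trunk basis values (n, p)
        return tr @ b                                        # coefficient trajectory (n,)

# Bi-fidelity composition: low-fidelity operator plus a learned residual operator
lf_net  = DeepONet(n_sensors=16, m_freq=8, p=10, rng=rng)   # would be trained on many LF runs
res_net = DeepONet(n_sensors=16, m_freq=8, p=10, rng=rng)   # would be trained on few HF runs

u = rng.normal(size=16)            # sensor samples of the input function (e.g. inflow condition)
t = np.linspace(0.0, 1.0, 100)     # query times
c_hf_pred = lf_net(u, t) + res_net(u, t)   # HF prediction = LF prediction + residual correction
```

Here both networks are random and serve only to show the composition C_HF(t) ≈ C_LF(t) + R(t); in the paper, the residual network is trained on the small high-fidelity dataset to correct the low-fidelity operator's output.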
Related papers
- Component-based Sketching for Deep ReLU Nets [55.404661149594375]
We develop a sketching scheme based on deep net components for various tasks.
We transform deep net training into a linear empirical risk minimization problem.
We show that the proposed component-based sketching provides almost optimal rates in approximating saturated functions.
arXiv Detail & Related papers (2024-09-21T15:30:43Z) - Efficient Training of Deep Neural Operator Networks via Randomized Sampling [0.0]
Deep operator network (DeepONet) has demonstrated success in the real-time prediction of complex dynamics across various scientific and engineering applications.
We introduce a random sampling technique to be adopted in the training of DeepONet, aimed at improving the generalization ability of the model while significantly reducing computational time.
Our results indicate that incorporating randomization in the trunk network inputs during training enhances the efficiency and robustness of DeepONet, offering a promising avenue for improving the framework's performance in modeling complex physical systems.
arXiv Detail & Related papers (2024-09-20T07:18:31Z) - From Fourier to Neural ODEs: Flow Matching for Modeling Complex Systems [20.006163951844357]
We propose a simulation-free framework for training neural ordinary differential equations (NODEs).
We employ the Fourier analysis to estimate temporal and potential high-order spatial gradients from noisy observational data.
Our approach outperforms state-of-the-art methods in terms of training time, dynamics prediction, and robustness.
arXiv Detail & Related papers (2024-05-19T13:15:23Z) - Neural Network Pruning by Gradient Descent [7.427858344638741]
We introduce a novel and straightforward neural network pruning framework that incorporates the Gumbel-Softmax technique.
We demonstrate its exceptional compression capability, maintaining high accuracy on the MNIST dataset with only 0.15% of the original network parameters.
We believe our method opens a promising new avenue for deep learning pruning and the creation of interpretable machine learning systems.
arXiv Detail & Related papers (2023-11-21T11:12:03Z) - Analysis and Optimization of Wireless Federated Learning with Data
Heterogeneity [72.85248553787538]
This paper focuses on performance analysis and optimization for wireless FL, considering data heterogeneity, combined with wireless resource allocation.
We formulate the loss function minimization problem under constraints on long-term energy consumption and latency, and jointly optimize client scheduling, resource allocation, and the number of local training epochs (CRE).
Experiments on real-world datasets demonstrate that the proposed algorithm outperforms other benchmarks in terms of the learning accuracy and energy consumption.
arXiv Detail & Related papers (2023-08-04T04:18:01Z) - Fourier-DeepONet: Fourier-enhanced deep operator networks for full
waveform inversion with improved accuracy, generalizability, and robustness [4.186792090302649]
Full waveform inversion (FWI) infers subsurface structure information from waveform data by solving a non-convex optimization problem.
Here, we develop a neural network (Fourier-DeepONet) for FWI that generalizes across sources, including source frequencies and locations.
Our experiments demonstrate that Fourier-DeepONet obtains more accurate predictions of subsurface structures in a wide range of source parameters.
arXiv Detail & Related papers (2023-05-26T22:17:28Z) - Optimal transfer protocol by incremental layer defrosting [66.76153955485584]
Transfer learning is a powerful tool enabling model training with limited amounts of data.
The simplest transfer learning protocol is based on "freezing" the feature-extractor layers of a network pre-trained on a data-rich source task.
We show that this protocol is often sub-optimal and the largest performance gain may be achieved when smaller portions of the pre-trained network are kept frozen.
arXiv Detail & Related papers (2023-03-02T17:32:11Z) - Functional Regularization for Reinforcement Learning via Learned Fourier
Features [98.90474131452588]
We propose a simple architecture for deep reinforcement learning by embedding inputs into a learned Fourier basis.
We show that it improves the sample efficiency of both state-based and image-based RL.
arXiv Detail & Related papers (2021-12-06T18:59:52Z) - Factorized Fourier Neural Operators [77.47313102926017]
The Factorized Fourier Neural Operator (F-FNO) is a learning-based method for simulating partial differential equations.
We show that our model maintains an error rate of 2% while still running an order of magnitude faster than a numerical solver.
arXiv Detail & Related papers (2021-11-27T03:34:13Z) - FG-Net: Fast Large-Scale LiDAR Point Clouds Understanding Network
Leveraging Correlated Feature Mining and Geometric-Aware Modelling [15.059508985699575]
FG-Net is a general deep learning framework for large-scale point cloud understanding without voxelization.
We propose a deep convolutional neural network leveraging correlated feature mining and deformable convolution based geometric-aware modelling.
Our approaches outperform state-of-the-art approaches in terms of accuracy and efficiency.
arXiv Detail & Related papers (2020-12-17T08:20:09Z) - Large-Scale Gradient-Free Deep Learning with Recursive Local
Representation Alignment [84.57874289554839]
Training deep neural networks on large-scale datasets requires significant hardware resources.
Backpropagation, the workhorse for training these networks, is an inherently sequential process that is difficult to parallelize.
We propose a neuro-biologically-plausible alternative to backprop that can be used to train deep networks.
arXiv Detail & Related papers (2020-02-10T16:20:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.