Transported Memory Networks accelerating Computational Fluid Dynamics
- URL: http://arxiv.org/abs/2502.18591v1
- Date: Tue, 25 Feb 2025 19:12:48 GMT
- Title: Transported Memory Networks accelerating Computational Fluid Dynamics
- Authors: Matthias Schulz, Gwendal Jouan, Daniel Berger, Stefan Gavranovic, Dirk Hartmann
- Abstract summary: Transported Memory Networks is a novel architecture that draws inspiration from both traditional turbulence models and recurrent neural networks. Our results show that it is point-wise and statistically comparable to, or improves upon, previous methods in terms of both accuracy and computational efficiency.
- Score: 0.0699049312989311
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In recent years, augmentation of differentiable PDE solvers with neural networks has shown promising results, particularly in fluid simulations. However, most approaches rely on convolutional neural networks and custom solvers operating on Cartesian grids with efficient access to cell data. This particular choice poses challenges for industrial-grade solvers that operate on unstructured meshes, where access is restricted to neighboring cells only. In this work, we address this limitation using a novel architecture, named Transported Memory Networks. The architecture draws inspiration from both traditional turbulence models and recurrent neural networks, and it is fully compatible with generic discretizations. Our results show that it is point-wise and statistically comparable to, or improves upon, previous methods in terms of both accuracy and computational efficiency.
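To make the idea concrete, here is a minimal sketch of a transported-memory step on an unstructured mesh, assuming per-cell memory variables that are advected with the flow (like a passive scalar) and then updated by a small network that only reads neighboring cells. All names (MemoryUpdate, advect_memory, the upwind weights) and the update rule are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: per-cell memory that is transported with the flow
# and updated by a local network. Names and the update rule are assumptions,
# not the paper's implementation.
import torch
import torch.nn as nn

class MemoryUpdate(nn.Module):
    """Local network: sees a cell's own state, its memory, and neighbor memory."""
    def __init__(self, n_state: int, n_mem: int, hidden: int = 32):
        super().__init__()
        self.n_mem = n_mem
        self.net = nn.Sequential(
            nn.Linear(n_state + 2 * n_mem, hidden), nn.Tanh(),
            nn.Linear(hidden, n_mem + n_state),  # new memory + state correction
        )

    def forward(self, state, memory, neighbors):
        # neighbors: (n_cells, k) indices of adjacent cells (neighbor-only access)
        mem_nbr = memory[neighbors].mean(dim=1)
        out = self.net(torch.cat([state, memory, mem_nbr], dim=-1))
        return out[:, :self.n_mem], out[:, self.n_mem:]

def advect_memory(memory, neighbors, upwind):
    """Transport memory like a passive scalar: blend each cell's memory with
    its upwind neighbors' memory (weights would come from the flow solver)."""
    keep = 1.0 - upwind.sum(dim=1, keepdim=True)
    return keep * memory + (upwind.unsqueeze(-1) * memory[neighbors]).sum(dim=1)

# One coupled step: advect the memory field, then apply the local correction.
n_cells, k, n_state, n_mem = 100, 3, 4, 8
state = torch.randn(n_cells, n_state)
memory = torch.zeros(n_cells, n_mem)
neighbors = torch.randint(0, n_cells, (n_cells, k))
upwind = torch.rand(n_cells, k) * 0.1          # placeholder transport weights

model = MemoryUpdate(n_state, n_mem)
memory, correction = model(state, advect_memory(memory, neighbors, upwind), neighbors)
state = state + correction                     # corrected solver state
```

The sketch respects the constraint highlighted in the abstract: each cell reads only its own data and that of its immediate neighbors, which is the access pattern available in industrial unstructured-mesh solvers.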
Related papers
- EvSegSNN: Neuromorphic Semantic Segmentation for Event Data [0.6138671548064356]
We introduce an end-to-end, biologically inspired semantic segmentation approach that combines Spiking Neural Networks with event cameras.
EvSegSNN is a biologically plausible encoder-decoder U-shaped architecture relying on Parametric Leaky Integrate and Fire neurons.
Experiments conducted on DDD17 demonstrate that EvSegSNN outperforms the closest state-of-the-art model in terms of mean intersection over union (mIoU).
arXiv Detail & Related papers (2024-06-20T10:36:24Z)
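As a rough illustration of the Parametric Leaky Integrate and Fire (PLIF) neurons that the EvSegSNN entry above relies on, here is a hedged sketch with a learnable membrane leak; the class name, reset rule, and the absence of a surrogate gradient are simplifications, not the paper's implementation.

```python
# Minimal sketch of a Parametric Leaky Integrate-and-Fire (PLIF) neuron:
# the membrane leak is a learnable parameter (sigmoid-squashed for stability).
# A real SNN would use a surrogate gradient for the threshold function.
import torch
import torch.nn as nn

class PLIF(nn.Module):
    def __init__(self, threshold: float = 1.0):
        super().__init__()
        self.w = nn.Parameter(torch.tensor(0.0))  # leak parameter, learned
        self.threshold = threshold

    def forward(self, inputs):  # inputs: (time, batch, features)
        tau = torch.sigmoid(self.w)               # leak in (0, 1)
        v = torch.zeros_like(inputs[0])
        spikes = []
        for x in inputs:                          # iterate over time steps
            v = tau * v + x                       # leaky integration
            s = (v >= self.threshold).float()     # fire when threshold crossed
            v = v * (1.0 - s)                     # hard reset after a spike
            spikes.append(s)
        return torch.stack(spikes)

out = PLIF()(torch.randn(10, 2, 16))              # 10 time steps of spikes
```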
- Regularized PolyKervNets: Optimizing Expressiveness and Efficiency for Private Inference in Deep Neural Networks [0.0]
We focus on PolyKervNets, a technique known for offering improved dynamic approximations in smaller networks.
Our primary objective is to empirically explore optimization-based training recipes to enhance the performance of PolyKervNets in larger networks.
arXiv Detail & Related papers (2023-12-23T11:37:18Z)
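PolyKervNets target private inference, where encrypted evaluation supports only additions and multiplications; the sketch below shows the kind of learnable degree-2 polynomial activation such networks substitute for ReLU. The coefficients, their initialization, and the class name PolyAct are assumptions for exposition.

```python
# Hedged sketch: a degree-2 polynomial activation in the spirit of
# PolyKervNets, so the network is expressible with additions and
# multiplications only (a requirement of homomorphic-encryption-based
# private inference). Coefficients are learnable; values are illustrative.
import torch
import torch.nn as nn

class PolyAct(nn.Module):
    def __init__(self):
        super().__init__()
        # a*x^2 + b*x + c; initial values loosely approximate ReLU on [-1, 1]
        self.a = nn.Parameter(torch.tensor(0.25))
        self.b = nn.Parameter(torch.tensor(0.50))
        self.c = nn.Parameter(torch.tensor(0.25))

    def forward(self, x):
        return self.a * x * x + self.b * x + self.c

net = nn.Sequential(nn.Linear(8, 16), PolyAct(), nn.Linear(16, 1))
y = net(torch.randn(4, 8))  # forward pass uses only +, * (HE-friendly)
```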
- Neural Network with Local Converging Input (NNLCI) for Supersonic Flow Problems with Unstructured Grids [0.9152133607343995]
We develop a neural network with local converging input (NNLCI) for high-fidelity prediction using unstructured data.
As a validation case, the NNLCI method is applied to study inviscid supersonic flows in channels with bumps.
arXiv Detail & Related papers (2023-10-23T19:03:37Z)
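A hedged sketch of the local-converging-input idea from the NNLCI entry above: local solution patches from two successively refined coarse solutions are concatenated and fed to an MLP that predicts the high-fidelity value at the query point. Patch size and layer widths are illustrative assumptions.

```python
# Hedged NNLCI-style sketch: two converging coarse solutions provide local
# patches around each query point; an MLP maps them to a high-fidelity value.
import torch
import torch.nn as nn

patch = 5                          # number of local values per coarse solution
model = nn.Sequential(
    nn.Linear(2 * patch, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),              # high-fidelity prediction at the point
)

coarse = torch.randn(32, patch)    # local patch from the coarse solution
finer = torch.randn(32, patch)     # same patch from a refined solution
y_hat = model(torch.cat([coarse, finer], dim=-1))
```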
- Heterogenous Memory Augmented Neural Networks [84.29338268789684]
We introduce a novel heterogeneous memory augmentation approach for neural networks.
By introducing learnable memory tokens with an attention mechanism, we can effectively boost performance without significant computational overhead.
We evaluate our approach on various image- and graph-based tasks under both in-distribution (ID) and out-of-distribution (OOD) conditions.
arXiv Detail & Related papers (2023-10-17T01:05:28Z)
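A minimal sketch of the memory-token mechanism described in the entry above: a small bank of learnable tokens that backbone features attend to via cross-attention. Token count, dimensions, and the residual read are assumptions; the paper's heterogeneous memory design is richer than this.

```python
# Hedged sketch: augmenting a feature extractor with learnable memory tokens
# that inputs attend to. Sizes are illustrative assumptions.
import torch
import torch.nn as nn

class MemoryAugmented(nn.Module):
    def __init__(self, dim: int = 64, n_tokens: int = 16):
        super().__init__()
        self.memory = nn.Parameter(torch.randn(n_tokens, dim) * 0.02)
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)

    def forward(self, x):  # x: (batch, seq, dim) features from a backbone
        mem = self.memory.expand(x.size(0), -1, -1)   # shared across the batch
        out, _ = self.attn(query=x, key=mem, value=mem)
        return x + out                                # residual memory read

feats = torch.randn(8, 10, 64)
out = MemoryAugmented()(feats)
```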
- Efficient and Flexible Neural Network Training through Layer-wise Feedback Propagation [49.44309457870649]
We present Layer-wise Feedback Propagation (LFP), a novel training principle for neural-network-like predictors.
LFP decomposes a reward to individual neurons based on their respective contributions to solving a given task.
Our method then implements a greedy approach, reinforcing helpful parts of the network and weakening harmful ones.
arXiv Detail & Related papers (2023-08-23T10:48:28Z)
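Below is a deliberately tiny toy of the reward-decomposition idea behind LFP: a scalar reward is split over the weights of one linear layer in proportion to their contribution to the output, and each weight is then reinforced or weakened by its share. This single-layer sketch only conveys the principle and is not the authors' algorithm.

```python
# Toy illustration of reward decomposition (not LFP itself): split a scalar
# reward over weights by contribution share, then nudge weight magnitudes.
import torch

torch.manual_seed(0)
x = torch.rand(4)                      # activations entering the layer
w = torch.rand(3, 4)                   # weights of a linear layer
y = w @ x                              # neuron outputs
reward = 1.0                           # feedback for the network's decision

contrib = w * x                        # per-weight contributions to each y_i
share = contrib / contrib.sum()       # normalized contribution shares
w = w + 0.1 * reward * share * torch.sign(w)  # reinforce contributing weights
```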
- Spiking neural network for nonlinear regression [68.8204255655161]
Spiking neural networks carry the potential for a massive reduction in memory and energy consumption.
They introduce temporal and neuronal sparsity, which can be exploited by next-generation neuromorphic hardware.
A framework for regression using spiking neural networks is proposed.
arXiv Detail & Related papers (2022-10-06T13:04:45Z)
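One common way to obtain continuous outputs from a spiking network, sketched below under stated assumptions, is to run a spiking hidden layer over time and read out a non-spiking leaky integrator (a "membrane potential" readout); the paper's regression framework may differ in its encoding and readout.

```python
# Hedged sketch of spiking regression: spiking hidden layer, non-spiking
# integrator readout. Layer sizes, decay, and horizon are illustrative.
import torch
import torch.nn as nn

lin1, lin2 = nn.Linear(4, 32), nn.Linear(32, 1)
beta, threshold, T = 0.9, 1.0, 20

x = torch.randn(8, 4)                        # static input, repeated over time
v_hidden = torch.zeros(8, 32)
v_out = torch.zeros(8, 1)
for _ in range(T):
    v_hidden = beta * v_hidden + lin1(x)
    spikes = (v_hidden >= threshold).float()
    v_hidden = v_hidden - spikes * threshold  # soft reset
    v_out = beta * v_out + lin2(spikes)       # integrate, never spike

y_hat = v_out                                 # continuous regression output
```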
- Go Beyond Multiple Instance Neural Networks: Deep-learning Models based on Local Pattern Aggregation [0.0]
Convolutional neural networks (CNNs) have brought breakthroughs in processing clinical electrocardiograms (ECGs) and speaker-independent speech.
In this paper, we propose local pattern aggregation-based deep-learning models to effectively deal with both problems.
The novel network structure, called LPANet, has cropping and aggregation operations embedded into it.
arXiv Detail & Related papers (2022-05-28T13:18:18Z)
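A hedged sketch of the cropping-and-aggregation pattern LPANet embeds: a long 1-D signal (e.g., an ECG) is cropped into overlapping local segments, each segment is encoded by a small CNN, and the segment embeddings are max-pooled before classification. Segment length, stride, and layer sizes are illustrative assumptions.

```python
# Hedged sketch of local pattern aggregation: crop, encode, aggregate.
import torch
import torch.nn as nn

encoder = nn.Sequential(
    nn.Conv1d(1, 8, kernel_size=7, padding=3), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),         # one embedding per segment
)
classifier = nn.Linear(8, 2)

signal = torch.randn(4, 1, 1000)                   # batch of raw signals
segments = signal.unfold(dimension=2, size=200, step=100)  # (B, 1, n_seg, 200)
b, _, n_seg, seg_len = segments.shape
seg_flat = segments.permute(0, 2, 1, 3).reshape(b * n_seg, 1, seg_len)
emb = encoder(seg_flat).reshape(b, n_seg, -1)      # (B, n_seg, 8)
logits = classifier(emb.max(dim=1).values)         # aggregate local patterns
```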
- Combining Differentiable PDE Solvers and Graph Neural Networks for Fluid Flow Prediction [79.81193813215872]
We develop a hybrid (graph) neural network that combines a traditional graph convolutional network with an embedded differentiable fluid dynamics simulator inside the network itself.
We show that we can both generalize well to new situations and benefit from the substantial speedup of neural network CFD predictions.
arXiv Detail & Related papers (2020-07-08T21:23:19Z)
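The hybrid scheme above can be pictured as a solver step followed by a learned graph correction; the sketch below uses a stand-in solver and plain message passing rather than the paper's graph convolutional network, so every component is an assumption for exposition.

```python
# Hedged sketch: cheap differentiable solver step + learned graph correction.
import torch
import torch.nn as nn

class GraphCorrection(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.msg = nn.Linear(2 * dim, dim)
        self.upd = nn.Linear(2 * dim, dim)

    def forward(self, h, edges):           # edges: (2, n_edges) src/dst indices
        src, dst = edges
        m = torch.relu(self.msg(torch.cat([h[src], h[dst]], dim=-1)))
        agg = torch.zeros_like(h).index_add_(0, dst, m)  # sum incoming msgs
        return self.upd(torch.cat([h, agg], dim=-1))

def coarse_solver_step(u):
    return 0.99 * u                        # stand-in differentiable solver

n_nodes, dim = 50, 3
u = torch.randn(n_nodes, dim)
edges = torch.randint(0, n_nodes, (2, 200))
u_coarse = coarse_solver_step(u)
u_next = u_coarse + GraphCorrection(dim)(u_coarse, edges)  # learned correction
```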
- Multipole Graph Neural Operator for Parametric Partial Differential Equations [57.90284928158383]
One of the main challenges in using deep learning-based methods for simulating physical systems is formulating physics-based data in a structure suitable for neural networks.
We propose a novel multi-level graph neural network framework that captures interactions at all ranges with only linear complexity.
Experiments confirm our multi-graph network learns discretization-invariant solution operators to PDEs and can be evaluated in linear time.
arXiv Detail & Related papers (2020-06-16T21:56:22Z)
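A schematic two-level version of the multipole idea: short-range interactions are handled on the fine graph, long-range interactions on a coarsened graph with far fewer nodes, and the two are combined, giving all-range coupling at roughly linear cost. The real operator uses several levels and learned kernels; the clustering and mixing layers below are placeholders.

```python
# Schematic two-level sketch of multipole-style message passing.
import torch
import torch.nn as nn

n_fine, n_coarse, dim = 64, 8, 16
fine_mix = nn.Linear(dim, dim)
coarse_mix = nn.Linear(dim, dim)

h = torch.randn(n_fine, dim)                    # fine-node features
assign = torch.randint(0, n_coarse, (n_fine,))  # fine -> coarse clustering

# restrict: average fine nodes into their coarse cluster
counts = torch.zeros(n_coarse).index_add_(0, assign, torch.ones(n_fine)).clamp(min=1)
h_coarse = torch.zeros(n_coarse, dim).index_add_(0, assign, h) / counts[:, None]

h_coarse = torch.relu(coarse_mix(h_coarse))     # long-range pass, few nodes
h = torch.relu(fine_mix(h)) + h_coarse[assign]  # prolongate and combine
```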
- Model Fusion via Optimal Transport [64.13185244219353]
We present a layer-wise model fusion algorithm for neural networks.
We show that this can successfully yield "one-shot" knowledge transfer between neural networks trained on heterogeneous non-i.i.d. data.
arXiv Detail & Related papers (2019-10-12T22:07:15Z)
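To illustrate alignment-based fusion, the sketch below matches the hidden neurons of one model to another with a hard assignment on weight similarity, a special case of the paper's optimal-transport alignment, and then averages the aligned weights. The shapes and similarity measure are assumptions.

```python
# Hedged sketch: align neurons of model B to model A, then average.
# Hard assignment stands in for the paper's optimal-transport coupling.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
w_a = rng.normal(size=(16, 8))       # hidden-layer weights of model A
w_b = rng.normal(size=(16, 8))       # same architecture, trained elsewhere

cost = -w_a @ w_b.T                  # negative neuron-to-neuron similarity
row, col = linear_sum_assignment(cost)
w_fused = 0.5 * (w_a + w_b[col])     # permute B's neurons to match A, average
```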