Lettuce: PyTorch-based Lattice Boltzmann Framework
- URL: http://arxiv.org/abs/2106.12929v1
- Date: Thu, 24 Jun 2021 11:44:21 GMT
- Title: Lettuce: PyTorch-based Lattice Boltzmann Framework
- Authors: Mario Christopher Bedrunka, Dominik Wilde, Martin Kliemank, Dirk Reith, Holger Foysi, Andreas Krämer
- Abstract summary: The lattice Boltzmann method (LBM) is an efficient simulation technique for computational fluid mechanics and beyond.
Here, we introduce Lettuce, a PyTorch-based LBM code with a threefold aim.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The lattice Boltzmann method (LBM) is an efficient simulation technique for
computational fluid mechanics and beyond. It is based on a simple
stream-and-collide algorithm on Cartesian grids, which is easily compatible
with modern machine learning architectures. While it is becoming increasingly
clear that deep learning can provide a decisive stimulus for classical
simulation techniques, recent studies have not addressed possible connections
between machine learning and LBM. Here, we introduce Lettuce, a PyTorch-based
LBM code with a threefold aim. Lettuce enables GPU-accelerated calculations
with minimal source code, facilitates rapid prototyping of LBM models, and
enables integrating LBM simulations with PyTorch's deep learning and automatic
differentiation facility. As a proof of concept for combining machine learning
with the LBM, a neural collision model is developed, trained on a doubly
periodic shear layer, and then transferred to a different flow, decaying
turbulence. We also exemplify the added benefit of PyTorch's automatic
differentiation framework in flow control and optimization. To this end, the
spectrum of forced isotropic turbulence is maintained without further
constraining the velocity field. The source code is freely available from
https://github.com/lettucecfd/lettuce.
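
The stream-and-collide update the abstract refers to maps directly onto tensor operations, which is what makes the LBM compatible with PyTorch and its automatic differentiation. The sketch below is a minimal plain-PyTorch D2Q9/BGK illustration of that idea under stated assumptions; all function and variable names (equilibrium, stream, collide, step, tau) are chosen here for the example and are not Lettuce's actual API, for which see the repository linked above. The final lines show autograd propagating a gradient through the whole simulation with respect to the relaxation time, the mechanism underlying the flow-control and neural-collision experiments described in the abstract.

```python
# Minimal, self-contained sketch of an LBM stream-and-collide loop in plain PyTorch.
# Illustrative only; NOT Lettuce's API (see https://github.com/lettucecfd/lettuce).
import math
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

# D2Q9 lattice: nine discrete velocities e_i and their weights w_i.
E = torch.tensor([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
                  [1, 1], [-1, 1], [-1, -1], [1, -1]],
                 dtype=torch.float32, device=device)
W = torch.tensor([4/9] + [1/9] * 4 + [1/36] * 4, device=device)

def equilibrium(rho, u):
    """Second-order equilibrium distribution f_i^eq(rho, u)."""
    eu = torch.einsum("id,dxy->ixy", E, u)   # e_i . u
    uu = (u * u).sum(dim=0)                  # |u|^2
    return rho * W[:, None, None] * (1 + 3 * eu + 4.5 * eu**2 - 1.5 * uu)

def stream(f):
    """Shift each population along its lattice velocity (periodic boundaries)."""
    return torch.stack([
        torch.roll(f[i], shifts=(int(E[i, 0]), int(E[i, 1])), dims=(0, 1))
        for i in range(9)
    ])

def collide(f, tau):
    """BGK collision: relax toward the local equilibrium with relaxation time tau."""
    rho = f.sum(dim=0)
    u = torch.einsum("id,ixy->dxy", E, f) / rho
    return f - (f - equilibrium(rho, u)) / tau

def step(f, tau):
    return stream(collide(f, tau))

# Initialize a shear-layer-like velocity field on a small doubly periodic grid.
n = 64
x = torch.linspace(0.0, 1.0, n, device=device)
X, Y = torch.meshgrid(x, x, indexing="ij")
u0 = torch.stack([0.05 * torch.tanh(20 * (Y - 0.5)),
                  0.01 * torch.sin(2 * math.pi * X)])
rho0 = torch.ones(n, n, device=device)

# Every operation above is a differentiable torch op, so autograd can propagate
# gradients through the entire simulation, here with respect to tau.
tau = torch.tensor(0.8, device=device, requires_grad=True)
f = equilibrium(rho0, u0)
for _ in range(50):
    f = step(f, tau)
loss = (torch.einsum("id,ixy->dxy", E, f) ** 2).sum()  # simple scalar objective: squared momentum
loss.backward()
print(tau.grad)  # d(loss)/d(tau), backpropagated through 50 LBM steps
```

Because the update is expressed entirely in differentiable torch operations, the same pattern extends to learned collision operators: a neural network standing in for collide() can be trained end to end by backpropagating a flow-matching loss through the simulation, which is the approach the abstract describes for the neural collision model.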
Related papers
- Gaussian Splatting to Real World Flight Navigation Transfer with Liquid Networks [93.38375271826202]
We present a method to improve generalization and robustness to distribution shifts in sim-to-real visual quadrotor navigation tasks.
We first build a simulator by integrating Gaussian splatting with quadrotor flight dynamics, and then, train robust navigation policies using Liquid neural networks.
In this way, we obtain a full-stack imitation learning protocol that combines advances in 3D Gaussian splatting radiance field rendering, programming of expert demonstration training data, and the task understanding capabilities of Liquid networks.
arXiv Detail & Related papers (2024-06-21T13:48:37Z) - Enabling High-Sparsity Foundational Llama Models with Efficient Pretraining and Deployment [56.44025052765861]
Large language models (LLMs) have revolutionized Natural Language Processing (NLP), but their size creates computational bottlenecks.
We introduce a novel approach to create accurate, sparse foundational versions of performant LLMs.
We show a total speedup on CPUs for sparse-quantized LLaMA models of up to 8.6x.
arXiv Detail & Related papers (2024-05-06T16:03:32Z) - JAX-SPH: A Differentiable Smoothed Particle Hydrodynamics Framework [8.977530522693444]
Particle-based fluid simulations have emerged as a powerful tool for solving the Navier-Stokes equations.
The recent addition of machine learning methods to the toolbox for solving such problems is pushing the boundary of the quality-versus-speed tradeoff.
We lead the way to Lagrangian fluid simulators compatible with deep learning frameworks, and propose JAX-SPH.
arXiv Detail & Related papers (2024-03-07T18:53:53Z) - LeTO: Learning Constrained Visuomotor Policy with Differentiable Trajectory Optimization [1.1602089225841634]
This paper introduces LeTO, a method for learning constrained visuomotor policy with differentiable trajectory optimization.
We quantitatively evaluate LeTO in simulation and on a real robot.
arXiv Detail & Related papers (2024-01-30T23:18:35Z) - Differentiable Turbulence II [0.0]
We develop a framework for integrating deep learning models into a generic finite element numerical scheme for solving the Navier-Stokes equations.
We show that the learned closure can achieve accuracy comparable to traditional large-eddy simulation on a finer grid, which amounts to an equivalent speedup of 10x.
arXiv Detail & Related papers (2023-07-25T14:27:49Z) - In Situ Framework for Coupling Simulation and Machine Learning with
Application to CFD [51.04126395480625]
Recent years have seen many successful applications of machine learning (ML) to facilitate fluid dynamic computations.
As simulations grow, generating new training datasets for traditional offline learning creates I/O and storage bottlenecks.
This work offers a solution by simplifying this coupling and enabling in situ training and inference on heterogeneous clusters.
arXiv Detail & Related papers (2023-06-22T14:07:54Z) - Learned multiphysics inversion with differentiable programming and
machine learning [1.8893605328938345]
We present the Seismic Laboratory for Imaging and Modeling/Monitoring (SLIM) open-source software framework for computational geophysics.
By integrating multiple layers of abstraction, our software is designed to be both readable and scalable.
arXiv Detail & Related papers (2023-04-12T03:38:22Z) - Deep learning applied to computational mechanics: A comprehensive
review, state of the art, and the classics [77.34726150561087]
Recent developments in artificial neural networks, particularly deep learning (DL), are reviewed in detail.
Both hybrid and pure machine learning (ML) methods are discussed.
History and limitations of AI are recounted and discussed, with particular attention to pointing out misstatements or misconceptions of the classics.
arXiv Detail & Related papers (2022-12-18T02:03:00Z) - A modular software framework for the design and implementation of
ptychography algorithms [55.41644538483948]
We present SciCom, a new ptychography software framework aimed at simulating ptychography datasets and testing state-of-the-art reconstruction algorithms.
Despite its simplicity, the software leverages accelerated processing through the PyTorch interface.
Results are shown on both synthetic and real datasets.
arXiv Detail & Related papers (2022-05-06T16:32:37Z) - Predictive Coding Approximates Backprop along Arbitrary Computation
Graphs [68.8204255655161]
We develop a strategy to translate core machine learning architectures into their predictive coding equivalents.
Our models perform equivalently to backprop on challenging machine learning benchmarks.
Our method raises the potential that standard machine learning algorithms could in principle be directly implemented in neural circuitry.
arXiv Detail & Related papers (2020-06-07T15:35:47Z) - Intelligent and Reconfigurable Architecture for KL Divergence Based
Online Machine Learning Algorithm [0.0]
Online machine learning (OML) algorithms do not need any training phase and can be deployed directly in an unknown environment.
arXiv Detail & Related papers (2020-02-18T16:39:57Z)
This list is automatically generated from the titles and abstracts of the papers on this site; the accuracy of the generated summaries is not guaranteed.