XLB: A differentiable massively parallel lattice Boltzmann library in Python
- URL: http://arxiv.org/abs/2311.16080v3
- Date: Tue, 2 Apr 2024 15:56:38 GMT
- Title: XLB: A differentiable massively parallel lattice Boltzmann library in Python
- Authors: Mohammadmehdi Ataei, Hesam Salehipour
- Abstract summary: We introduce XLB, a Python-based differentiable LBM library built on the JAX platform.
XLB's differentiability and data structure are compatible with the extensive JAX-based machine learning ecosystem.
XLB has been successfully scaled to handle simulations with billions of cells, achieving giga-scale lattice updates per second.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The lattice Boltzmann method (LBM) has emerged as a prominent technique for solving fluid dynamics problems due to its algorithmic potential for computational scalability. We introduce XLB, a Python-based differentiable LBM library built on the JAX platform. The architecture of XLB is predicated upon ensuring accessibility, extensibility, and computational performance, enabling it to scale effectively across CPU, TPU, multi-GPU, and distributed multi-GPU or TPU systems. The library can be readily augmented with novel boundary conditions, collision models, or multi-physics simulation capabilities. XLB's differentiability and data structure are compatible with the extensive JAX-based machine learning ecosystem, enabling it to address physics-based machine learning, optimization, and inverse problems. XLB has been successfully scaled to handle simulations with billions of cells, achieving giga-scale lattice updates per second. XLB is released under the permissive Apache-2.0 license and is available on GitHub at https://github.com/Autodesk/XLB.
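The key claim above is that the entire LBM update can be expressed in JAX and differentiated end-to-end. As a minimal, illustrative sketch (this is not the XLB API; the functions `lbm_step`, `equilibrium`, and `loss` and all parameter choices are hypothetical), the snippet below writes a single-relaxation-time (BGK) D2Q9 collide-and-stream update in plain JAX on a periodic domain and uses `jax.grad` to differentiate a downstream objective with respect to the relaxation parameter, which is the basic pattern behind the optimization and inverse problems mentioned in the abstract.

```python
import jax
import jax.numpy as jnp

# D2Q9 lattice velocities and weights
C = jnp.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
               [1, 1], [-1, 1], [-1, -1], [1, -1]])
W = jnp.array([4/9] + [1/9] * 4 + [1/36] * 4)

def equilibrium(rho, u):
    # Second-order equilibrium distribution
    cu = jnp.einsum("qd,xyd->xyq", C, u)
    usq = jnp.sum(u ** 2, axis=-1, keepdims=True)
    return rho[..., None] * W * (1 + 3 * cu + 4.5 * cu ** 2 - 1.5 * usq)

def lbm_step(f, omega):
    # Macroscopic moments
    rho = jnp.sum(f, axis=-1)
    u = jnp.einsum("xyq,qd->xyd", f, C) / rho[..., None]
    # BGK collision
    f = f + omega * (equilibrium(rho, u) - f)
    # Streaming: shift each population along its lattice velocity (periodic domain)
    return jnp.stack(
        [jnp.roll(f[..., q], (int(C[q, 0]), int(C[q, 1])), axis=(0, 1))
         for q in range(9)],
        axis=-1,
    )

def loss(omega, f, n_steps=10):
    # Example objective: mean kinetic energy after n_steps, as a function of omega
    for _ in range(n_steps):
        f = lbm_step(f, omega)
    rho = jnp.sum(f, axis=-1)
    u = jnp.einsum("xyq,qd->xyd", f, C) / rho[..., None]
    return jnp.mean(jnp.sum(u ** 2, axis=-1))

# Initial state: uniform density with a small sinusoidal shear on a 64x64 grid
x = jnp.linspace(0, 2 * jnp.pi, 64, endpoint=False)
u0 = jnp.zeros((64, 64, 2)).at[..., 0].set(0.05 * jnp.sin(x)[None, :])
f0 = equilibrium(jnp.ones((64, 64)), u0)

# Differentiate the objective with respect to the relaxation parameter
d_loss_d_omega = jax.grad(loss)(1.2, f0)
```

Because the whole update is composed of pure JAX array operations, the same pattern extends to gradients with respect to boundary conditions, forcing terms, or neural-network parameters embedded in the collision model; XLB's actual interfaces may differ from this toy sketch.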
Related papers
- Enabling High-Sparsity Foundational Llama Models with Efficient Pretraining and Deployment [56.44025052765861]
Large language models (LLMs) have revolutionized Natural Language Processing (NLP), but their size creates computational bottlenecks.
We introduce a novel approach to create accurate, sparse foundational versions of performant LLMs.
We show a total speedup on CPUs for sparse-quantized LLaMA models of up to 8.6x.
arXiv Detail & Related papers (2024-05-06T16:03:32Z)
- Distributed Inference and Fine-tuning of Large Language Models Over The Internet [91.00270820533272]
Large language models (LLMs) are useful in many NLP tasks and become more capable with size.
These models require high-end hardware, making them inaccessible to most researchers.
We develop fault-tolerant inference algorithms and load-balancing protocols that automatically assign devices to maximize the total system throughput.
arXiv Detail & Related papers (2023-12-13T18:52:49Z)
- JaxMARL: Multi-Agent RL Environments and Algorithms in JAX [105.343918678781]
We present JaxMARL, the first open-source, Python-based library that combines GPU-enabled efficiency with support for a large number of commonly used MARL environments.
Our experiments show that, in terms of wall clock time, our JAX-based training pipeline is around 14 times faster than existing approaches.
We also introduce and benchmark SMAX, a JAX-based approximate reimplementation of the popular StarCraft Multi-Agent Challenge.
arXiv Detail & Related papers (2023-11-16T18:58:43Z)
- sQUlearn -- A Python Library for Quantum Machine Learning [0.0]
sQUlearn introduces a user-friendly, NISQ-ready Python library for quantum machine learning (QML).
The library's dual-layer architecture serves both QML researchers and practitioners.
arXiv Detail & Related papers (2023-11-15T14:22:53Z)
- In Situ Framework for Coupling Simulation and Machine Learning with Application to CFD [51.04126395480625]
Recent years have seen many successful applications of machine learning (ML) to facilitate fluid dynamic computations.
As simulations grow, generating new training datasets for traditional offline learning creates I/O and storage bottlenecks.
This work offers a solution by simplifying this coupling and enabling in situ training and inference on heterogeneous clusters.
arXiv Detail & Related papers (2023-06-22T14:07:54Z)
- SequeL: A Continual Learning Library in PyTorch and JAX [50.33956216274694]
SequeL is a library for Continual Learning that supports both PyTorch and JAX frameworks.
It provides a unified interface for a wide range of Continual Learning algorithms, including regularization-based approaches, replay-based approaches, and hybrid approaches.
We release SequeL as an open-source library, enabling researchers and developers to easily experiment and extend the library for their own purposes.
arXiv Detail & Related papers (2023-04-21T10:00:22Z)
- BayesSimIG: Scalable Parameter Inference for Adaptive Domain Randomization with IsaacGym [59.53949960353792]
BayesSimIG is a library that provides an implementation of BayesSim integrated with the recently released NVIDIA IsaacGym.
BayesSimIG provides an integration with TensorBoard to easily visualize slices of high-dimensional posteriors.
arXiv Detail & Related papers (2021-07-09T16:21:31Z)
- Lettuce: PyTorch-based Lattice Boltzmann Framework [0.0]
The lattice Boltzmann method (LBM) is an efficient simulation technique for computational fluid mechanics and beyond.
Here, we introduce Lettuce, a PyTorch-based LBM code with a threefold aim.
arXiv Detail & Related papers (2021-06-24T11:44:21Z)
- LS-CAT: A Large-Scale CUDA AutoTuning Dataset [0.0]
We present how we build the LS-CAT (Large-Scale CUDA AutoTuning) dataset from GitHub.
Our dataset includes 19,683 kernels focused on linear algebra.
Runtimes were benchmarked on both Nvidia GTX 980 and Nvidia T4 systems.
arXiv Detail & Related papers (2021-03-26T11:33:48Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.