gsplat: An Open-Source Library for Gaussian Splatting
- URL: http://arxiv.org/abs/2409.06765v1
- Date: Tue, 10 Sep 2024 17:57:38 GMT
- Title: gsplat: An Open-Source Library for Gaussian Splatting
- Authors: Vickie Ye, Ruilong Li, Justin Kerr, Matias Turkulainen, Brent Yi, Zhuoyang Pan, Otto Seiskari, Jianbo Ye, Jeffrey Hu, Matthew Tancik, Angjoo Kanazawa
- Abstract summary: gsplat is an open-source library designed for training and developing Gaussian Splatting methods.
It features a front-end with Python bindings compatible with the PyTorch library and a back-end with highly optimized kernels.
- Score: 28.65527747971257
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: gsplat is an open-source library designed for training and developing Gaussian Splatting methods. It features a front-end with Python bindings compatible with the PyTorch library and a back-end with highly optimized CUDA kernels. gsplat offers numerous features that enhance the optimization of Gaussian Splatting models, which include optimization improvements for speed, memory, and convergence times. Experimental results demonstrate that gsplat achieves up to 10% less training time and 4x less memory than the original implementation. Utilized in several research projects, gsplat is actively maintained on GitHub. Source code is available at https://github.com/nerfstudio-project/gsplat under Apache License 2.0. We welcome contributions from the open-source community.
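To illustrate the front-end, here is a minimal sketch of a differentiable render call through gsplat's rasterization entry point (the toy scene, camera, and tensor shapes are illustrative assumptions; the exact signature may vary by library version):

    import torch
    from gsplat import rasterization

    device = "cuda"
    N = 1000  # number of Gaussians

    # Per-Gaussian parameters; all are ordinary torch tensors, so the
    # rendered image is differentiable w.r.t. every attribute.
    means = torch.randn(N, 3, device=device) * 0.5
    means[:, 2] += 5.0                             # push in front of the camera
    means.requires_grad_()
    quats     = torch.randn(N, 4, device=device)   # orientations (quaternions)
    scales    = torch.rand(N, 3, device=device)    # per-axis extents
    opacities = torch.rand(N, device=device)       # alpha in [0, 1]
    colors    = torch.rand(N, 3, device=device)    # RGB

    viewmats = torch.eye(4, device=device)[None]   # [1, 4, 4] world-to-camera
    Ks = torch.tensor([[[300.0, 0.0, 150.0],
                        [0.0, 300.0, 100.0],
                        [0.0, 0.0, 1.0]]], device=device)  # [1, 3, 3] intrinsics

    render_colors, render_alphas, meta = rasterization(
        means, quats, scales, opacities, colors, viewmats, Ks, width=300, height=200
    )
    render_colors.mean().backward()  # gradients flow back through the CUDA kernels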
Related papers
- Mathematical Supplement for the $\texttt{gsplat}$ Library [31.200552171251708]
This report provides the mathematical details of the gsplat library, a modular toolbox for efficient differentiable Gaussian splatting.
It provides a self-contained reference for the computations involved in the forward and backward passes of differentiable Gaussian splatting.
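For orientation, the central forward-pass step such a derivation covers is the EWA-style projection of each 3D Gaussian into screen space; the sketch below uses generic symbols and is not necessarily the report's exact notation:

    % Projection of a 3D Gaussian with mean \mu and covariance \Sigma into a
    % 2D screen-space Gaussian. R, t: world-to-camera rotation and translation;
    % \pi: perspective projection; J: Jacobian of \pi at the camera-space mean.
    \begin{align*}
      p       &= R\,\mu + t, \\
      \mu'    &= \pi(p), \\
      \Sigma' &= J R\,\Sigma\,R^{\top} J^{\top}
    \end{align*}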
arXiv Detail & Related papers (2023-12-04T18:50:41Z)
- PockEngine: Sparse and Efficient Fine-tuning in a Pocket [62.955793932377524]
We introduce PockEngine: a tiny, sparse and efficient engine to enable fine-tuning on various edge devices.
PockEngine supports sparse backpropagation and sparsely updates the model with measured memory saving and latency reduction.
Remarkably, PockEngine enables fine-tuning LLaMav2-7B on NVIDIA Jetson AGX Orin at 550 tokens/s, 7.9$\times$ faster than PyTorch.
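The sparse-update idea can be sketched generically in PyTorch; this is not PockEngine's actual API, only the underlying concept of backpropagating into a small trainable subset:

    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 10))

    # Freeze everything, then re-enable gradients for a sparse subset
    # (here: only the last layer), so backprop skips frozen parameters.
    for p in model.parameters():
        p.requires_grad_(False)
    for p in model[-1].parameters():
        p.requires_grad_(True)

    opt = torch.optim.SGD([p for p in model.parameters() if p.requires_grad], lr=1e-2)
    x, target = torch.randn(8, 64), torch.randint(0, 10, (8,))
    loss = nn.functional.cross_entropy(model(x), target)
    loss.backward()   # gradients exist only for the trainable subset
    opt.step()        # update cost scales with that subset, not the full model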
arXiv Detail & Related papers (2023-10-26T19:46:11Z)
- UncertaintyPlayground: A Fast and Simplified Python Library for Uncertainty Estimation [0.0]
UncertaintyPlayground is a Python library built on PyTorch and GPyTorch for uncertainty estimation in supervised learning tasks.
The library offers fast training for Gaussian and multi-modal outcome distributions.
It can visualize the prediction intervals of one or more instances.
arXiv Detail & Related papers (2023-10-23T18:36:54Z)
- pyGSL: A Graph Structure Learning Toolkit [14.000763778781547]
pyGSL is a Python library that provides efficient implementations of state-of-the-art graph structure learning models.
pyGSL is implemented in a GPU-friendly way, allowing it to scale to much larger network tasks.
arXiv Detail & Related papers (2022-11-07T14:23:10Z)
- Stochastic Gradient Descent without Full Data Shuffle [65.97105896033815]
CorgiPile is a hierarchical data-shuffling strategy that avoids a full data shuffle while maintaining a convergence rate comparable to that of SGD with a full shuffle.
Our results show that CorgiPile achieves a convergence rate comparable to full-shuffle SGD for both deep learning and generalized linear models.
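A minimal sketch of the two-level idea (shuffle the order of on-disk blocks, then shuffle tuples inside a small in-memory buffer; block and buffer sizes here are illustrative, not the paper's):

    import random
    from typing import Iterator, List, Sequence

    def block_shuffle_stream(blocks: Sequence[List[int]],
                             buffer_blocks: int = 4,
                             seed: int = 0) -> Iterator[int]:
        rng = random.Random(seed)
        order = list(range(len(blocks)))
        rng.shuffle(order)                 # level 1: shuffle block order (sequential I/O per block)
        buffer: List[int] = []
        for i in order:
            buffer.extend(blocks[i])       # read one block at a time
            if len(buffer) >= buffer_blocks * len(blocks[0]):
                rng.shuffle(buffer)        # level 2: shuffle tuples within the buffer
                yield from buffer
                buffer.clear()
        rng.shuffle(buffer)                # flush any remaining tuples
        yield from buffer

    # e.g. stream a partial shuffle of 0..99 stored in ten blocks of ten
    stream = list(block_shuffle_stream([list(range(i, i + 10)) for i in range(0, 100, 10)]))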
arXiv Detail & Related papers (2022-06-12T20:04:31Z)
- Repro: An Open-Source Library for Improving the Reproducibility and Usability of Publicly Available Research Code [74.28810048824519]
Repro is an open-source library that aims to improve the usability of research code.
It provides a lightweight Python API for running software released by researchers within Docker containers.
arXiv Detail & Related papers (2022-04-29T01:54:54Z)
- ReservoirComputing.jl: An Efficient and Modular Library for Reservoir Computing Models [0.17499351967216337]
ReservoirComputing.jl is an open source Julia library for reservoir computing models.
The code and documentation are hosted on GitHub under an MIT license.
arXiv Detail & Related papers (2022-04-08T13:33:09Z)
- Scaling Up Models and Data with $\texttt{t5x}$ and $\texttt{seqio}$ [118.04625413322827]
$\texttt{t5x}$ and $\texttt{seqio}$ are open-source software libraries for building and training language models.
These libraries have been used to train models with hundreds of billions of parameters on datasets with multiple terabytes of training data.
arXiv Detail & Related papers (2022-03-31T17:12:13Z)
- Kernel Operations on the GPU, with Autodiff, without Memory Overflows [5.669790037378094]
The KeOps library provides fast and memory-efficient GPU support for tensors whose entries are given by a mathematical formula.
KeOps alleviates the major bottleneck of tensor-centric libraries for kernel and geometric applications: memory consumption.
KeOps combines optimized C++/CUDA schemes with binders for high-level languages: Python (Numpy and PyTorch), Matlab and R.
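For example, a Gaussian-kernel matrix-vector product written with pykeops' LazyTensor never materializes the full N-by-M kernel matrix (a minimal sketch; problem sizes are illustrative):

    import torch
    from pykeops.torch import LazyTensor

    N, M, D = 10000, 20000, 3
    x = torch.randn(N, D, requires_grad=True)
    y = torch.randn(M, D)
    b = torch.randn(M, 1)

    x_i = LazyTensor(x[:, None, :])             # symbolic [N, 1, D] "i" variable
    y_j = LazyTensor(y[None, :, :])             # symbolic [1, M, D] "j" variable
    K_ij = (-((x_i - y_j) ** 2).sum(-1)).exp()  # symbolic Gaussian kernel

    a = K_ij @ b          # [N, 1], evaluated by one fused C++/CUDA reduction
    a.sum().backward()    # autodiff traverses the symbolic formula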
arXiv Detail & Related papers (2020-03-27T08:54:10Z)
- MOGPTK: The Multi-Output Gaussian Process Toolkit [71.08576457371433]
We present MOGPTK, a Python package for multi-channel data modelling using Gaussian processes (GPs).
The aim of this toolkit is to make multi-output GP (MOGP) models accessible to researchers, data scientists, and practitioners alike.
arXiv Detail & Related papers (2020-02-09T23:34:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.