Mathematical Supplement for the $\texttt{gsplat}$ Library
- URL: http://arxiv.org/abs/2312.02121v1
- Date: Mon, 4 Dec 2023 18:50:41 GMT
- Title: Mathematical Supplement for the $\texttt{gsplat}$ Library
- Authors: Vickie Ye and Angjoo Kanazawa
- Abstract summary: This report provides the mathematical details of the gsplat library, a modular toolbox for efficient differentiable Gaussian splatting.
It provides a self-contained reference for the computations involved in the forward and backward passes of differentiable Gaussian splatting.
- Score: 31.200552171251708
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This report provides the mathematical details of the gsplat library, a
modular toolbox for efficient differentiable Gaussian splatting, as proposed by
Kerbl et al. It provides a self-contained reference for the computations
involved in the forward and backward passes of differentiable Gaussian
splatting. To facilitate practical usage and development, we provide a
user-friendly Python API that exposes each component of the forward and
backward passes in rasterization at github.com/nerfstudio-project/gsplat.
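The forward pass the supplement documents ends with per-pixel front-to-back alpha compositing of depth-sorted Gaussians. As a minimal illustrative sketch (plain Python, not the gsplat API, which implements this step in CUDA kernels):

```python
# Sketch of front-to-back alpha compositing, the per-pixel accumulation
# step in the Gaussian splatting forward pass. Illustrative only.

def composite(colors, alphas):
    """Blend depth-sorted (color, alpha) samples for one pixel.

    colors: list of RGB tuples, nearest Gaussian first.
    alphas: per-Gaussian opacity after the 2D Gaussian falloff.
    Returns the accumulated RGB color.
    """
    out = [0.0, 0.0, 0.0]
    transmittance = 1.0  # T_i = prod_{j<i} (1 - alpha_j)
    for color, alpha in zip(colors, alphas):
        weight = alpha * transmittance
        out = [o + weight * c for o, c in zip(out, color)]
        transmittance *= (1.0 - alpha)
        if transmittance < 1e-4:  # early termination, as in tile rasterizers
            break
    return out

# Example: a mostly opaque red splat in front of a green one.
print(composite([(1, 0, 0), (0, 1, 0)], [0.8, 0.5]))  # → [0.8, 0.1, 0.0]
```

The backward pass differentiates this same accumulation with respect to each Gaussian's color and opacity, which is what the library's exposed backward components compute.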
Related papers
- gsplat: An Open-Source Library for Gaussian Splatting [28.65527747971257]
gsplat is an open-source library designed for training and developing Gaussian Splatting methods.
It features a front-end with Python bindings compatible with the PyTorch library and a back-end with highly optimized kernels.
arXiv Detail & Related papers (2024-09-10T17:57:38Z)
- ShapeSplat: A Large-scale Dataset of Gaussian Splats and Their Self-Supervised Pretraining [104.34751911174196]
We build a large-scale dataset of 3DGS using ShapeNet and ModelNet datasets.
Our dataset ShapeSplat consists of 65K objects from 87 unique categories.
We introduce Gaussian-MAE, which highlights the unique benefits of representation learning from Gaussian parameters.
arXiv Detail & Related papers (2024-08-20T14:49:14Z)
- Gradients of Functions of Large Matrices [18.361820028457718]
We show how to differentiate workhorses of numerical linear algebra efficiently.
We derive previously unknown adjoint systems for Lanczos and Arnoldi iterations, implement them in JAX, and show that the resulting code can compete with Diffrax.
All this is achieved without any problem-specific code optimisation.
arXiv Detail & Related papers (2024-05-27T15:39:45Z)
- Ensemble-based gradient inference for particle methods in optimization and sampling [2.9005223064604078]
We propose an approach based on function evaluations and Bayesian inference to extract higher-order differential information.
We suggest using this information to improve established ensemble-based numerical methods for optimization and sampling.
arXiv Detail & Related papers (2022-09-23T09:21:35Z)
- Captum: A unified and generic model interpretability library for PyTorch [49.72749684393332]
We introduce a novel, unified, open-source model interpretability library for PyTorch.
The library contains generic implementations of a number of gradient and perturbation-based attribution algorithms.
It can be used for both classification and non-classification models.
arXiv Detail & Related papers (2020-09-16T18:57:57Z)
- Picasso: A Sparse Learning Library for High Dimensional Data Analysis in R and Python [77.33905890197269]
We describe a new library which implements a unified pathwise coordinate optimization for a variety of sparse learning problems.
The library is coded in C++ and has user-friendly R and Python wrappers.
arXiv Detail & Related papers (2020-06-27T02:39:24Z)
- MOGPTK: The Multi-Output Gaussian Process Toolkit [71.08576457371433]
We present MOGPTK, a Python package for multi-channel data modelling using Gaussian processes (GPs).
The aim of this toolkit is to make multi-output GP (MOGP) models accessible to researchers, data scientists, and practitioners alike.
arXiv Detail & Related papers (2020-02-09T23:34:49Z)
- Multi-layer Optimizations for End-to-End Data Analytics [71.05611866288196]
We introduce Iterative Functional Aggregate Queries (IFAQ), a framework that realizes an alternative approach.
IFAQ treats the feature extraction query and the learning task as one program given in IFAQ's domain-specific language.
We show that a Scala implementation of IFAQ can outperform mlpack, Scikit, and specialization by several orders of magnitude for linear regression and regression tree models over several relational datasets.
arXiv Detail & Related papers (2020-01-10T16:14:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.