Approximate Differentiable Rendering with Algebraic Surfaces
- URL: http://arxiv.org/abs/2207.10606v1
- Date: Thu, 21 Jul 2022 16:59:54 GMT
- Title: Approximate Differentiable Rendering with Algebraic Surfaces
- Authors: Leonid Keselman, Martial Hebert
- Abstract summary: Fuzzy Metaballs is an approximate differentiable renderer for a compact, interpretable shape representation.
Our approximate renderer focuses on rendering shapes via depth maps and silhouettes.
Compared to mesh-based differentiable renderers, our method has forward passes that are 5x faster and backwards passes that are 30x faster.
- Score: 24.7500811470085
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Differentiable renderers provide a direct mathematical link between an
object's 3D representation and images of that object. In this work, we develop
an approximate differentiable renderer for a compact, interpretable
representation, which we call Fuzzy Metaballs. Our approximate renderer focuses
on rendering shapes via depth maps and silhouettes. It sacrifices fidelity for
utility, producing fast runtimes and high-quality gradient information that can
be used to solve vision tasks. Compared to mesh-based differentiable renderers,
our method has forward passes that are 5x faster and backwards passes that are
30x faster. The depth maps and silhouette images generated by our method are
smooth and defined everywhere. In our evaluation of differentiable renderers
for pose estimation, we show that our method is the only one comparable to
classic techniques. In shape from silhouette, our method performs well using
only gradient descent and a per-pixel loss, without any surrogate losses or
regularization. These reconstructions work well even on natural video sequences
with segmentation artifacts. Project page:
https://leonidk.github.io/fuzzy-metaballs
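The recipe the abstract describes (a compact Gaussian-mixture shape, smooth per-ray depth and silhouette values, and plain gradient descent on a per-pixel loss) can be sketched in a few lines of JAX, the project's implementation language. This is a minimal illustration under assumed names (`ray_silhouette`, `pose_loss`) and a generic blending rule, not the paper's exact formulation:

```python
import jax
import jax.numpy as jnp

def ray_silhouette(origin, direction, means, covs_inv, weights):
    """Blend K anisotropic Gaussians along one ray into a soft
    silhouette value and a rough depth estimate."""
    # Per-component ray parameter maximizing the Gaussian density:
    # t* = d^T C^{-1} (mu - o) / d^T C^{-1} d
    num = jnp.einsum('d,kde,ke->k', direction, covs_inv, means - origin)
    den = jnp.einsum('d,kde,e->k', direction, covs_inv, direction)
    t = num / den
    x = origin + t[:, None] * direction - means
    mahal = jnp.einsum('kd,kde,ke->k', x, covs_inv, x)
    dens = weights * jnp.exp(-0.5 * mahal)                   # peak density per component
    alpha = 1.0 - jnp.prod(1.0 - jnp.clip(dens, 0.0, 1.0))   # soft union over components
    depth = jnp.sum(jax.nn.softmax(20.0 * dens) * t)         # density-weighted depth
    return alpha, depth

render = jax.vmap(ray_silhouette, in_axes=(None, 0, None, None, None))

def pose_loss(translation, rays, means, covs_inv, weights, target_sil):
    # Per-pixel silhouette loss; rotation is omitted for brevity.
    alpha, _ = render(jnp.zeros(3), rays, means + translation, covs_inv, weights)
    return jnp.mean((alpha - target_sil) ** 2)

grad_fn = jax.jit(jax.grad(pose_loss))  # gradients flow through the renderer
```

Because `alpha` and `depth` are smooth everywhere, a pose fit here is plain gradient descent, e.g. `translation -= 0.1 * grad_fn(translation, rays, means, covs_inv, weights, target_sil)`, consistent with the abstract's claim that no surrogate losses or regularization are needed.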
Related papers
- EVER: Exact Volumetric Ellipsoid Rendering for Real-time View Synthesis [72.53316783628803]
We present Exact Volumetric Ellipsoid Rendering (EVER), a method for real-time differentiable emission-only volume rendering.
Unlike the recent rasterization-based approach of 3D Gaussian Splatting (3DGS), our primitive-based representation allows for exact volume rendering.
We show that our method is more accurate, with fewer blending issues, than 3DGS and follow-up work on view-consistent rendering.
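The "exact volume rendering" claim has a clean closed form for constant-density primitives: between consecutive interval endpoints along a ray, the total density is constant, so each segment's opacity is 1 - exp(-sigma * dt). A minimal sketch of that compositing step (illustrative names; the ellipsoid-ray intersection that produces the intervals is omitted), not EVER's actual implementation:

```python
import jax.numpy as jnp

def render_ray(t_in, t_out, sigma, color):
    """Exact emission-only radiance along one ray through K
    constant-density primitives covering intervals [t_in, t_out]."""
    events = jnp.sort(jnp.concatenate([t_in, t_out]))
    radiance = jnp.zeros(3)
    T = 1.0  # transmittance accumulated so far
    for a, b in zip(events[:-1], events[1:]):
        inside = (t_in <= a) & (b <= t_out)          # primitives active on (a, b)
        s_k = jnp.where(inside, sigma, 0.0)
        s = jnp.sum(s_k)                             # constant density on the segment
        c = jnp.sum(s_k[:, None] * color, axis=0) / jnp.maximum(s, 1e-9)
        alpha = 1.0 - jnp.exp(-s * (b - a))          # closed-form segment opacity
        radiance = radiance + T * alpha * c
        T = T * (1.0 - alpha)
    return radiance
```

Unlike splatting, no per-primitive sort order or alpha-blending approximation is involved: the integral over each piecewise-constant segment is evaluated exactly.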
arXiv Detail & Related papers (2024-10-02T17:59:09Z)
- Bridging 3D Gaussian and Mesh for Freeview Video Rendering [57.21847030980905]
GauMesh bridges 3D Gaussians and meshes to model and render dynamic scenes.
We show that our approach adapts the appropriate type of primitives to represent the different parts of the dynamic scene.
arXiv Detail & Related papers (2024-03-18T04:01:26Z)
- FLARE: Fast Learning of Animatable and Relightable Mesh Avatars [64.48254296523977]
Our goal is to efficiently learn personalized animatable 3D head avatars from videos that are geometrically accurate, realistic, relightable, and compatible with current rendering systems.
We introduce FLARE, a technique that enables the creation of animatable and relightable avatars from a single monocular video.
arXiv Detail & Related papers (2023-10-26T16:13:00Z)
- Flexible Techniques for Differentiable Rendering with 3D Gaussians [29.602516169951556]
Neural Radiance Fields demonstrated that photorealistic novel view synthesis is within reach, but was gated by performance requirements for fast reconstruction of real scenes and objects.
We develop extensions to alternative shape representations, in particular, 3D watertight meshes and rendering per-ray normals.
These reconstructions are quick, robust, and easily performed on GPU or CPU.
arXiv Detail & Related papers (2023-08-28T17:38:31Z)
- Differentiable Blocks World: Qualitative 3D Decomposition by Rendering Primitives [70.32817882783608]
We present an approach that produces a simple, compact, and actionable 3D world representation by means of 3D primitives.
Unlike existing primitive decomposition methods that rely on 3D input data, our approach operates directly on images.
We show that the resulting textured primitives faithfully reconstruct the input images and accurately model the visible 3D points.
arXiv Detail & Related papers (2023-07-11T17:58:31Z)
- HQ3DAvatar: High Quality Controllable 3D Head Avatar [65.70885416855782]
This paper presents a novel approach to building highly photorealistic digital head avatars.
Our method learns a canonical space via an implicit function parameterized by a neural network.
At test time, our method is driven by a monocular RGB video.
arXiv Detail & Related papers (2023-03-25T13:56:33Z)
- Geometric Correspondence Fields: Learned Differentiable Rendering for 3D Pose Refinement in the Wild [96.09941542587865]
We present a novel 3D pose refinement approach based on differentiable rendering for objects of arbitrary categories in the wild.
In this way, we precisely align 3D models to objects in RGB images which results in significantly improved 3D pose estimates.
We evaluate our approach on the challenging Pix3D dataset and achieve up to 55% relative improvement compared to state-of-the-art refinement methods in multiple metrics.
arXiv Detail & Related papers (2020-07-17T12:34:38Z)