Differentiable Rendering with Perturbed Optimizers
- URL: http://arxiv.org/abs/2110.09107v1
- Date: Mon, 18 Oct 2021 08:56:23 GMT
- Title: Differentiable Rendering with Perturbed Optimizers
- Authors: Quentin Le Lidec, Ivan Laptev, Cordelia Schmid, Justin Carpentier
- Abstract summary: Reasoning about 3D scenes from their 2D image projections is one of the core problems in computer vision.
Our work highlights the link between some well-known differentiable renderer formulations and randomly smoothed optimizers.
We apply our method to 3D scene reconstruction and demonstrate its advantages on the tasks of 6D pose estimation and 3D mesh reconstruction.
- Score: 85.66675707599782
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Reasoning about 3D scenes from their 2D image projections is one of the core
problems in computer vision. Solutions to this inverse and ill-posed problem
typically involve a search for models that best explain observed image data.
Notably, images depend both on the properties of observed scenes and on the
process of image formation. Hence, if optimization techniques should be used to
explain images, it is crucial to design differentiable functions for the
projection of 3D scenes into images, also known as differentiable rendering.
Previous approaches to differentiable rendering typically replace
non-differentiable operations by smooth approximations, impacting the
subsequent 3D estimation. In this paper, we take a more general approach and
study differentiable renderers through the prism of randomized optimization and
the related notion of perturbed optimizers. In particular, our work highlights
the link between some well-known differentiable renderer formulations and
randomly smoothed optimizers, and introduces differentiable perturbed
renderers. We also propose a variance reduction mechanism to alleviate the
computational burden inherent to perturbed optimizers and introduce an adaptive
scheme to automatically adjust the smoothing parameters of the rendering
process. We apply our method to 3D scene reconstruction and demonstrate its
advantages on the tasks of 6D pose estimation and 3D mesh reconstruction. By
providing informative gradients that can be used as a strong supervisory
signal, we demonstrate the benefits of perturbed renderers to obtain more
accurate solutions when compared to the state-of-the-art alternatives using
smooth gradient approximations.
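To make the perturbed-optimizer idea concrete, the following is a minimal sketch (not the authors' implementation) of how a non-differentiable rasterization test, the Heaviside coverage step H(d) = 1[d > 0] on a signed pixel-to-edge distance d, can be smoothed by Gaussian perturbations and differentiated with a Monte Carlo score-function estimator. The antithetic sampling used for variance reduction and the fixed smoothing parameter sigma are illustrative choices, not the paper's exact mechanisms; PyTorch is assumed.

```python
import torch


class PerturbedStep(torch.autograd.Function):
    """Randomly smoothed Heaviside step H(d) = 1[d > 0].

    Forward:  E_eps[ H(d + sigma * eps) ]  with  eps ~ N(0, 1)
    Backward: Gaussian score-function estimator
              d/dd E[H(d + sigma * eps)] = E[ H(d + sigma * eps) * eps ] / sigma,
    using antithetic pairs (eps, -eps) as a simple variance-reduction step.
    """

    @staticmethod
    def forward(ctx, d, sigma, n_samples):
        eps = torch.randn(n_samples, *d.shape, device=d.device, dtype=d.dtype)
        eps = torch.cat([eps, -eps], dim=0)               # antithetic sampling
        hard = (d.unsqueeze(0) + sigma * eps > 0).to(d.dtype)
        ctx.save_for_backward(hard, eps)
        ctx.sigma = sigma
        return hard.mean(dim=0)

    @staticmethod
    def backward(ctx, grad_output):
        hard, eps = ctx.saved_tensors
        grad_d = (hard * eps).mean(dim=0) / ctx.sigma      # Monte Carlo gradient
        return grad_output * grad_d, None, None


def soft_coverage(signed_distance, sigma=0.05, n_samples=64):
    """Perturbed (randomly smoothed) per-pixel coverage from signed distances."""
    return PerturbedStep.apply(signed_distance, sigma, n_samples)
```

With Gaussian noise the smoothed coverage has the closed form Phi(d / sigma); with logistic noise it becomes the sigmoid used by soft rasterizers, which is the kind of link between existing differentiable renderers and randomly smoothed optimizers the abstract refers to. Decreasing or adaptively adjusting sigma, as the abstract proposes, trades smoother gradients early in optimization for sharper renderings later.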
Related papers
- Personalized 3D Human Pose and Shape Refinement [19.082329060985455]
Regression-based methods have dominated the field of 3D human pose and shape estimation.
We propose to construct dense correspondences between initial human model estimates and the corresponding images.
We show that our approach not only consistently leads to better image-model alignment, but also to improved 3D accuracy.
arXiv Detail & Related papers (2024-03-18T10:13:53Z)
- Scaffold-GS: Structured 3D Gaussians for View-Adaptive Rendering [71.44349029439944]
The recent 3D Gaussian Splatting method has achieved state-of-the-art rendering quality and speed.
We introduce Scaffold-GS, which uses anchor points to distribute local 3D Gaussians.
We show that our method effectively reduces redundant Gaussians while delivering high-quality rendering.
arXiv Detail & Related papers (2023-11-30T17:58:57Z)
- Differentiable Rendering for Pose Estimation in Proximity Operations [4.282159812965446]
Differentiable rendering aims to compute the derivative of the image rendering function with respect to the rendering parameters.
This paper presents a novel algorithm for 6-DoF pose estimation using a differentiable rendering pipeline; a minimal sketch of such a pose-optimization loop appears after this list.
arXiv Detail & Related papers (2022-12-24T06:12:16Z)
- Adaptive Joint Optimization for 3D Reconstruction with Differentiable Rendering [22.2095090385119]
Given an imperfectly reconstructed 3D model, most previous methods have focused on refining either geometry, texture, or camera pose alone.
We propose a novel optimization approach based on differentiable rendering, which integrates the optimization of camera pose, geometry, and texture into a unified framework.
Using differentiable rendering, an image-level adversarial loss is applied to further improve the 3D model, making it more photorealistic.
arXiv Detail & Related papers (2022-08-15T04:32:41Z)
- Differentiable Rendering for Synthetic Aperture Radar Imagery [0.0]
We propose an approach for differentiable rendering of Synthetic Aperture Radar (SAR) imagery, which combines methods from 3D computer graphics with neural rendering.
We demonstrate the approach on the inverse graphics problem of 3D Object Reconstruction from limited SAR imagery using high-fidelity simulated SAR data.
arXiv Detail & Related papers (2022-04-04T05:27:40Z)
- Extracting Triangular 3D Models, Materials, and Lighting From Images [59.33666140713829]
We present an efficient method for joint optimization of materials and lighting from multi-view image observations.
We leverage meshes with spatially-varying materials and environment lighting that can be deployed in any traditional graphics engine.
arXiv Detail & Related papers (2021-11-24T13:58:20Z)
- Inverting Generative Adversarial Renderer for Face Reconstruction [58.45125455811038]
In this work, we introduce a novel Generative Adversarial Renderer (GAR).
Instead of relying on graphics rules, GAR learns to model complicated real-world images and is capable of producing realistic renderings.
Our method achieves state-of-the-art performance on multiple face reconstruction datasets.
arXiv Detail & Related papers (2021-05-06T04:16:06Z)
- Efficient and Differentiable Shadow Computation for Inverse Problems [64.70468076488419]
Differentiable geometric computation has received increasing interest for image-based inverse problems.
We propose an efficient approach for differentiable visibility and soft shadow computation.
As our formulation is differentiable, it can be used to solve inverse problems such as texture, illumination, rigid pose, and deformation recovery from images.
arXiv Detail & Related papers (2021-04-01T09:29:05Z)
- Geometric Correspondence Fields: Learned Differentiable Rendering for 3D Pose Refinement in the Wild [96.09941542587865]
We present a novel 3D pose refinement approach based on differentiable rendering for objects of arbitrary categories in the wild.
In this way, we precisely align 3D models to objects in RGB images, which results in significantly improved 3D pose estimates.
We evaluate our approach on the challenging Pix3D dataset and achieve up to 55% relative improvement compared to state-of-the-art refinement methods in multiple metrics.
arXiv Detail & Related papers (2020-07-17T12:34:38Z)
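As a companion to the pose-estimation entries above and the abstract's application to 6D pose estimation, here is a self-contained sketch of the generic analysis-by-synthesis loop that any differentiable renderer enables: render the model under the current pose, compare with the observed image, and backpropagate through the renderer to the pose parameters. The toy point-splatting renderer, pinhole intrinsics, and optimizer settings below are illustrative placeholders, not any of the cited pipelines; PyTorch is assumed.

```python
import torch


def so3_exp(w):
    """Rodrigues' formula: axis-angle vector (3,) -> rotation matrix (3, 3)."""
    theta = torch.sqrt((w * w).sum() + 1e-12)
    k = w / theta
    zero = torch.zeros((), dtype=w.dtype)
    K = torch.stack([
        torch.stack([zero, -k[2], k[1]]),
        torch.stack([k[2], zero, -k[0]]),
        torch.stack([-k[1], k[0], zero]),
    ])
    return torch.eye(3, dtype=w.dtype) + torch.sin(theta) * K \
        + (1.0 - torch.cos(theta)) * (K @ K)


def splat_render(points_cam, image_size=64, sigma=1.5):
    """Toy differentiable renderer: pinhole projection + Gaussian point splatting."""
    f = float(image_size)                                  # toy focal length (px)
    u = f * points_cam[:, 0] / points_cam[:, 2] + image_size / 2
    v = f * points_cam[:, 1] / points_cam[:, 2] + image_size / 2
    ys, xs = torch.meshgrid(torch.arange(image_size, dtype=points_cam.dtype),
                            torch.arange(image_size, dtype=points_cam.dtype),
                            indexing="ij")
    d2 = (xs[None] - u[:, None, None]) ** 2 + (ys[None] - v[:, None, None]) ** 2
    return torch.exp(-d2 / (2.0 * sigma ** 2)).sum(dim=0).clamp(max=1.0)


# Toy "mesh": a small 3D point cloud; a ground-truth pose synthesizes the target.
torch.manual_seed(0)
points = torch.randn(64, 3) * 0.2
w_gt, t_gt = torch.tensor([0.3, -0.2, 0.1]), torch.tensor([0.05, -0.05, 2.0])
with torch.no_grad():
    observed = splat_render(points @ so3_exp(w_gt).T + t_gt)

# 6-DoF pose to recover: axis-angle rotation w and translation t.
w = torch.full((3,), 1e-3, requires_grad=True)
t = torch.tensor([0.0, 0.0, 2.0], requires_grad=True)
optimizer = torch.optim.Adam([w, t], lr=0.02)

for step in range(300):
    optimizer.zero_grad()
    rendered = splat_render(points @ so3_exp(w).T + t)
    loss = torch.nn.functional.mse_loss(rendered, observed)
    loss.backward()      # gradients flow through the renderer into (w, t)
    optimizer.step()
```

Swapping the toy splatting function for a rasterizer smoothed with perturbed optimizers (as in the earlier sketch) keeps this outer loop unchanged; only the quality of the gradients reaching the pose parameters differs.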
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.