Visual Tomography: Physically Faithful Volumetric Models of Partially
Translucent Objects
- URL: http://arxiv.org/abs/2312.13494v1
- Date: Thu, 21 Dec 2023 00:14:46 GMT
- Title: Visual Tomography: Physically Faithful Volumetric Models of Partially
Translucent Objects
- Authors: David Nakath, Xiangyu Weng, Mengkun She, Kevin Köser
- Abstract summary: Digital 3D representations of objects can be useful for human or computer-assisted analysis.
We propose a volumetric reconstruction approach that obtains a physical model including the interior of partially translucent objects.
Our technique photographs the object under different poses in front of a bright white light source and computes absorption and scattering per voxel.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: When created faithfully from real-world data, digital 3D representations of
objects can be useful for human or computer-assisted analysis. Such models can
also serve for generating training data for machine learning approaches in
settings where data is difficult to obtain or where too little training data
exists, e.g. by providing novel views or images in varying conditions. While
the vast majority of visual 3D reconstruction approaches focus on non-physical
models, textured object surfaces or shapes, in this contribution we propose a
volumetric reconstruction approach that obtains a physical model including the
interior of partially translucent objects such as plankton or insects. Our
technique photographs the object under different poses in front of a bright
white light source and computes absorption and scattering per voxel. It can be
interpreted as visual tomography that we solve by inverse raytracing. We
additionally suggest a method to convert non-physical NeRF media into a
physically-based volumetric grid for initialization and illustrate the
usefulness of the approach using two real-world plankton validation sets; the
lab-scanned models are finally also relit and virtually submerged in a
scenario with augmented medium and illumination conditions. Please visit the
project homepage at www.marine.informatik.uni-kiel.de/go/vito
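The per-voxel absorption-and-scattering recovery described above can be pictured as a Beer-Lambert forward model that is inverted by gradient descent over a voxel grid. The following is a minimal illustrative sketch of that idea, not the authors' implementation: it collapses absorption and scattering into a single extinction coefficient, uses nearest-neighbour ray marching, and assumes a hypothetical `query_density` callable for the NeRF-based initialization mentioned in the abstract.

```python
import numpy as np

def march_ray(sigma_t, origin, direction, step, n_steps):
    """Beer-Lambert transmittance of one backlit ray through a voxel grid of
    extinction coefficients sigma_t. Nearest-neighbour sampling stands in for
    exact ray/voxel clipping in a real ray tracer."""
    d = np.asarray(direction, float)
    d = d / np.linalg.norm(d)
    p = np.asarray(origin, float).copy()
    visited, tau = [], 0.0
    for _ in range(n_steps):
        idx = tuple(np.clip(np.rint(p).astype(int), 0, np.array(sigma_t.shape) - 1))
        visited.append(idx)
        tau += sigma_t[idx] * step          # accumulate optical depth
        p += d * step
    return visited, np.exp(-tau)            # fraction of the white backlight that survives

def init_from_nerf_density(query_density, shape, scale=1.0):
    """Hypothetical NeRF-to-physical conversion: sample a learned density field
    at voxel centres and rescale it into extinction coefficients for initialization."""
    zz, yy, xx = np.meshgrid(*[np.arange(s) for s in shape], indexing="ij")
    pts = np.stack([zz, yy, xx], axis=-1).reshape(-1, 3).astype(float)
    return scale * query_density(pts).reshape(shape)

def reconstruct(rays, observed_T, shape, sigma_init=None,
                step=1.0, n_steps=64, lr=0.5, iters=100):
    """Toy 'inverse ray tracing': gradient descent on per-voxel extinction so
    that predicted transmittances match the observed bright-field measurements."""
    sigma_t = np.zeros(shape) if sigma_init is None else sigma_init.copy()
    for _ in range(iters):
        for (origin, direction), T_obs in zip(rays, observed_T):
            visited, T = march_ray(sigma_t, origin, direction, step, n_steps)
            grad = 2.0 * (T - T_obs) * (-step * T)   # d(T - T_obs)^2 / d sigma_v
            for idx in visited:
                sigma_t[idx] = max(0.0, sigma_t[idx] - lr * grad)  # keep extinction non-negative
    return sigma_t
```

A full treatment would additionally trace scattered light and separate absorption from scattering per voxel, which this sketch deliberately ignores.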
Related papers
- RoSI: Recovering 3D Shape Interiors from Few Articulation Images [20.430308190444737]
We present a learning framework to recover the shape interiors of existing 3D models with only their exteriors from multi-view and multi-articulation images.
Our neural architecture is trained in a category-agnostic manner and it consists of a motion-aware multi-view analysis phase.
In addition, our method also predicts part articulations and is able to realize and even extrapolate the captured motions on the target 3D object.
arXiv Detail & Related papers (2023-04-13T08:45:26Z)
- MegaPose: 6D Pose Estimation of Novel Objects via Render & Compare [84.80956484848505]
MegaPose is a method to estimate the 6D pose of novel objects, that is, objects unseen during training.
We present a 6D pose refiner based on a render&compare strategy which can be applied to novel objects.
We also introduce a coarse pose estimation approach that leverages a network trained to classify whether the pose error between a synthetic rendering and an observed image of the same object can be corrected by the refiner.
arXiv Detail & Related papers (2022-12-13T19:30:03Z)
- Generative Deformable Radiance Fields for Disentangled Image Synthesis of Topology-Varying Objects [52.46838926521572]
3D-aware generative models have demonstrated superb performance in generating 3D neural radiance fields (NeRF) from collections of monocular 2D images.
We propose a generative model for synthesizing radiance fields of topology-varying objects with disentangled shape and appearance variations.
arXiv Detail & Related papers (2022-09-09T08:44:06Z)
- 6-DoF Pose Estimation of Household Objects for Robotic Manipulation: An Accessible Dataset and Benchmark [17.493403705281008]
We present a new dataset for 6-DoF pose estimation of known objects, with a focus on robotic manipulation research.
We provide 3D scanned textured models of toy grocery objects, as well as RGBD images of the objects in challenging, cluttered scenes.
Using semi-automated RGBD-to-model texture correspondences, the images are annotated with ground truth poses that were verified empirically to be accurate to within a few millimeters.
We also propose a new pose evaluation metric, ADD-H, based on the Hungarian assignment algorithm, which is robust to symmetries in object geometry without requiring their explicit enumeration; a minimal sketch of such an assignment-based metric follows the list below.
arXiv Detail & Related papers (2022-03-11T01:19:04Z)
- Aug3D-RPN: Improving Monocular 3D Object Detection by Synthetic Images with Virtual Depth [64.29043589521308]
We propose a rendering module to augment the training data by synthesizing images with virtual depths.
The rendering module takes as input the RGB image and its corresponding sparse depth image and outputs a variety of photo-realistic synthetic images.
Besides, we introduce an auxiliary module to improve the detection model by jointly optimizing it through a depth estimation task.
arXiv Detail & Related papers (2021-07-28T11:00:47Z)
- Category Level Object Pose Estimation via Neural Analysis-by-Synthesis [64.14028598360741]
In this paper we combine a gradient-based fitting procedure with a parametric neural image synthesis module.
The image synthesis network is designed to efficiently span the pose configuration space.
We experimentally show that the method can recover orientation of objects with high accuracy from 2D images alone.
arXiv Detail & Related papers (2020-08-18T20:30:47Z)
- Combining Implicit Function Learning and Parametric Models for 3D Human Reconstruction [123.62341095156611]
Implicit functions represented as deep learning approximations are powerful for reconstructing 3D surfaces.
Such features are essential in building flexible models for both computer graphics and computer vision.
We present methodology that combines detail-rich implicit functions and parametric representations.
arXiv Detail & Related papers (2020-07-22T13:46:14Z)
- Shape Prior Deformation for Categorical 6D Object Pose and Size Estimation [62.618227434286]
We present a novel learning approach to recover the 6D poses and sizes of unseen object instances from an RGB-D image.
We propose a deep network to reconstruct the 3D object model by explicitly modeling the deformation from a pre-learned categorical shape prior.
arXiv Detail & Related papers (2020-07-16T16:45:05Z)
- Learning Implicit Surface Light Fields [34.89812112073539]
Implicit representations of 3D objects have recently achieved impressive results on learning-based 3D reconstruction tasks.
We propose a novel implicit representation for capturing the visual appearance of an object in terms of its surface light field.
Our model is able to infer rich visual appearance including shadows and specular reflections.
arXiv Detail & Related papers (2020-03-27T13:17:45Z)
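Regarding the ADD-H metric mentioned in the 6-DoF household-objects entry above: one plausible reading is an ADD-style average distance in which the two transformed copies of the model point cloud are matched by a minimum-cost Hungarian assignment rather than point by point, which makes the score tolerant of symmetric geometry. The sketch below assumes SciPy's `linear_sum_assignment` solver and is illustrative rather than the paper's exact definition.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def transform(points, R, t):
    """Apply a rigid pose (3x3 rotation R, 3-vector translation t) to Nx3 points."""
    return points @ R.T + t

def add_h(points, R_est, t_est, R_gt, t_gt):
    """Assignment-based average distance: instead of pairing each model point
    with itself (as plain ADD does), match the two transformed point sets by a
    minimum-cost Hungarian assignment, which tolerates symmetric geometry."""
    p_est = transform(points, R_est, t_est)
    p_gt = transform(points, R_gt, t_gt)
    # pairwise Euclidean distances between estimated and ground-truth point sets
    cost = np.linalg.norm(p_est[:, None, :] - p_gt[None, :, :], axis=-1)
    row, col = linear_sum_assignment(cost)
    return cost[row, col].mean()
```

For a rotationally symmetric object, an estimated pose that differs from the ground truth only by a symmetry-preserving rotation yields a near-zero assignment cost, whereas point-wise ADD would report a large error.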