OpenMaterial: A Comprehensive Dataset of Complex Materials for 3D Reconstruction
- URL: http://arxiv.org/abs/2406.08894v1
- Date: Thu, 13 Jun 2024 07:46:17 GMT
- Title: OpenMaterial: A Comprehensive Dataset of Complex Materials for 3D Reconstruction
- Authors: Zheng Dang, Jialu Huang, Fei Wang, Mathieu Salzmann
- Abstract summary: We introduce the OpenMaterial dataset, comprising 1001 objects made of 295 distinct materials.
OpenMaterial provides comprehensive annotations, including 3D shape, material type, camera pose, depth, and object mask.
It stands as the first large-scale dataset enabling quantitative evaluations of existing algorithms on objects with diverse and challenging materials.
- Score: 54.706361479680055
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Recent advances in deep learning such as neural radiance fields and implicit neural representations have significantly propelled the field of 3D reconstruction. However, accurately reconstructing objects with complex optical properties, such as metals and glass, remains a formidable challenge due to their unique specular and light-transmission characteristics. To facilitate the development of solutions to these challenges, we introduce the OpenMaterial dataset, comprising 1001 objects made of 295 distinct materials (including conductors, dielectrics, plastics, and their roughened variants), captured under 723 diverse lighting conditions. To this end, we utilized physics-based rendering with laboratory-measured Indices of Refraction (IOR) and generated high-fidelity multiview images that closely replicate real-world objects. OpenMaterial provides comprehensive annotations, including 3D shape, material type, camera pose, depth, and object mask. It stands as the first large-scale dataset enabling quantitative evaluations of existing algorithms on objects with diverse and challenging materials, thereby paving the way for the development of 3D reconstruction algorithms capable of handling complex material properties.
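The abstract's physics-based rendering hinges on laboratory-measured Indices of Refraction. As a minimal illustration of why the IOR matters for the specular behavior the dataset targets (this is a standard optics computation, not code from the dataset), the unpolarized Fresnel reflectance at a dielectric interface can be computed directly from the two IORs:

```python
import math

def fresnel_dielectric(cos_theta_i: float, n1: float, n2: float) -> float:
    """Unpolarized Fresnel reflectance at a dielectric interface.

    cos_theta_i: cosine of the incidence angle (0..1).
    n1, n2: IORs of the incident and transmitting media.
    Returns the fraction of light reflected (1.0 on total internal reflection).
    """
    sin_theta_i = math.sqrt(max(0.0, 1.0 - cos_theta_i ** 2))
    sin_theta_t = n1 / n2 * sin_theta_i          # Snell's law
    if sin_theta_t >= 1.0:                       # total internal reflection
        return 1.0
    cos_theta_t = math.sqrt(1.0 - sin_theta_t ** 2)
    # Fresnel equations for s- and p-polarized light, averaged
    r_s = (n1 * cos_theta_i - n2 * cos_theta_t) / (n1 * cos_theta_i + n2 * cos_theta_t)
    r_p = (n2 * cos_theta_i - n1 * cos_theta_t) / (n2 * cos_theta_i + n1 * cos_theta_t)
    return 0.5 * (r_s ** 2 + r_p ** 2)

# At normal incidence on glass (IOR ~1.5): ((1.5 - 1) / (1.5 + 1))^2 = 0.04
print(round(fresnel_dielectric(1.0, 1.0, 1.5), 3))  # → 0.04
```

Small IOR changes shift this reflectance noticeably, which is one reason measured rather than guessed IORs matter for faithful renderings of metals and glass.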
Related papers
- RMAFF-PSN: A Residual Multi-Scale Attention Feature Fusion Photometric Stereo Network [37.759675702107586]
Predicting accurate normal maps of objects from two-dimensional images is challenging in regions with complex structure and spatial material variations.
We propose a method that calibrates feature information from different resolution stages and scales of the image.
This approach preserves more physical information, such as texture and geometry of the object in complex regions.
arXiv Detail & Related papers (2024-04-11T14:05:37Z)
- Zero-Shot Multi-Object Shape Completion [59.325611678171974]
We present a 3D shape completion method that recovers the complete geometry of multiple objects in complex scenes from a single RGB-D image.
Our method outperforms the current state-of-the-art on both synthetic and real-world datasets.
arXiv Detail & Related papers (2024-03-21T17:59:59Z)
- Neural-PBIR Reconstruction of Shape, Material, and Illumination [26.628189591572074]
We introduce an accurate and highly efficient object reconstruction pipeline combining neural object reconstruction and physics-based inverse rendering (PBIR).
Our pipeline first leverages neural SDF-based shape reconstruction to produce a high-quality but potentially imperfect object shape.
In the last stage, initialized by the neural predictions, we perform PBIR to refine the initial results and obtain the final high-quality reconstruction of object shape, material, and illumination.
arXiv Detail & Related papers (2023-04-26T11:02:04Z)
- Neural Fields meet Explicit Geometric Representation for Inverse Rendering of Urban Scenes [62.769186261245416]
We present a novel inverse rendering framework for large urban scenes capable of jointly reconstructing the scene geometry, spatially-varying materials, and HDR lighting from a set of posed RGB images with optional depth.
Specifically, we use a neural field to account for the primary rays, and use an explicit mesh (reconstructed from the underlying neural field) for modeling secondary rays that produce higher-order lighting effects such as cast shadows.
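The summary above describes casting secondary rays against an explicit mesh to model effects like cast shadows. A generic way to test such a shadow ray against one mesh triangle (a standard Möller–Trumbore intersection, shown here only as an illustration of the idea, not the paper's implementation) is:

```python
import numpy as np

def ray_triangle_hit(origin, direction, v0, v1, v2, eps=1e-8):
    """Möller–Trumbore test: distance t along the ray to triangle (v0, v1, v2), or None.

    A shadow ray from a surface point toward a light is occluded if any
    mesh triangle is hit before the light.
    """
    e1, e2 = v1 - v0, v2 - v0
    h = np.cross(direction, e2)
    a = np.dot(e1, h)
    if abs(a) < eps:                  # ray parallel to the triangle plane
        return None
    f = 1.0 / a
    s = origin - v0
    u = f * np.dot(s, h)              # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(s, e1)
    v = f * np.dot(direction, q)      # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = f * np.dot(e2, q)
    return t if t > eps else None

# Shadow test: surface point at the origin, light along +z, occluding triangle at z = 1
v0, v1, v2 = np.array([-1., -1., 1.]), np.array([2., -1., 1.]), np.array([-1., 2., 1.])
t = ray_triangle_hit(np.array([0., 0., 0.]), np.array([0., 0., 1.]), v0, v1, v2)
print(t)  # → 1.0
```

Iterating this test over the mesh's triangles (typically via an acceleration structure) yields the visibility term that neural fields alone struggle to provide.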
arXiv Detail & Related papers (2023-04-06T17:51:54Z)
- Unsupervised Learning of 3D Object Categories from Videos in the Wild [75.09720013151247]
We focus on learning a model from multiple views of a large collection of object instances.
We propose a new neural network design, called warp-conditioned ray embedding (WCR), which significantly improves reconstruction.
Our evaluation demonstrates performance improvements over several deep monocular reconstruction baselines on existing benchmarks.
arXiv Detail & Related papers (2021-03-30T17:57:01Z)
- Shape From Tracing: Towards Reconstructing 3D Object Geometry and SVBRDF Material from Images via Differentiable Path Tracing [16.975014467319443]
Differentiable path tracing is an appealing framework as it can reproduce complex appearance effects.
We show how to use differentiable ray tracing to refine an initial coarse mesh and per-mesh-facet material representation.
We also show how to refine initial reconstructions of real-world objects in unconstrained environments.
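The refinement idea in this entry — backpropagating a photometric loss through a renderer into material parameters — can be sketched with a deliberately tiny toy (a single Lambertian albedo and an analytic gradient; the paper's actual differentiable path tracer is far richer):

```python
# Toy gradient-based appearance refinement (illustrative only, not the
# paper's renderer): fit one Lambertian albedo to an observed pixel value
# by descending the photometric loss with its analytic derivative.
shading = 0.8            # assumed known lighting/geometry term (n · l)
observed = 0.4           # the "photograph" of the pixel
albedo = 0.1             # coarse initial material estimate
lr = 0.5
for _ in range(200):
    rendered = albedo * shading                    # forward rendering model
    grad = 2.0 * (rendered - observed) * shading   # d(loss)/d(albedo)
    albedo -= lr * grad                            # gradient step
print(round(albedo, 3))  # → 0.5 (since 0.5 * 0.8 = 0.4)
```

Differentiable path tracing generalizes this loop to full light transport, so geometry and SVBRDF parameters receive gradients from the same image-space loss.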
arXiv Detail & Related papers (2020-12-06T18:55:35Z)
- 3DMaterialGAN: Learning 3D Shape Representation from Latent Space for Materials Science Applications [7.449993399792031]
3DMaterialGAN is capable of recognizing and synthesizing individual grains whose morphology conforms to a given 3D polycrystalline material microstructure.
We show that this method performs comparably to or better than the state-of-the-art on benchmark annotated 3D datasets.
This framework lays the foundation for the recognition and synthesis of polycrystalline material microstructures.
arXiv Detail & Related papers (2020-07-27T21:55:16Z)
- Through the Looking Glass: Neural 3D Reconstruction of Transparent Shapes [75.63464905190061]
Complex light paths induced by refraction and reflection have prevented both traditional and deep multiview stereo methods from reconstructing transparent shapes.
We propose a physically-based network to recover 3D shape of transparent objects using a few images acquired with a mobile phone camera.
Our experiments show successful recovery of high-quality 3D geometry for complex transparent shapes using as few as 5-12 natural images.
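The refraction that makes transparent shapes hard for multiview stereo is governed by Snell's law. As a small self-contained illustration (standard optics, not this paper's network), bending a ray at a surface given the IOR ratio looks like:

```python
import math

def refract(d, n, eta):
    """Refract unit direction d at a surface with unit normal n.

    eta = n_incident / n_transmitted. Returns the refracted unit vector,
    or None on total internal reflection (Snell's law in vector form).
    """
    cos_i = -sum(di * ni for di, ni in zip(d, n))
    k = 1.0 - eta ** 2 * (1.0 - cos_i ** 2)
    if k < 0.0:                      # total internal reflection
        return None
    cos_t = math.sqrt(k)
    return tuple(eta * di + (eta * cos_i - cos_t) * ni for di, ni in zip(d, n))

# Normal incidence passes straight through, regardless of the IOR ratio
out = refract((0.0, 0.0, -1.0), (0.0, 0.0, 1.0), 1.0 / 1.5)
print(tuple(round(c, 6) for c in out))  # → (0.0, 0.0, -1.0)
```

Each refraction event multiplies the ambiguity in the light path, which is why even a few mobile-phone images require strong physical priors to invert.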
arXiv Detail & Related papers (2020-04-22T23:51:30Z)
- Multiview Neural Surface Reconstruction by Disentangling Geometry and Appearance [46.488713939892136]
We introduce a neural network that simultaneously learns the unknown geometry, camera parameters, and a neural architecture that approximates the light reflected from the surface towards the camera.
We trained our network on real-world 2D images of objects with different material properties, lighting conditions, and noisy camera initializations from the DTU MVS dataset.
arXiv Detail & Related papers (2020-03-22T10:20:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.