NeuMIP: Multi-Resolution Neural Materials
- URL: http://arxiv.org/abs/2104.02789v1
- Date: Tue, 6 Apr 2021 21:22:22 GMT
- Title: NeuMIP: Multi-Resolution Neural Materials
- Authors: Alexandr Kuznetsov, Krishna Mullia, Zexiang Xu, Miloš Hašan, and Ravi Ramamoorthi
- Abstract summary: NeuMIP is a neural method for representing and rendering a variety of material appearances at different scales.
We generalize traditional mipmap pyramids to pyramids of neural textures, combined with a fully connected network.
We also introduce neural offsets, a novel method which allows rendering materials with intricate parallax effects without any tessellation.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose NeuMIP, a neural method for representing and rendering a variety
of material appearances at different scales. Classical prefiltering
(mipmapping) methods work well on simple material properties such as diffuse
color, but fail to generalize to normals, self-shadowing, fibers or more
complex microstructures and reflectances. In this work, we generalize
traditional mipmap pyramids to pyramids of neural textures, combined with a
fully connected network. We also introduce neural offsets, a novel method which
allows rendering materials with intricate parallax effects without any
tessellation. This generalizes classical parallax mapping, but is trained
without supervision by any explicit heightfield. Neural materials within our
system support a 7-dimensional query, including position, incoming and outgoing
direction, and the desired filter kernel size. The materials have small storage
(on the order of standard mipmapping except with more texture channels), and
can be integrated within common Monte Carlo path tracing systems. We
demonstrate our method on a variety of materials, resulting in complex
appearance across levels of detail, with accurate parallax, self-shadowing, and
other effects.
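To make the mechanics of the 7-dimensional query concrete, here is a minimal PyTorch sketch of a pyramid of neural textures with a fully connected decoder. This is an illustration under stated assumptions, not the authors' implementation: all names (`NeuralMaterial`, `lookup`), the channel counts, and the 2D encoding of the directions are hypothetical; the trilinear level interpolation simply follows classical mipmapping.

```python
# Minimal sketch (assumed names and shapes, not the authors' code) of a
# neural mipmap pyramid queried by (position, directions, kernel size).
import torch
import torch.nn as nn
import torch.nn.functional as F

class NeuralMaterial(nn.Module):
    def __init__(self, resolution=512, channels=7, num_levels=8, hidden=32):
        super().__init__()
        # Pyramid of learned latent textures, halving resolution per level:
        # the neural analogue of a classical mipmap chain.
        self.textures = nn.ParameterList(
            nn.Parameter(0.01 * torch.randn(1, channels,
                                            resolution >> l, resolution >> l))
            for l in range(num_levels))
        # Small fully connected decoder: latent features plus the 2D-encoded
        # incoming/outgoing directions map to reflectance.
        self.decoder = nn.Sequential(
            nn.Linear(channels + 4, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3))

    def lookup(self, uv, lod):
        """Trilinear fetch: bilinear within the two bracketing pyramid
        levels (via grid_sample), then linear interpolation across levels."""
        lod = lod.clamp(0.0, len(self.textures) - 1.0)
        lo = lod.floor().long()
        hi = (lo + 1).clamp(max=len(self.textures) - 1)
        frac = (lod - lo.float()).unsqueeze(-1)
        grid = (uv * 2.0 - 1.0).view(1, -1, 1, 2)  # grid_sample wants [-1, 1]

        def fetch(levels):
            rows = [F.grid_sample(self.textures[l], grid[:, i:i + 1],
                                  align_corners=True).view(-1)
                    for i, l in enumerate(levels.tolist())]
            return torch.stack(rows)               # (N, channels)

        return torch.lerp(fetch(lo), fetch(hi), frac)

    def forward(self, uv, wi, wo, kernel_size):
        # Kernel size in texels -> fractional level of detail, as in mipmapping.
        lod = torch.log2(kernel_size.clamp(min=1.0))
        feats = self.lookup(uv, lod)
        return self.decoder(torch.cat([feats, wi, wo], dim=-1))

# Example query: 2D uv + 2D wi + 2D wo + 1D kernel size = 7 dimensions -> RGB.
mat = NeuralMaterial()
uv = torch.rand(1024, 2)
wi, wo = torch.randn(1024, 2), torch.randn(1024, 2)
rgb = mat(uv, wi, wo, kernel_size=torch.full((1024,), 4.0))
```

Note how the seven query dimensions split up: the 2D surface position, the two 2D-projected directions, and the scalar kernel size, which only selects the fractional pyramid level.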
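The neural-offset idea can likewise be sketched as a small, fully differentiable module: a learned texture plus an MLP predict a uv displacement from the local latent feature and the incoming direction, and the shifted coordinate replaces `uv` in the pyramid lookup above. Again, every name and shape here is assumed for illustration; the point is that gradients from the rendering loss alone train the warp, with no explicit heightfield supervision.

```python
# Hypothetical sketch of a neural offset: a direction-dependent uv warp that
# mimics parallax without an explicit heightfield. Names/shapes are assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F

class NeuralOffset(nn.Module):
    def __init__(self, channels=8, resolution=512, hidden=25):
        super().__init__()
        # Dedicated latent texture feeding the offset prediction.
        self.offset_texture = nn.Parameter(
            0.01 * torch.randn(1, channels, resolution, resolution))
        # MLP: (local latent, 2D incoming direction) -> 2D uv displacement.
        self.net = nn.Sequential(
            nn.Linear(channels + 2, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 2))

    def forward(self, uv, wi):
        grid = (uv * 2.0 - 1.0).view(1, -1, 1, 2)
        feats = F.grid_sample(self.offset_texture, grid,
                              align_corners=True)   # (1, C, N, 1)
        feats = feats.squeeze(0).squeeze(-1).t()    # (N, C)
        # The predicted displacement shifts the query before the material
        # lookup; training it end to end from the image loss alone is what
        # removes the need for heightfield labels or tessellation.
        return uv + self.net(torch.cat([feats, wi], dim=-1))

# Usage with the sketch above: warp uv first, then query the pyramid.
# rgb = mat(offset(uv, wi), wi, wo, kernel_size)
```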
Related papers
- HR Human: Modeling Human Avatars with Triangular Mesh and High-Resolution Textures from Videos [52.23323966700072]
We present a framework for acquiring human avatars equipped with high-resolution, physically based material textures and meshes from monocular video.
Our method introduces a novel information fusion strategy to combine the information from the monocular video and synthesize virtual multi-view images.
Experiments show that our approach outperforms previous representations in fidelity, and the explicit result supports deployment on common triangular-mesh pipelines.
arXiv Detail & Related papers (2024-05-18T11:49:09Z) - A Hierarchical Architecture for Neural Materials [13.144139872006287]
We introduce a neural appearance model that offers a new level of accuracy.
An inception-based core network structure captures material appearances at multiple scales.
We encode the inputs in frequency space, introduce a gradient-based loss, and apply it adaptively as training progresses.
arXiv Detail & Related papers (2023-07-19T17:00:45Z) - NeuManifold: Neural Watertight Manifold Reconstruction with Efficient
and High-Quality Rendering Support [45.68296352822415]
We present a method for generating high-quality watertight manifold meshes from multi-view input images.
Our method combines the benefits of both worlds: we take the geometry obtained from neural fields and further optimize it together with a compact neural texture representation.
arXiv Detail & Related papers (2023-05-26T17:59:21Z) - Neural Microfacet Fields for Inverse Rendering [54.15870869037466]
We present a method for recovering materials, geometry, and environment illumination from images of a scene.
Our method uses a microfacet reflectance model within a volumetric setting by treating each sample along the ray as a (potentially non-opaque) surface.
arXiv Detail & Related papers (2023-03-31T05:38:13Z) - TMO: Textured Mesh Acquisition of Objects with a Mobile Device by using
Differentiable Rendering [54.35405028643051]
We present a new pipeline for acquiring a textured mesh in the wild with a single smartphone.
Our method first introduces an RGBD-aided structure-from-motion stage, which yields filtered depth maps.
We then adopt neural implicit surface reconstruction, which allows for a high-quality mesh.
arXiv Detail & Related papers (2023-03-27T10:07:52Z) - PS-NeRF: Neural Inverse Rendering for Multi-view Photometric Stereo [22.42916940712357]
We present a neural inverse rendering method for MVPS based on implicit representation.
Our method achieves far more accurate shape reconstruction than existing MVPS and neural rendering methods.
arXiv Detail & Related papers (2022-07-23T03:55:18Z) - Extracting Triangular 3D Models, Materials, and Lighting From Images [59.33666140713829]
We present an efficient method for joint optimization of materials and lighting from multi-view image observations.
We leverage meshes with spatially varying materials and environment lighting that can be deployed in any traditional graphics engine.
arXiv Detail & Related papers (2021-11-24T13:58:20Z) - Shape From Tracing: Towards Reconstructing 3D Object Geometry and SVBRDF
Material from Images via Differentiable Path Tracing [16.975014467319443]
Differentiable path tracing is an appealing framework as it can reproduce complex appearance effects.
We show how to use differentiable ray tracing to refine an initial coarse mesh and per-mesh-facet material representation.
We also show how to refine initial reconstructions of real-world objects in unconstrained environments.
arXiv Detail & Related papers (2020-12-06T18:55:35Z) - Pyramid Attention Networks for Image Restoration [124.34970277136061]
Self-similarity is an image prior widely used in image restoration algorithms.
Recent deep convolutional neural network based methods for image restoration do not take full advantage of this self-similarity.
We present a novel Pyramid Attention module for image restoration, which captures long-range feature correspondences from a multi-scale feature pyramid.
arXiv Detail & Related papers (2020-04-28T21:12:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.