ROSA: Reconstructing Object Shape and Appearance Textures by Adaptive Detail Transfer
- URL: http://arxiv.org/abs/2501.18595v1
- Date: Thu, 30 Jan 2025 18:59:54 GMT
- Title: ROSA: Reconstructing Object Shape and Appearance Textures by Adaptive Detail Transfer
- Authors: Julian Kaltheuner, Patrick Stotko, Reinhard Klein
- Abstract summary: We present an inverse rendering method that directly optimizes mesh geometry with spatially adaptive mesh resolution, based solely on the image data.
In particular, we refine the mesh and locally condition the surface smoothness based on the estimated normal texture and mesh curvature.
In addition, we enable the reconstruction of fine appearance details in high-resolution textures through a pioneering tile-based method.
- Abstract: Reconstructing an object's shape and appearance in terms of a mesh textured by a spatially-varying bidirectional reflectance distribution function (SVBRDF) from a limited set of images captured under collocated light is an ill-posed problem. Previous state-of-the-art approaches either aim to reconstruct the appearance directly on the geometry or additionally use texture normals as part of the appearance features. However, this requires detailed but inefficiently large meshes that would have to be simplified in a post-processing step, or suffers from well-known limitations of normal maps such as missing shadows or incorrect silhouettes. Another limiting factor is the fixed and typically low resolution of the texture estimation, resulting in the loss of important surface details. To overcome these problems, we present ROSA, an inverse rendering method that directly optimizes mesh geometry with spatially adaptive mesh resolution, based solely on the image data. In particular, we refine the mesh and locally condition the surface smoothness based on the estimated normal texture and mesh curvature. In addition, we enable the reconstruction of fine appearance details in high-resolution textures through a pioneering tile-based method that operates on a single pre-trained decoder network but is not limited by the network output resolution.
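The tile-based idea at the end of the abstract can be illustrated with a minimal sketch. The decoder, tile size, and tiling scheme below are assumptions for illustration, not the authors' implementation: a decoder with a fixed output resolution is evaluated once per tile, and the tiles are assembled into a texture larger than the decoder's native output.

```python
# Illustrative sketch only: ROSA uses a pre-trained neural decoder; here a
# simple stand-in function plays that role so the tiling logic is visible.

TILE = 4  # decoder's fixed output resolution (toy value, assumed)

def decode_tile(u0, v0):
    """Stand-in decoder: returns a TILE x TILE block of texel values for the
    texture region whose top-left texel is (u0, v0)."""
    return [[(u0 + du) * 10 + (v0 + dv) for dv in range(TILE)]
            for du in range(TILE)]

def decode_texture(height, width):
    """Assemble a height x width texture from fixed-size decoder tiles,
    so the final resolution is not limited by the decoder output size."""
    assert height % TILE == 0 and width % TILE == 0
    tex = [[0] * width for _ in range(height)]
    for u0 in range(0, height, TILE):
        for v0 in range(0, width, TILE):
            tile = decode_tile(u0, v0)
            for du in range(TILE):
                for dv in range(TILE):
                    tex[u0 + du][v0 + dv] = tile[du][dv]
    return tex

tex = decode_texture(8, 8)  # a texture twice the decoder's native resolution
```

Because each tile is decoded independently by the same network, the achievable texture resolution scales with the number of tiles rather than with the decoder architecture.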
Related papers
- EASI-Tex: Edge-Aware Mesh Texturing from Single Image [12.942796503696194]
We present a novel approach for single-image mesh texturing, which employs a conditioned diffusion model to seamlessly transfer an object's texture to a given 3D mesh object.
We do not assume that the two objects belong to the same category, and even if they do, there can be discrepancies in their overall and part proportions.
arXiv Detail & Related papers (2024-05-27T17:46:22Z) - Indoor Scene Reconstruction with Fine-Grained Details Using Hybrid Representation and Normal Prior Enhancement [50.56517624931987]
The reconstruction of indoor scenes from multi-view RGB images is challenging due to the coexistence of flat and texture-less regions.
Recent methods leverage neural radiance fields aided by predicted surface normal priors to recover the scene geometry.
This work aims to reconstruct high-fidelity surfaces with fine-grained details by addressing the above limitations.
arXiv Detail & Related papers (2023-09-14T12:05:29Z) - Pyramid Texture Filtering [86.15126028139736]
We present a simple but effective technique to smooth out textures while preserving the prominent structures.
Our method is built upon a key observation -- the coarsest level in a Gaussian pyramid often naturally eliminates textures and summarizes the main image structures.
We show that our approach is effective to separate structure from texture of different scales, local contrasts, and forms, without degrading structures or introducing visual artifacts.
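The key observation above can be demonstrated with a minimal 1-D sketch (illustrative only, not the paper's method): repeatedly smoothing and downsampling a signal, as in a Gaussian pyramid, averages away fine texture while coarse structure such as a step edge survives at the coarsest level.

```python
# 1-D Gaussian-pyramid sketch: fine oscillating "texture" vanishes at coarse
# levels while the large-scale "structure" (a step edge) is preserved.

def downsample(signal):
    """One pyramid level: [1, 2, 1]/4 smoothing, then keep every other sample."""
    padded = [signal[0]] + signal + [signal[-1]]  # replicate-pad the borders
    smoothed = [(padded[i - 1] + 2 * padded[i] + padded[i + 1]) / 4
                for i in range(1, len(padded) - 1)]
    return smoothed[::2]

# Structure: a step edge. Texture: an alternating +/-1 fine-scale oscillation.
structure = [0.0] * 32 + [10.0] * 32
texture = [(-1.0) ** i for i in range(64)]
signal = [s + t for s, t in zip(structure, texture)]

level = signal
for _ in range(4):  # build 4 pyramid levels: 64 -> 32 -> 16 -> 8 -> 4 samples
    level = downsample(level)

# At the coarsest level the oscillating texture has been averaged out,
# but the step between ~0 and ~10 is still clearly visible.
```

The paper's contribution builds on this by transferring the structure recovered at coarse levels back to full resolution, but even this toy pyramid shows why the coarsest level "naturally eliminates textures."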
arXiv Detail & Related papers (2023-05-11T02:05:30Z) - NeMF: Inverse Volume Rendering with Neural Microflake Field [30.15831015284247]
In this paper, we propose to conduct inverse volume rendering, in contrast to surface-based rendering.
We adopt coordinate networks to implicitly encode the microflake volume, and develop a differentiable microflake volume to train the network in an end-to-end way.
arXiv Detail & Related papers (2023-04-03T08:12:18Z) - Delicate Textured Mesh Recovery from NeRF via Adaptive Surface Refinement [78.48648360358193]
We present a novel framework that generates textured surface meshes from images.
Our approach begins by efficiently initializing the geometry and view-dependency appearance with a NeRF.
We jointly refine the appearance with geometry and bake it into texture images for real-time rendering.
arXiv Detail & Related papers (2023-03-03T17:14:44Z) - Deep Rectangling for Image Stitching: A Learning Baseline [57.76737888499145]
We build the first image stitching rectangling dataset with a large diversity in irregular boundaries and scenes.
Experiments demonstrate our superiority over traditional methods both quantitatively and qualitatively.
arXiv Detail & Related papers (2022-03-08T03:34:10Z) - Spatially-Adaptive Image Restoration using Distortion-Guided Networks [51.89245800461537]
We present a learning-based solution for restoring images suffering from spatially-varying degradations.
We propose SPAIR, a network design that harnesses distortion-localization information and dynamically adjusts to difficult regions in the image.
arXiv Detail & Related papers (2021-08-19T11:02:25Z) - Z2P: Instant Rendering of Point Clouds [104.1186026323896]
We present a technique for rendering point clouds using a neural network.
Existing point rendering techniques either use splatting, or first reconstruct a surface mesh that can then be rendered.
arXiv Detail & Related papers (2021-05-30T13:58:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.