Gaussian Splatting in Mirrors: Reflection-Aware Rendering via Virtual Camera Optimization
- URL: http://arxiv.org/abs/2410.01614v1
- Date: Wed, 2 Oct 2024 14:53:24 GMT
- Title: Gaussian Splatting in Mirrors: Reflection-Aware Rendering via Virtual Camera Optimization
- Authors: Zihan Wang, Shuzhe Wang, Matias Turkulainen, Junyuan Fang, Juho Kannala,
- Abstract summary: 3D-GS often misinterprets reflections as virtual spaces, resulting in blurred and inconsistent multi-view rendering within mirrors.
Our paper presents a novel method aimed at obtaining high-quality multi-view consistent reflection rendering by modelling reflections as physically-based virtual cameras.
- Score: 14.324573496923792
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent advancements in 3D Gaussian Splatting (3D-GS) have revolutionized novel view synthesis, facilitating real-time, high-quality image rendering. However, in scenarios involving reflective surfaces, particularly mirrors, 3D-GS often misinterprets reflections as virtual spaces, resulting in blurred and inconsistent multi-view rendering within mirrors. Our paper presents a novel method aimed at obtaining high-quality multi-view consistent reflection rendering by modelling reflections as physically-based virtual cameras. We estimate mirror planes with depth and normal estimates from 3D-GS and define virtual cameras that are placed symmetrically about the mirror plane. These virtual cameras are then used to explain mirror reflections in the scene. To address imperfections in mirror plane estimates, we propose a straightforward yet effective virtual camera optimization method to enhance reflection quality. We collect a new mirror dataset including three real-world scenarios for more diverse evaluation. Experimental validation on both the Mirror-NeRF dataset and our real-world dataset demonstrates the efficacy of our approach. We achieve comparable or superior results while significantly reducing training time compared to previous state-of-the-art methods.
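The core geometric step described in the abstract, placing a virtual camera symmetric to the real one about the estimated mirror plane, amounts to composing the camera pose with a planar reflection. A minimal NumPy sketch of that idea follows; the function names and the plane parameterization n·x = d are illustrative assumptions, not the paper's actual code:

```python
import numpy as np

def reflect_across_plane(n, d):
    """4x4 homogeneous reflection across the plane {x : n . x = d}.

    n is the plane normal (normalized here), d the offset along n.
    For a point x, the reflection is x - 2 * (n . x - d) * n.
    """
    n = np.asarray(n, dtype=float)
    n = n / np.linalg.norm(n)
    M = np.eye(4)
    M[:3, :3] = np.eye(3) - 2.0 * np.outer(n, n)  # Householder part, det = -1
    M[:3, 3] = 2.0 * d * n
    return M

def virtual_camera_pose(cam_to_world, n, d):
    """Pose of the virtual camera that observes the scene through the mirror.

    Note the 3x3 block of the result has determinant -1 (a reflection);
    in practice renderers handle this by flipping the rendered image or
    negating one camera axis to restore a right-handed frame.
    """
    return reflect_across_plane(n, d) @ cam_to_world
```

For example, a camera 5 units in front of a mirror lying in the z = 0 plane gets a virtual counterpart 5 units behind it; the paper's virtual camera optimization would then refine this initial placement to compensate for noisy plane estimates.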
Related papers
- Reflecting Reality: Enabling Diffusion Models to Produce Faithful Mirror Reflections [26.02117310176884]
We tackle the problem of generating highly realistic and plausible mirror reflections using diffusion-based generative models.
To enable this, we create SynMirror, a large-scale dataset of diverse synthetic scenes with objects placed in front of mirrors.
We propose a novel depth-conditioned inpainting method called MirrorFusion, which generates high-quality geometrically consistent and photo-realistic mirror reflections.
arXiv Detail & Related papers (2024-09-23T02:59:07Z)
- NeRSP: Neural 3D Reconstruction for Reflective Objects with Sparse Polarized Images [62.752710734332894]
NeRSP is a Neural 3D reconstruction technique for Reflective surfaces with Sparse Polarized images.
We derive photometric and geometric cues from the polarimetric image formation model and multiview azimuth consistency.
We achieve the state-of-the-art surface reconstruction results with only 6 views as input.
arXiv Detail & Related papers (2024-06-11T09:53:18Z)
- MirrorGaussian: Reflecting 3D Gaussians for Reconstructing Mirror Reflections [58.003014868772254]
MirrorGaussian is the first method for mirror scene reconstruction with real-time rendering based on 3D Gaussian Splatting.
We introduce an intuitive dual-rendering strategy that enables differentiable rasterization of both the real-world 3D Gaussians and their mirrored counterparts.
Our approach significantly outperforms existing methods, achieving state-of-the-art results.
arXiv Detail & Related papers (2024-05-20T09:58:03Z)
- Mirror-3DGS: Incorporating Mirror Reflections into 3D Gaussian Splatting [27.361324194709155]
Mirror-3DGS is an innovative rendering framework devised to master the intricacies of mirror geometries and reflections.
By incorporating mirror attributes into the 3DGS, Mirror-3DGS crafts a mirrored viewpoint to observe from behind the mirror, enriching the realism of scene renderings.
arXiv Detail & Related papers (2024-04-01T15:16:33Z)
- UniSDF: Unifying Neural Representations for High-Fidelity 3D Reconstruction of Complex Scenes with Reflections [92.38975002642455]
We propose UniSDF, a general purpose 3D reconstruction method that can reconstruct large complex scenes with reflections.
Our method is able to robustly reconstruct complex large-scale scenes with fine details and reflective surfaces.
arXiv Detail & Related papers (2023-12-20T18:59:42Z)
- Revisiting Single Image Reflection Removal In the Wild [83.42368937164473]
This research focuses on the issue of single-image reflection removal (SIRR) in real-world conditions.
We devise an advanced reflection collection pipeline that is highly adaptable to a wide range of real-world reflection scenarios.
We develop a large-scale, high-quality reflection dataset named Reflection Removal in the Wild (RRW)
arXiv Detail & Related papers (2023-11-29T02:31:10Z)
- Mirror-Aware Neural Humans [21.0548144424571]
We develop a consumer-level 3D motion capture system that starts from off-the-shelf 2D poses by automatically calibrating the camera.
We empirically demonstrate the benefit of learning a body model and accounting for occlusion in challenging mirror scenes.
arXiv Detail & Related papers (2023-09-09T10:43:45Z)
- Mirror-NeRF: Learning Neural Radiance Fields for Mirrors with Whitted-Style Ray Tracing [33.852910220413655]
We present a novel neural rendering framework, named Mirror-NeRF, which is able to learn accurate geometry and reflection of the mirror.
Mirror-NeRF supports various scene manipulation applications with mirrors, such as adding new objects or mirrors into the scene and synthesizing the reflections of these new objects in mirrors.
arXiv Detail & Related papers (2023-08-07T03:48:07Z)
- Neural Reflectance Fields for Appearance Acquisition [61.542001266380375]
We present Neural Reflectance Fields, a novel deep scene representation that encodes volume density, normal and reflectance properties at any 3D point in a scene.
We combine this representation with a physically-based differentiable ray marching framework that can render images from a neural reflectance field under any viewpoint and light.
arXiv Detail & Related papers (2020-08-09T22:04:36Z)
- Deep 3D Capture: Geometry and Reflectance from Sparse Multi-View Images [59.906948203578544]
We introduce a novel learning-based method to reconstruct the high-quality geometry and complex, spatially-varying BRDF of an arbitrary object.
We first estimate per-view depth maps using a deep multi-view stereo network.
These depth maps are used to coarsely align the different views.
We propose a novel multi-view reflectance estimation network architecture.
arXiv Detail & Related papers (2020-03-27T21:28:54Z)
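The Deep 3D Capture pipeline above (per-view depth estimation, then coarse cross-view alignment) rests on backprojecting each view's depth map into world space. A minimal NumPy sketch under a standard pinhole camera model; the function name, the intrinsics matrix K, and the camera-to-world pose are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def backproject(depth, K, cam_to_world):
    """Lift an HxW depth map to world-space 3D points (pinhole model).

    depth: HxW array of per-pixel depths along the camera z-axis.
    K: 3x3 intrinsics matrix; cam_to_world: 4x4 camera pose.
    Returns an (H*W)x3 array; points from several views lifted this way
    can be used to coarsely align the views before refinement.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3)
    rays = pix @ np.linalg.inv(K).T          # camera-space rays, z = 1
    pts_cam = rays * depth.reshape(-1, 1)    # scale rays by depth
    pts_h = np.concatenate([pts_cam, np.ones((pts_cam.shape[0], 1))], axis=1)
    return (pts_h @ cam_to_world.T)[:, :3]   # transform to world frame
```

With the principal-point pixel at depth 2 and an identity pose, for instance, this yields the world point (0, 0, 2), directly ahead of the camera.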
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.