Mirror-3DGS: Incorporating Mirror Reflections into 3D Gaussian Splatting
- URL: http://arxiv.org/abs/2404.01168v1
- Date: Mon, 1 Apr 2024 15:16:33 GMT
- Title: Mirror-3DGS: Incorporating Mirror Reflections into 3D Gaussian Splatting
- Authors: Jiarui Meng, Haijie Li, Yanmin Wu, Qiankun Gao, Shuzhou Yang, Jian Zhang, Siwei Ma
- Abstract summary: Mirror-3DGS is an innovative rendering framework devised to master the intricacies of mirror geometries and reflections.
By incorporating mirror attributes into the 3DGS, Mirror-3DGS crafts a mirrored viewpoint to observe from behind the mirror, enriching the realism of scene renderings.
- Score: 27.361324194709155
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: 3D Gaussian Splatting (3DGS) has marked a significant breakthrough in the realm of 3D scene reconstruction and novel view synthesis. However, 3DGS, much like its predecessor Neural Radiance Fields (NeRF), struggles to accurately model physical reflections, particularly in mirrors that are ubiquitous in real-world scenes. Due to this oversight, reflections are mistakenly perceived as separate, physically existing entities, resulting in inaccurate reconstructions and inconsistent reflective properties across varied viewpoints. To address this pivotal challenge, we introduce Mirror-3DGS, an innovative rendering framework devised to master the intricacies of mirror geometries and reflections, paving the way for the generation of realistically depicted mirror reflections. By ingeniously incorporating mirror attributes into the 3DGS and leveraging the principle of plane mirror imaging, Mirror-3DGS crafts a mirrored viewpoint to observe from behind the mirror, enriching the realism of scene renderings. Extensive assessments, spanning both synthetic and real-world scenes, showcase our method's ability to render novel views with enhanced fidelity in real-time, surpassing the state-of-the-art Mirror-NeRF specifically within the challenging mirror regions. Our code will be made publicly available for reproducible research.
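The plane-mirror-imaging principle invoked in the abstract — observing the scene from a virtual viewpoint mirrored behind the mirror plane — amounts to reflecting the camera pose across that plane with a Householder transform. The sketch below is illustrative only, not the paper's implementation; the plane parameters `n`, `d` and the helper names are assumptions for this example.

```python
import numpy as np

def reflect_point(x, n, d):
    """Reflect a 3D point across the plane n·x + d = 0 (n is a unit normal)."""
    return x - 2.0 * (np.dot(n, x) + d) * n

def mirror_camera(c2w, n, d):
    """Reflect a 4x4 camera-to-world pose across the mirror plane n·x + d = 0.

    The rotation part of the result has determinant -1 (a handedness flip),
    so in practice a renderer also flips one image axis when using this pose.
    """
    M = np.eye(4)
    M[:3, :3] = np.eye(3) - 2.0 * np.outer(n, n)  # Householder reflection
    M[:3, 3] = -2.0 * d * n                       # offset for planes not through the origin
    return M @ c2w
```

For a camera at (0, 0, 5) and a mirror in the z = 0 plane, the mirrored viewpoint sits at (0, 0, -5), matching the familiar "image behind the mirror" of plane-mirror optics.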
Related papers
- Gaussian Splatting in Mirrors: Reflection-Aware Rendering via Virtual Camera Optimization [14.324573496923792]
3D-GS often misinterprets reflections as virtual spaces, resulting in blurred and inconsistent multi-view rendering within mirrors.
Our paper presents a novel method aimed at obtaining high-quality multi-view consistent reflection rendering by modelling reflections as physically-based virtual cameras.
arXiv Detail & Related papers (2024-10-02T14:53:24Z)
- NeRSP: Neural 3D Reconstruction for Reflective Objects with Sparse Polarized Images [62.752710734332894]
NeRSP is a Neural 3D reconstruction technique for Reflective surfaces with Sparse Polarized images.
We derive photometric and geometric cues from the polarimetric image formation model and multiview azimuth consistency.
We achieve the state-of-the-art surface reconstruction results with only 6 views as input.
arXiv Detail & Related papers (2024-06-11T09:53:18Z)
- RefGaussian: Disentangling Reflections from 3D Gaussian Splatting for Realistic Rendering [18.427759763663047]
We propose RefGaussian to disentangle reflections from 3D-GS for realistically modeling reflections.
We employ local regularization techniques to ensure local smoothness for both the transmitted and reflected components.
Our approach achieves superior novel view synthesis and accurate depth estimation outcomes.
arXiv Detail & Related papers (2024-06-09T16:49:39Z)
- MirrorGaussian: Reflecting 3D Gaussians for Reconstructing Mirror Reflections [58.003014868772254]
MirrorGaussian is the first method for mirror scene reconstruction with real-time rendering based on 3D Gaussian Splatting.
We introduce an intuitive dual-rendering strategy that enables differentiable rendering of both the real-world 3D Gaussians and their mirrored counterparts.
Our approach significantly outperforms existing methods, achieving state-of-the-art results.
arXiv Detail & Related papers (2024-05-20T09:58:03Z)
- UniSDF: Unifying Neural Representations for High-Fidelity 3D Reconstruction of Complex Scenes with Reflections [92.38975002642455]
We propose UniSDF, a general purpose 3D reconstruction method that can reconstruct large complex scenes with reflections.
Our method is able to robustly reconstruct complex large-scale scenes with fine details and reflective surfaces.
arXiv Detail & Related papers (2023-12-20T18:59:42Z)
- Revisiting Single Image Reflection Removal In the Wild [83.42368937164473]
This research focuses on the issue of single-image reflection removal (SIRR) in real-world conditions.
We devise an advanced reflection collection pipeline that is highly adaptable to a wide range of real-world reflection scenarios.
We develop a large-scale, high-quality reflection dataset named Reflection Removal in the Wild (RRW).
arXiv Detail & Related papers (2023-11-29T02:31:10Z)
- Mirror-Aware Neural Humans [21.0548144424571]
We develop a consumer-level 3D motion capture system that starts from off-the-shelf 2D poses by automatically calibrating the camera.
We empirically demonstrate the benefit of learning a body model and accounting for occlusion in challenging mirror scenes.
arXiv Detail & Related papers (2023-09-09T10:43:45Z)
- Mirror-NeRF: Learning Neural Radiance Fields for Mirrors with Whitted-Style Ray Tracing [33.852910220413655]
We present a novel neural rendering framework, named Mirror-NeRF, which is able to learn accurate geometry and reflection of the mirror.
Mirror-NeRF supports various scene manipulation applications with mirrors, such as adding new objects or mirrors into the scene and synthesizing the reflections of these new objects in mirrors.
arXiv Detail & Related papers (2023-08-07T03:48:07Z)
- Neural Reflectance Fields for Appearance Acquisition [61.542001266380375]
We present Neural Reflectance Fields, a novel deep scene representation that encodes volume density, normal and reflectance properties at any 3D point in a scene.
We combine this representation with a physically-based differentiable ray marching framework that can render images from a neural reflectance field under any viewpoint and light.
arXiv Detail & Related papers (2020-08-09T22:04:36Z)
- Deep 3D Capture: Geometry and Reflectance from Sparse Multi-View Images [59.906948203578544]
We introduce a novel learning-based method to reconstruct the high-quality geometry and complex, spatially-varying BRDF of an arbitrary object.
We first estimate per-view depth maps using a deep multi-view stereo network.
These depth maps are used to coarsely align the different views.
We propose a novel multi-view reflectance estimation network architecture.
arXiv Detail & Related papers (2020-03-27T21:28:54Z)
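Several entries above (Mirror-3DGS, MirrorGaussian) rely on reflecting scene content across a known mirror plane; MirrorGaussian's dual-rendering strategy in particular pairs each real Gaussian with a mirrored counterpart. A minimal sketch of that mirroring step follows, assuming a plane n·x + d = 0 with unit normal; the function name and array layout are illustrative, not the authors' code.

```python
import numpy as np

def mirror_gaussians(means, covs, n, d):
    """Reflect 3D Gaussians across the mirror plane n·x + d = 0.

    means: (N, 3) Gaussian centers; covs: (N, 3, 3) covariances.
    Each center x maps to Hx - 2dn, and each covariance Σ to HΣHᵀ,
    where H = I - 2nnᵀ is the Householder reflection for the plane.
    """
    H = np.eye(3) - 2.0 * np.outer(n, n)
    mirrored_means = means @ H.T - 2.0 * d * n
    mirrored_covs = H @ covs @ H.T  # broadcast over the N Gaussians
    return mirrored_means, mirrored_covs
```

Rendering the real Gaussians and these mirrored copies together, composited inside the estimated mirror region, is one way the dual-rendering idea can be realized.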
This list is automatically generated from the titles and abstracts of the papers in this site.