Reflecting Reality: Enabling Diffusion Models to Produce Faithful Mirror Reflections
- URL: http://arxiv.org/abs/2409.14677v1
- Date: Mon, 23 Sep 2024 02:59:07 GMT
- Title: Reflecting Reality: Enabling Diffusion Models to Produce Faithful Mirror Reflections
- Authors: Ankit Dhiman, Manan Shah, Rishubh Parihar, Yash Bhalgat, Lokesh R Boregowda, R Venkatesh Babu
- Abstract summary: We tackle the problem of generating highly realistic and plausible mirror reflections using diffusion-based generative models.
To enable this, we create SynMirror, a large-scale dataset of diverse synthetic scenes with objects placed in front of mirrors.
We propose a novel depth-conditioned inpainting method called MirrorFusion, which generates high-quality geometrically consistent and photo-realistic mirror reflections.
- Score: 26.02117310176884
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We tackle the problem of generating highly realistic and plausible mirror reflections using diffusion-based generative models. We formulate this problem as an image inpainting task, allowing for more user control over the placement of mirrors during the generation process. To enable this, we create SynMirror, a large-scale dataset of diverse synthetic scenes with objects placed in front of mirrors. SynMirror contains around 198K samples rendered from 66K unique 3D objects, along with their associated depth maps, normal maps and instance-wise segmentation masks, to capture relevant geometric properties of the scene. Using this dataset, we propose a novel depth-conditioned inpainting method called MirrorFusion, which generates high-quality geometrically consistent and photo-realistic mirror reflections given an input image and a mask depicting the mirror region. MirrorFusion outperforms state-of-the-art methods on SynMirror, as demonstrated by extensive quantitative and qualitative analysis. To the best of our knowledge, we are the first to successfully tackle the challenging problem of generating controlled and faithful mirror reflections of an object in a scene using diffusion based models. SynMirror and MirrorFusion open up new avenues for image editing and augmented reality applications for practitioners and researchers alike.
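The abstract frames MirrorFusion as a depth-conditioned inpainting model that takes an input image, a mask marking the mirror region, and scene depth as conditioning. The exact architecture is not reproduced on this page, so the snippet below is only a minimal sketch of how such conditioning signals might be stacked for a latent-diffusion inpainting UNet; the function name, channel counts, and resolutions are illustrative assumptions rather than the authors' implementation.

```python
# Illustrative sketch only: not the MirrorFusion code. It shows one plausible way
# a depth-conditioned inpainting UNet could be fed in a latent-diffusion setup;
# all names, shapes, and channel counts are hypothetical.
import torch
import torch.nn.functional as F

def build_inpainting_input(noisy_latent, masked_image_latent, mirror_mask, depth_map):
    """Concatenate conditioning signals along the channel dimension.

    noisy_latent        : (B, 4, h, w) latent being denoised at the current timestep
    masked_image_latent : (B, 4, h, w) VAE encoding of the image with the mirror region masked
    mirror_mask         : (B, 1, H, W) binary mask marking the mirror region
    depth_map           : (B, 1, H, W) scene depth, normalized to [0, 1]
    """
    h, w = noisy_latent.shape[-2:]
    # Resize pixel-space conditions to the latent resolution.
    mask_lr = F.interpolate(mirror_mask, size=(h, w), mode="nearest")
    depth_lr = F.interpolate(depth_map, size=(h, w), mode="bilinear", align_corners=False)
    # Channel-wise concatenation: 4 + 4 + 1 + 1 = 10 input channels for the UNet.
    return torch.cat([noisy_latent, masked_image_latent, mask_lr, depth_lr], dim=1)

# Example shapes: batch of 1, 512x512 image, 64x64 latent grid.
x = build_inpainting_input(torch.randn(1, 4, 64, 64), torch.randn(1, 4, 64, 64),
                           torch.ones(1, 1, 512, 512), torch.rand(1, 1, 512, 512))
print(x.shape)  # torch.Size([1, 10, 64, 64])
```

Concatenating the mask and depth at the latent resolution is one common way to condition inpainting diffusion models; the paper may realize depth conditioning differently.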
Related papers
- Gaussian Splatting in Mirrors: Reflection-Aware Rendering via Virtual Camera Optimization [14.324573496923792]
3D-GS often misinterprets reflections as virtual spaces, resulting in blurred and inconsistent multi-view rendering within mirrors.
Our paper presents a novel method aimed at obtaining high-quality multi-view consistent reflection rendering by modelling reflections as physically-based virtual cameras.
arXiv Detail & Related papers (2024-10-02T14:53:24Z)
- Multi-times Monte Carlo Rendering for Inter-reflection Reconstruction [51.911195773164245]
Inverse rendering methods have achieved remarkable performance in reconstructing high-fidelity 3D objects with disentangled geometries, materials, and environmental light.
We propose Ref-MC2, which introduces multi-time Monte Carlo sampling to comprehensively compute the environmental illumination.
We also show downstream applications, e.g., relighting and material editing, to illustrate the disentanglement ability of our method.
arXiv Detail & Related papers (2024-07-08T09:27:34Z)
- NeRSP: Neural 3D Reconstruction for Reflective Objects with Sparse Polarized Images [62.752710734332894]
NeRSP is a Neural 3D reconstruction technique for Reflective surfaces with Sparse Polarized images.
We derive photometric and geometric cues from the polarimetric image formation model and multiview azimuth consistency.
We achieve the state-of-the-art surface reconstruction results with only 6 views as input.
arXiv Detail & Related papers (2024-06-11T09:53:18Z)
- MirrorGaussian: Reflecting 3D Gaussians for Reconstructing Mirror Reflections [58.003014868772254]
MirrorGaussian is the first method for mirror scene reconstruction with real-time rendering based on 3D Gaussian Splatting.
We introduce an intuitive dual-rendering strategy that enables differentiable rasterization of both the real-world 3D Gaussians and their mirrored counterpart; a minimal sketch of the underlying plane-reflection step is given after this list.
Our approach significantly outperforms existing methods, achieving state-of-the-art results.
arXiv Detail & Related papers (2024-05-20T09:58:03Z)
- Mirror-3DGS: Incorporating Mirror Reflections into 3D Gaussian Splatting [27.361324194709155]
Mirror-3DGS is an innovative rendering framework devised to master the intricacies of mirror geometries and reflections.
By incorporating mirror attributes into 3DGS, Mirror-3DGS crafts a mirrored viewpoint that observes the scene from behind the mirror, enriching the realism of scene renderings.
arXiv Detail & Related papers (2024-04-01T15:16:33Z)
- Mirror-NeRF: Learning Neural Radiance Fields for Mirrors with Whitted-Style Ray Tracing [33.852910220413655]
We present a novel neural rendering framework, named Mirror-NeRF, which is able to learn accurate geometry and reflection of the mirror.
Mirror-NeRF supports various scene manipulation applications with mirrors, such as adding new objects or mirrors into the scene and synthesizing the reflections of these new objects in mirrors.
arXiv Detail & Related papers (2023-08-07T03:48:07Z)
- Relightify: Relightable 3D Faces from a Single Image via Diffusion Models [86.3927548091627]
We present the first approach to use diffusion models as a prior for highly accurate 3D facial BRDF reconstruction from a single image.
In contrast to existing methods, we directly acquire the observed texture from the input image, resulting in a more faithful and consistent estimation.
arXiv Detail & Related papers (2023-05-10T11:57:49Z)
- Symmetry-Aware Transformer-based Mirror Detection [85.47570468668955]
We propose a dual-path Symmetry-Aware Transformer-based mirror detection Network (SATNet).
SATNet includes two novel modules: a Symmetry-Aware Attention Module (SAAM) and a Contrast and Fusion Decoder Module (CFDM).
Experimental results show that SATNet outperforms both RGB and RGB-D mirror detection methods on all available mirror detection datasets.
arXiv Detail & Related papers (2022-07-13T16:40:01Z)
- Deep 3D Capture: Geometry and Reflectance from Sparse Multi-View Images [59.906948203578544]
We introduce a novel learning-based method to reconstruct the high-quality geometry and complex, spatially-varying BRDF of an arbitrary object.
We first estimate per-view depth maps using a deep multi-view stereo network.
These depth maps are used to coarsely align the different views.
We propose a novel multi-view reflectance estimation network architecture.
arXiv Detail & Related papers (2020-03-27T21:28:54Z)
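Several of the entries above (MirrorGaussian, Mirror-3DGS, Mirror-NeRF) hinge on reflecting scene content or viewing rays about the mirror plane. As a quick reference for that geometric step, here is a minimal sketch of reflecting 3D points and directions across a plane with unit normal n through a point c; this is standard mirror geometry, not code from any of the listed papers, and the variable names are ours.

```python
# Minimal sketch of mirror-plane reflection, as used conceptually by several
# papers above; names and interfaces are illustrative, not from any released code.
import numpy as np

def reflect_points(points, plane_point, plane_normal):
    """Reflect 3D points across a mirror plane.

    points       : (N, 3) positions (e.g., Gaussian centers)
    plane_point  : (3,) any point on the mirror plane
    plane_normal : (3,) normal of the mirror plane
    """
    n = plane_normal / np.linalg.norm(plane_normal)
    d = (points - plane_point) @ n          # signed distance to the plane, shape (N,)
    return points - 2.0 * d[:, None] * n    # move each point twice that far back

def reflect_directions(directions, plane_normal):
    """Reflect ray/view directions about the mirror normal (Whitted-style)."""
    n = plane_normal / np.linalg.norm(plane_normal)
    return directions - 2.0 * (directions @ n)[:, None] * n

# Example: a mirror in the x = 0 plane flips the x coordinate.
pts = np.array([[1.0, 2.0, 3.0]])
print(reflect_points(pts, np.zeros(3), np.array([1.0, 0.0, 0.0])))  # [[-1.  2.  3.]]
```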
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences arising from its use.