Refracting Reality: Generating Images with Realistic Transparent Objects
- URL: http://arxiv.org/abs/2511.17340v1
- Date: Fri, 21 Nov 2025 16:02:44 GMT
- Title: Refracting Reality: Generating Images with Realistic Transparent Objects
- Authors: Yue Yin, Enze Tao, Dylan Campbell
- Abstract summary: We consider the problem of generating images with accurate refraction, given a text prompt. We synchronize the pixels within the object's boundary with those outside by warping and merging them. For surfaces that are not directly observed in the image, but are visible via refraction or reflection, we recover their appearance by synchronizing the image with a second generated image.
- Score: 21.254951751906383
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generative image models can produce convincingly real images, with plausible shapes, textures, layouts and lighting. However, one domain in which they perform notably poorly is in the synthesis of transparent objects, which exhibit refraction, reflection, absorption and scattering. Refraction is a particular challenge, because refracted pixel rays often intersect with surfaces observed in other parts of the image, providing a constraint on the color. It is clear from inspection that generative models have not distilled the laws of optics sufficiently well to accurately render refractive objects. In this work, we consider the problem of generating images with accurate refraction, given a text prompt. We synchronize the pixels within the object's boundary with those outside by warping and merging the pixels using Snell's Law of Refraction, at each step of the generation trajectory. For those surfaces that are not directly observed in the image, but are visible via refraction or reflection, we recover their appearance by synchronizing the image with a second generated image -- a panorama centered at the object -- using the same warping and merging procedure. We demonstrate that our approach generates much more optically-plausible images that respect the physical constraints.
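The physical constraint the abstract relies on, warping pixel rays according to Snell's Law of Refraction, can be sketched in vector form. This is a generic illustration of the law itself, not the authors' warping-and-merging implementation; the function name and the convention that the normal points toward the incident side are assumptions.

```python
import math

def refract(d, n, eta):
    """Refract a unit direction d at a surface with unit normal n
    (pointing toward the incident side), where eta = n1 / n2 is the
    ratio of refractive indices. Returns the refracted unit direction,
    or None on total internal reflection. Vector form of Snell's law."""
    cos_i = -sum(di * ni for di, ni in zip(d, n))      # cosine of incidence angle
    sin2_t = eta * eta * (1.0 - cos_i * cos_i)         # sin^2 of transmission angle
    if sin2_t > 1.0:
        return None                                    # total internal reflection
    cos_t = math.sqrt(1.0 - sin2_t)
    return tuple(eta * di + (eta * cos_i - cos_t) * ni for di, ni in zip(d, n))

# A ray hitting the surface head-on passes through undeviated:
straight = refract((0.0, 0.0, -1.0), (0.0, 0.0, 1.0), 1.0 / 1.5)
```

Tracing each refracted ray with a routine like this until it exits the object yields the correspondence used to warp exterior pixels into the object's boundary.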
Related papers
- DiffTrans: Differentiable Geometry-Materials Decomposition for Reconstructing Transparent Objects [53.83670041249326]
Reconstructing transparent objects from a set of multi-view images is a challenging task due to the complicated nature and indeterminate behavior of light propagation. We propose a differentiable rendering framework for transparent objects, dubbed DiffTrans, which allows for efficient decomposition and reconstruction of the geometry and materials of transparent objects.
arXiv Detail & Related papers (2026-02-28T02:21:31Z) - UnReflectAnything: RGB-Only Highlight Removal by Rendering Synthetic Specular Supervision [51.72020507506023]
We present UnReflectAnything, an RGB-only framework that removes highlights from a single image. It predicts a highlight map together with a reflection-free diffuse reconstruction. It generalizes across natural and surgical domains where non-Lambertian surfaces and non-uniform lighting create severe highlights.
arXiv Detail & Related papers (2025-12-10T12:22:37Z) - LaRender: Training-Free Occlusion Control in Image Generation via Latent Rendering [10.476519949850118]
We propose a novel training-free image generation algorithm that precisely controls the occlusion relationships between objects in an image. We demonstrate that our method can achieve a variety of effects, such as altering the transparency of objects, the density of mass, and the intensity of light.
arXiv Detail & Related papers (2025-08-11T05:57:59Z) - Photorealistic Object Insertion with Diffusion-Guided Inverse Rendering [56.68286440268329]
Correct insertion of virtual objects in images of real-world scenes requires a deep understanding of the scene's lighting, geometry and materials.
We propose using a personalized large diffusion model as guidance to a physically based inverse rendering process.
Our method recovers scene lighting and tone-mapping parameters, allowing the photorealistic composition of arbitrary virtual objects in single frames or videos of indoor or outdoor scenes.
arXiv Detail & Related papers (2024-08-19T05:15:45Z) - Curved Diffusion: A Generative Model With Optical Geometry Control [56.24220665691974]
The influence of different optical systems on the final scene appearance is frequently overlooked.
This study introduces a framework that intimately integrates a text-to-image diffusion model with the particular lens used in image rendering.
arXiv Detail & Related papers (2023-11-29T13:06:48Z) - Towards Monocular Shape from Refraction [23.60349429048409]
We show that a simple energy function based on Snell's law enables the reconstruction of an arbitrary refractive surface geometry.
We show that solving for an entire surface at once introduces implicit parameter-free spatial regularization.
arXiv Detail & Related papers (2023-05-31T11:09:37Z) - Relightify: Relightable 3D Faces from a Single Image via Diffusion Models [86.3927548091627]
We present the first approach to use diffusion models as a prior for highly accurate 3D facial BRDF reconstruction from a single image.
In contrast to existing methods, we directly acquire the observed texture from the input image, thus, resulting in more faithful and consistent estimation.
arXiv Detail & Related papers (2023-05-10T11:57:49Z) - Seeing Through the Glass: Neural 3D Reconstruction of Object Inside a Transparent Container [61.50401406132946]
Transparent enclosures pose challenges of multiple light reflections and refractions at the interface between different propagation media.
We use an existing neural reconstruction method (NeuS) that implicitly represents the geometry and appearance of the inner subspace.
In order to account for complex light interactions, we develop a hybrid rendering strategy that combines volume rendering with ray tracing.
arXiv Detail & Related papers (2023-03-24T04:58:27Z) - DIB-R++: Learning to Predict Lighting and Material with a Hybrid Differentiable Renderer [78.91753256634453]
We consider the challenging problem of predicting intrinsic object properties from a single image by exploiting differentiable renderers.
In this work, we propose DIB-R++, a hybrid differentiable renderer which supports these effects by combining rasterization and ray-tracing.
Compared to more advanced physics-based differentiable renderers, DIB-R++ is highly performant due to its compact and expressive model.
arXiv Detail & Related papers (2021-10-30T01:59:39Z) - Dense Reconstruction of Transparent Objects by Altering Incident Light Paths Through Refraction [40.696591594772876]
We introduce a fixed viewpoint approach to dense surface reconstruction of transparent objects based on refraction of light.
We present a setup that allows us to alter the incident light paths before light rays enter the object by immersing the object partially in a liquid.
arXiv Detail & Related papers (2021-05-20T19:01:12Z) - Refractive Light-Field Features for Curved Transparent Objects in Structure from Motion [10.380414189465345]
We propose a novel image feature for light fields that detects and describes the patterns of light refracted through curved transparent objects.
We demonstrate improved structure-from-motion performance in challenging scenes containing refractive objects.
Our method is a critical step towards allowing robots to operate around refractive objects.
arXiv Detail & Related papers (2021-03-29T05:55:32Z) - Shape, Illumination, and Reflectance from Shading [86.71603503678216]
A fundamental problem in computer vision is that of inferring the intrinsic, 3D structure of the world from flat, 2D images.
We find that certain explanations are more likely than others: surfaces tend to be smooth, paint tends to be uniform, and illumination tends to be natural.
Our technique can be viewed as a superset of several classic computer vision problems.
arXiv Detail & Related papers (2020-10-07T18:14:41Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.