MatSwap: Light-aware material transfers in images
- URL: http://arxiv.org/abs/2502.07784v1
- Date: Tue, 11 Feb 2025 18:59:59 GMT
- Title: MatSwap: Light-aware material transfers in images
- Authors: Ivan Lopes, Valentin Deschaintre, Yannick Hold-Geoffroy, Raoul de Charette
- Abstract summary: MatSwap is a method to transfer materials to designated surfaces in an image photorealistically.
We learn the relationship between the input material and its appearance within the scene, without the need for explicit UV mapping.
Our method seamlessly integrates a desired material into the target location in the photograph while retaining the identity of the scene.
- Score: 18.37330769828654
- License:
- Abstract: We present MatSwap, a method to transfer materials to designated surfaces in an image photorealistically. Such a task is non-trivial due to the large entanglement of material appearance, geometry, and lighting in a photograph. In the literature, material editing methods typically rely on either cumbersome text engineering or extensive manual annotations requiring artist knowledge and 3D scene properties that are impractical to obtain. In contrast, we propose to directly learn the relationship between the input material -- as observed on a flat surface -- and its appearance within the scene, without the need for explicit UV mapping. To achieve this, we rely on a custom light- and geometry-aware diffusion model. We fine-tune a large-scale pre-trained text-to-image model for material transfer using our synthetic dataset, preserving its strong priors to ensure effective generalization to real images. As a result, our method seamlessly integrates a desired material into the target location in the photograph while retaining the identity of the scene. We evaluate our method on synthetic and real images and show that it compares favorably to recent work both qualitatively and quantitatively. We will release our code and data upon publication.
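The abstract describes conditioning a diffusion model on a flat material exemplar together with geometry and lighting information. As a rough, hypothetical illustration of that idea (not the authors' released code), the PyTorch sketch below concatenates an encoded exemplar with normal, irradiance, and mask buffers before denoising; every module name and channel count here is an assumption.

```python
# Hypothetical sketch of the conditioning scheme described in the abstract:
# a denoising network that sees, alongside the noisy latent, an encoding of
# the flat material exemplar plus per-pixel geometry (normals) and lighting
# (irradiance) buffers and a mask marking the target surface. Names and
# channel counts are illustrative, not MatSwap's actual architecture.
import torch
import torch.nn as nn

class MaterialEncoder(nn.Module):
    """Encodes the flat material exemplar into a spatial feature map."""
    def __init__(self, out_ch: int = 8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.SiLU(),
            nn.Conv2d(32, out_ch, 3, stride=2, padding=1),
        )

    def forward(self, exemplar):  # (B, 3, H, W) -> (B, out_ch, H/4, W/4)
        return self.net(exemplar)

class LightGeometryAwareDenoiser(nn.Module):
    """Toy stand-in for the fine-tuned text-to-image UNet: all conditions
    are concatenated channel-wise with the noisy latent."""
    def __init__(self, latent_ch=4, cond_ch=8 + 3 + 3 + 1):
        super().__init__()
        self.blocks = nn.Sequential(
            nn.Conv2d(latent_ch + cond_ch, 64, 3, padding=1), nn.SiLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.SiLU(),
            nn.Conv2d(64, latent_ch, 3, padding=1),
        )

    def forward(self, noisy_latent, material_feat, normals, irradiance, mask):
        cond = torch.cat([material_feat, normals, irradiance, mask], dim=1)
        return self.blocks(torch.cat([noisy_latent, cond], dim=1))

if __name__ == "__main__":
    B, H, W = 1, 64, 64
    enc = MaterialEncoder()
    denoiser = LightGeometryAwareDenoiser()
    exemplar = torch.rand(B, 3, H * 4, W * 4)   # flat material photo
    latent = torch.randn(B, 4, H, W)            # noisy image latent
    normals = torch.rand(B, 3, H, W)            # geometry buffer
    irradiance = torch.rand(B, 3, H, W)         # lighting buffer
    mask = torch.ones(B, 1, H, W)               # target-surface mask
    eps = denoiser(latent, enc(exemplar), normals, irradiance, mask)
    print(eps.shape)  # torch.Size([1, 4, 64, 64])
```

Concatenating buffers channel-wise is only one plausible way to inject geometry and lighting; the paper itself does not spell out the mechanism at this level of detail.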
Related papers
- Materialist: Physically Based Editing Using Single-Image Inverse Rendering [50.39048790589746]
We present a method combining a learning-based approach with progressive differentiable rendering.
Our method achieves more realistic light material interactions, accurate shadows, and global illumination.
We also propose a method for material transparency editing that operates effectively without requiring full scene geometry.
arXiv Detail & Related papers (2025-01-07T11:52:01Z)
- MaPa: Text-driven Photorealistic Material Painting for 3D Shapes [80.66880375862628]
This paper aims to generate materials for 3D meshes from text descriptions.
Unlike existing methods that synthesize texture maps, we propose to generate segment-wise procedural material graphs.
Our framework supports high-quality rendering and provides substantial flexibility in editing.
arXiv Detail & Related papers (2024-04-26T17:54:38Z)
- IntrinsicAnything: Learning Diffusion Priors for Inverse Rendering Under Unknown Illumination [37.96484120807323]
This paper aims to recover object materials from posed images captured under an unknown static lighting condition.
We learn the material prior with a generative model for regularizing the optimization process.
Experiments on real-world and synthetic datasets demonstrate that our approach achieves state-of-the-art performance on material recovery.
arXiv Detail & Related papers (2024-04-17T17:45:08Z)
- ZeST: Zero-Shot Material Transfer from a Single Image [59.714441587735614]
ZeST is a method for zero-shot material transfer to an object in the input image given a material exemplar image.
We show the application of ZeST to perform multiple edits and robust material assignment under different illuminations.
arXiv Detail & Related papers (2024-04-09T16:15:03Z)
- Intrinsic Image Diffusion for Indoor Single-view Material Estimation [55.276815106443976]
We present Intrinsic Image Diffusion, a generative model for appearance decomposition of indoor scenes.
Given a single input view, we sample multiple possible material explanations represented as albedo, roughness, and metallic maps; a toy sampling sketch follows this entry.
Our method produces significantly sharper, more consistent, and more detailed materials, outperforming state-of-the-art methods by 1.5 dB in PSNR and achieving a 45% better FID score on albedo prediction.
arXiv Detail & Related papers (2023-12-19T15:56:19Z)
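As the Intrinsic Image Diffusion summary above notes, the model draws several plausible material explanations per view. Below is a minimal toy sketch of that sampling loop, assuming a placeholder denoiser and a crude deterministic update rather than the paper's actual diffusion sampler.

```python
# Hypothetical sketch of drawing multiple material explanations for one view:
# K independent reverse-diffusion samples, each split into albedo (3 ch),
# roughness (1 ch) and metallic (1 ch) maps. The denoiser and the update
# rule are placeholders, not the paper's model.
import torch
import torch.nn as nn

class ToyMaterialDenoiser(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(5 + 3, 32, 3, padding=1), nn.SiLU(),
            nn.Conv2d(32, 5, 3, padding=1),
        )

    def forward(self, x_t, view):  # condition on the input view by concat
        return self.net(torch.cat([x_t, view], dim=1))

@torch.no_grad()
def sample_materials(denoiser, view, k=4, steps=50):
    samples = []
    for _ in range(k):                       # K independent explanations
        x = torch.randn(1, 5, *view.shape[-2:])
        for _ in range(steps):               # crude deterministic update
            x = x - denoiser(x, view) / steps
        albedo, rough, metal = x[:, :3], x[:, 3:4], x[:, 4:5]
        samples.append((albedo, rough, metal))
    return samples

view = torch.rand(1, 3, 64, 64)
maps = sample_materials(ToyMaterialDenoiser(), view)
print(len(maps), maps[0][0].shape)  # 4 torch.Size([1, 3, 64, 64])
```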
- Alchemist: Parametric Control of Material Properties with Diffusion Models [51.63031820280475]
Our method capitalizes on the generative prior of text-to-image models known for photorealism.
We show the potential application of our model to material-edited NeRFs.
arXiv Detail & Related papers (2023-12-05T18:58:26Z)
- Material Palette: Extraction of Materials from a Single Image [19.410479434979493]
We propose a method to extract physically-based rendering (PBR) materials from a single real-world image.
First, we map regions of the image to material concepts using a diffusion model, which allows sampling texture images resembling each material in the scene.
Second, we use a separate network to decompose the generated textures into spatially varying BRDFs (SVBRDFs); a schematic sketch follows this entry.
arXiv Detail & Related papers (2023-11-28T18:59:58Z)
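The Material Palette summary above describes a two-stage pipeline. Below is a schematic PyTorch sketch under stated assumptions: both networks are untrained placeholders standing in for the diffusion texture sampler and the SVBRDF decomposition network.

```python
# Schematic of the two-stage pipeline the summary describes:
# (1) sample a flat texture image for a material region,
# (2) decompose it into SVBRDF maps. Both modules are toy stand-ins.
import torch
import torch.nn as nn

class ToyTextureSampler(nn.Module):
    """Stage 1 stand-in: turns a region crop into a flat texture sample."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.SiLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, region_crop):
        return self.net(region_crop)

class ToySVBRDFDecomposer(nn.Module):
    """Stage 2 stand-in: texture -> albedo (3), normals (3), roughness (1)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Conv2d(3, 7, 3, padding=1)

    def forward(self, texture):
        maps = self.net(texture)
        return maps[:, :3], maps[:, 3:6], maps[:, 6:7]

crop = torch.rand(1, 3, 64, 64)               # region showing one material
texture = ToyTextureSampler()(crop)           # stage 1: texture sample
albedo, normals, rough = ToySVBRDFDecomposer()(texture)  # stage 2
print(albedo.shape, normals.shape, rough.shape)
```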
- Materialistic: Selecting Similar Materials in Images [30.85562156542794]
We present a method capable of selecting the regions of a photograph exhibiting the same material as an artist-chosen area; a minimal similarity-based sketch follows this entry.
Our proposed approach is robust to shading, specular highlights, and cast shadows, enabling selection in real images.
We demonstrate our model on a set of applications, including material editing, in-video selection, and retrieval of object photographs with similar materials.
arXiv Detail & Related papers (2023-05-22T17:50:48Z)
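The Materialistic summary above amounts to feature-similarity selection. Here is a minimal sketch, assuming a toy feature extractor and a hand-picked cosine-similarity threshold; the actual model and its query mechanism differ.

```python
# Minimal illustration (not Materialistic's model) of selecting all pixels
# whose features match an artist-chosen point: compute per-pixel embeddings,
# then threshold cosine similarity to the clicked pixel.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyFeatureNet(nn.Module):
    def __init__(self, dim=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.SiLU(),
            nn.Conv2d(32, dim, 3, padding=1),
        )

    def forward(self, img):
        return F.normalize(self.net(img), dim=1)  # unit-length per pixel

@torch.no_grad()
def select_same_material(img, click_yx, threshold=0.9):
    feats = ToyFeatureNet()(img)                  # (1, D, H, W)
    y, x = click_yx
    query = feats[:, :, y, x].unsqueeze(-1).unsqueeze(-1)
    sim = (feats * query).sum(dim=1)              # cosine similarity map
    return sim > threshold                        # boolean selection mask

img = torch.rand(1, 3, 64, 64)
mask = select_same_material(img, click_yx=(10, 20))
print(mask.shape, mask.dtype)  # torch.Size([1, 64, 64]) torch.bool
```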
- Photo-to-Shape Material Transfer for Diverse Structures [15.816608726698986]
We introduce a method for automatically assigning photorealistic, relightable materials to 3D shapes.
Our method combines an image translation neural network with a material assignment neural network.
We demonstrate that our method allows us to assign materials to shapes so that their appearances better resemble the input exemplars.
arXiv Detail & Related papers (2022-05-09T03:37:01Z)
- Neural Photometry-guided Visual Attribute Transfer [4.630419389180576]
We present a deep learning-based method for propagating visual material attributes to larger samples of the same or similar materials.
For training, we leverage images of the material taken under multiple illuminations and a dedicated data augmentation policy.
Our model relies on a supervised image-to-image translation framework and is agnostic to the transferred domain.
arXiv Detail & Related papers (2021-12-05T09:22:28Z)
- Controllable Person Image Synthesis with Spatially-Adaptive Warped Normalization [72.65828901909708]
Controllable person image generation aims to produce realistic human images with desirable attributes.
We introduce a novel Spatially-Adaptive Warped Normalization (SAWN), which integrates a learned flow field to warp modulation parameters; a toy sketch follows this entry.
We propose a novel self-training part replacement strategy to refine the pretrained model for the texture-transfer task.
arXiv Detail & Related papers (2021-05-31T07:07:44Z)
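The SAWN summary above is the most mechanistic item in this list: per-pixel modulation parameters are predicted from a guidance signal and warped by a learned flow field before being applied. Below is a toy PyTorch sketch of that idea, with illustrative layer sizes and an identity base grid that are assumptions rather than the paper's implementation.

```python
# Toy sketch of the Spatially-Adaptive Warped Normalization idea: SPADE-style
# per-pixel scale/shift maps predicted from a guidance image, warped by a
# flow field with grid_sample, then used to modulate normalized activations.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToySAWN(nn.Module):
    def __init__(self, feat_ch=32, guide_ch=3):
        super().__init__()
        self.norm = nn.InstanceNorm2d(feat_ch, affine=False)
        self.to_gamma = nn.Conv2d(guide_ch, feat_ch, 3, padding=1)
        self.to_beta = nn.Conv2d(guide_ch, feat_ch, 3, padding=1)

    def forward(self, feat, guide, flow):
        # flow: (B, 2, H, W) offsets; build a sampling grid in [-1, 1]
        B, _, H, W = feat.shape
        ys, xs = torch.meshgrid(
            torch.linspace(-1, 1, H), torch.linspace(-1, 1, W), indexing="ij")
        base = torch.stack((xs, ys), dim=-1).expand(B, H, W, 2)
        grid = base + flow.permute(0, 2, 3, 1)    # warp by the learned flow
        gamma = F.grid_sample(self.to_gamma(guide), grid, align_corners=True)
        beta = F.grid_sample(self.to_beta(guide), grid, align_corners=True)
        return self.norm(feat) * (1 + gamma) + beta

feat = torch.randn(1, 32, 64, 64)
guide = torch.rand(1, 3, 64, 64)
flow = torch.zeros(1, 2, 64, 64)                  # zero flow = plain SPADE
print(ToySAWN()(feat, guide, flow).shape)         # torch.Size([1, 32, 64, 64])
```

Passing a zero flow reduces the module to ordinary spatially-adaptive normalization, which makes for a convenient sanity check of the warping path.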
This list is automatically generated from the titles and abstracts of the papers on this site.