MARBLE: Material Recomposition and Blending in CLIP-Space
- URL: http://arxiv.org/abs/2506.05313v1
- Date: Thu, 05 Jun 2025 17:55:16 GMT
- Title: MARBLE: Material Recomposition and Blending in CLIP-Space
- Authors: Ta-Ying Cheng, Prafull Sharma, Mark Boss, Varun Jampani
- Abstract summary: We propose a method for performing material blending and recomposing fine-grained material properties by finding material embeddings in CLIP-space. We improve exemplar-based material editing by finding a block in the denoising UNet responsible for material attribution.
- Score: 34.22278569839714
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Editing materials of objects in images based on exemplar images is an active area of research in computer vision and graphics. We propose MARBLE, a method for performing material blending and recomposing fine-grained material properties by finding material embeddings in CLIP-space and using that to control pre-trained text-to-image models. We improve exemplar-based material editing by finding a block in the denoising UNet responsible for material attribution. Given two material exemplar images, we find directions in the CLIP-space for blending the materials. Further, we can achieve parametric control over fine-grained material attributes such as roughness, metallic, transparency, and glow using a shallow network to predict the direction for the desired material attribute change. We perform qualitative and quantitative analysis to demonstrate the efficacy of our proposed method. We also present the ability of our method to perform multiple edits in a single forward pass and applicability to painting. Project Page: https://marblecontrol.github.io/
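The core mechanism the abstract describes, blending two exemplar materials by interpolating their CLIP image embeddings and shifting fine-grained attributes along a direction predicted by a shallow network, can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the CLIP backbone choice, the `AttributeDirectionMLP` architecture, the file names, and the way the resulting embedding conditions the denoising UNet are all assumptions.

```python
# Minimal sketch of CLIP-space material blending and parametric attribute
# editing in the spirit of the MARBLE abstract. Hypothetical details: the
# CLIP backbone, the shallow MLP, and the exemplar file paths are assumptions.
import torch
import torch.nn as nn
from PIL import Image
import clip  # OpenAI CLIP: pip install git+https://github.com/openai/CLIP.git

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-L/14", device=device)

def embed(path: str) -> torch.Tensor:
    """CLIP image embedding for one material exemplar image."""
    image = preprocess(Image.open(path)).unsqueeze(0).to(device)
    with torch.no_grad():
        return model.encode_image(image).float()

# Blending: linearly interpolate between the two exemplar embeddings.
e_a = embed("material_a.png")  # hypothetical exemplar paths
e_b = embed("material_b.png")
alpha = 0.5  # 0 -> pure material A, 1 -> pure material B
e_blend = (1 - alpha) * e_a + alpha * e_b

# Parametric control: a shallow network predicts an edit direction in
# CLIP-space for one attribute (e.g. roughness); scaling it gives a knob.
class AttributeDirectionMLP(nn.Module):  # hypothetical architecture
    def __init__(self, dim: int = 768):  # 768 = ViT-L/14 embedding width
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim)
        )

    def forward(self, e: torch.Tensor) -> torch.Tensor:
        return self.net(e)

roughness_mlp = AttributeDirectionMLP().to(device)  # assume already trained
strength = 1.5  # user-chosen edit strength
e_edit = e_blend + strength * roughness_mlp(e_blend)
# e_edit would then condition the pre-trained text-to-image model, injected
# at the UNet block found to be responsible for material attribution.
```

Varying `alpha` sweeps between the two exemplar materials, and `strength` acts as the parametric slider for a single attribute; per the abstract, several such edits can be composed in one forward pass.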
Related papers
- MatSwap: Light-aware material transfers in images [18.37330769828654]
MatSwap is a method to transfer materials to designated surfaces in an image photorealistically.
We learn the relationship between the input material and its appearance within the scene, without the need for explicit UV mapping.
Our method seamlessly integrates a desired material into the target location in the photograph while retaining the identity of the scene.
arXiv Detail & Related papers (2025-02-11T18:59:59Z)
- Materialist: Physically Based Editing Using Single-Image Inverse Rendering [50.39048790589746]
We present a method combining a learning-based approach with progressive differentiable rendering.
Our method achieves more realistic light-material interactions, accurate shadows, and global illumination.
We also propose a method for material transparency editing that operates effectively without requiring full scene geometry.
arXiv Detail & Related papers (2025-01-07T11:52:01Z)
- MaPa: Text-driven Photorealistic Material Painting for 3D Shapes [79.13775179541311]
This paper aims to generate materials for 3D meshes from text descriptions.
Unlike existing methods that synthesize texture maps, we propose to generate segment-wise procedural material graphs.
Our framework supports high-quality rendering and provides substantial flexibility in editing.
arXiv Detail & Related papers (2024-04-26T17:54:38Z)
- MaterialSeg3D: Segmenting Dense Materials from 2D Priors for 3D Assets [63.284244910964475]
We propose a 3D asset material generation framework to infer the underlying material from 2D semantic priors.
Based on such a prior model, we devise a mechanism to parse materials in 3D space.
arXiv Detail & Related papers (2024-04-22T07:00:17Z)
- Intrinsic Image Diffusion for Indoor Single-view Material Estimation [55.276815106443976]
We present Intrinsic Image Diffusion, a generative model for appearance decomposition of indoor scenes.
Given a single input view, we sample multiple possible material explanations represented as albedo, roughness, and metallic maps.
Our method produces significantly sharper, more consistent, and more detailed materials, outperforming state-of-the-art methods by 1.5 dB on PSNR and by 45% on FID score for albedo prediction.
arXiv Detail & Related papers (2023-12-19T15:56:19Z)
- Alchemist: Parametric Control of Material Properties with Diffusion Models [51.63031820280475]
Our method capitalizes on the generative prior of text-to-image models known for photorealism.
We show the potential application of our model to material-edited NeRFs.
arXiv Detail & Related papers (2023-12-05T18:58:26Z)
- MatFuse: Controllable Material Generation with Diffusion Models [10.993516790237503]
MatFuse is a unified approach that harnesses the generative power of diffusion models for the creation and editing of 3D materials.
Our method integrates multiple sources of conditioning, including color palettes, sketches, text, and pictures, enhancing creative possibilities.
We demonstrate the effectiveness of MatFuse under multiple conditioning settings and explore the potential of material editing.
arXiv Detail & Related papers (2023-08-22T12:54:48Z)
- One-shot recognition of any material anywhere using contrastive learning with physics-based rendering [0.0]
We present MatSim: a synthetic dataset, a benchmark, and a method for computer-vision-based recognition of similarities and transitions between materials and textures.
The visual recognition of materials is essential to everything from examining food while cooking to inspecting agriculture, chemistry, and industrial products.
arXiv Detail & Related papers (2022-12-01T16:49:53Z)
- MaterialGAN: Reflectance Capture using a Generative SVBRDF Model [33.578080406338266]
We present MaterialGAN, a deep generative convolutional network based on StyleGAN2.
We show that MaterialGAN can be used as a powerful material prior in an inverse rendering framework.
We demonstrate this framework on the task of reconstructing SVBRDFs from images captured under flash illumination using a hand-held mobile phone.
arXiv Detail & Related papers (2020-09-30T21:33:00Z)
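The MaterialGAN entry above describes a common pattern: using a generative material model as a prior for inverse rendering, optimizing the generator's latent code rather than the SVBRDF maps directly so that differentiably rendered images match the captures. A minimal sketch of that optimization loop follows, under heavy simplifying assumptions: `TinyGenerator` is a stand-in for the StyleGAN2-based SVBRDF generator, and `render` is a Lambertian stand-in for a full SVBRDF renderer under flash illumination.

```python
# Sketch of inverse rendering with a generative material prior.
# Stand-ins: TinyGenerator replaces the StyleGAN2-based SVBRDF generator,
# and render() is a Lambertian approximation of a full SVBRDF renderer.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyGenerator(nn.Module):
    """Maps a latent code to a 3-channel albedo map (stand-in prior)."""
    def __init__(self, latent_dim: int = 64, size: int = 32):
        super().__init__()
        self.size = size
        self.net = nn.Linear(latent_dim, 3 * size * size)

    def forward(self, w: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.net(w)).view(3, self.size, self.size)

def render(albedo: torch.Tensor, light: float = 1.0) -> torch.Tensor:
    # Lambertian flash approximation: rendered image = albedo * intensity.
    return albedo * light

generator = TinyGenerator()               # assume pre-trained material prior
target = torch.rand(3, 32, 32)            # stand-in for a captured flash photo
w = torch.zeros(64, requires_grad=True)   # latent code to optimize
opt = torch.optim.Adam([w], lr=0.05)

for step in range(200):
    opt.zero_grad()
    loss = F.mse_loss(render(generator(w)), target)  # photometric loss
    loss.backward()
    opt.step()
# generator(w) now yields maps whose rendering best explains the capture;
# the generator keeps the solution on the manifold of plausible materials.
```

The design point is that the prior regularizes an otherwise ill-posed problem: optimizing in latent space keeps reconstructions plausible even from a handful of hand-held phone captures.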