3D Paintbrush: Local Stylization of 3D Shapes with Cascaded Score
Distillation
- URL: http://arxiv.org/abs/2311.09571v1
- Date: Thu, 16 Nov 2023 05:13:44 GMT
- Title: 3D Paintbrush: Local Stylization of 3D Shapes with Cascaded Score
Distillation
- Authors: Dale Decatur, Itai Lang, Kfir Aberman, Rana Hanocka
- Abstract summary: 3D Paintbrush is a technique for automatically texturing local semantic regions on meshes via text descriptions.
Our method is designed to operate directly on meshes, producing texture maps that seamlessly integrate into standard graphics pipelines.
- Score: 21.703142822709466
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this work we develop 3D Paintbrush, a technique for automatically
texturing local semantic regions on meshes via text descriptions. Our method is
designed to operate directly on meshes, producing texture maps which seamlessly
integrate into standard graphics pipelines. We opt to simultaneously produce a
localization map (to specify the edit region) and a texture map which conforms
to it. This synergistic approach improves the quality of both the localization
and the stylization. To enhance the details and resolution of the textured
area, we leverage multiple stages of a cascaded diffusion model to supervise
our local editing technique with generative priors learned from images at
different resolutions. Our technique, referred to as Cascaded Score
Distillation (CSD), simultaneously distills scores at multiple resolutions in a
cascaded fashion, enabling control over both the granularity and global
understanding of the supervision. We demonstrate the effectiveness of 3D
Paintbrush to locally texture a variety of shapes within different semantic
regions. Project page: https://threedle.github.io/3d-paintbrush
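The abstract describes Cascaded Score Distillation only at a high level. The following is a minimal, hypothetical PyTorch sketch of the core idea: accumulating score-distillation residuals from several stages of a cascaded diffusion model under per-stage weights. The stage API (`resolution`, `num_timesteps`, `add_noise`, `predict_noise`) and the weighting scheme are illustrative assumptions, not the authors' implementation.

```python
# Sketch of Cascaded Score Distillation (CSD), based only on the abstract:
# distill scores from multiple stages of a cascaded diffusion model and
# combine them into one update for the texture/localization networks.
# All module names and signatures below are hypothetical.
import torch
import torch.nn.functional as F

def csd_loss(render, stages, text_emb, weights):
    """Accumulate score-distillation gradients over cascade stages.

    render:   differentiable rendering of the textured mesh, (B, 3, H, W)
    stages:   list of diffusion stages, coarse to fine
    text_emb: text-prompt embedding shared across stages
    weights:  per-stage scalars trading off granularity vs. global guidance
    """
    grad = torch.zeros_like(render)
    for stage, w in zip(stages, weights):
        # Resize the render to the resolution this cascade stage was trained on.
        x = F.interpolate(render, size=stage.resolution,
                          mode="bilinear", align_corners=False)

        # Standard score-distillation step: noise the image, predict the noise,
        # and use (predicted - true) noise as the per-stage score residual.
        t = torch.randint(0, stage.num_timesteps, (x.shape[0],), device=x.device)
        noise = torch.randn_like(x)
        noisy = stage.add_noise(x, noise, t)            # hypothetical helper
        pred = stage.predict_noise(noisy, t, text_emb)  # hypothetical helper
        residual = pred - noise

        # Upsample the residual back to the render's resolution and accumulate.
        residual = F.interpolate(residual, size=render.shape[-2:],
                                 mode="bilinear", align_corners=False)
        grad = grad + w * residual

    # Surrogate loss: backprop sends `grad` into the networks through `render`.
    return (grad.detach() * render).sum()
```

Under these assumptions, weighting the coarse stage more heavily would emphasize global, semantically consistent guidance, while up-weighting the super-resolution stage would sharpen fine texture detail, matching the granularity control the abstract describes.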
Related papers
- DECOLLAGE: 3D Detailization by Controllable, Localized, and Learned Geometry Enhancement [38.719572669042925]
We present a 3D modeling method which enables end-users to refine or detailize 3D shapes using machine learning.
We show that our ability to localize details enables novel interactive and creative applications.
arXiv Detail & Related papers (2024-09-10T00:51:49Z) - ShapeFusion: A 3D diffusion model for localized shape editing [37.82690898932135]
We propose an effective diffusion masking training strategy that, by design, facilitates localized manipulation of any shape region.
Compared to the current state of the art, our method leads to more interpretable shape manipulations than methods that rely on latent codes.
arXiv Detail & Related papers (2024-03-28T18:50:19Z) - DragTex: Generative Point-Based Texture Editing on 3D Mesh [11.163205302136625]
We propose a generative point-based 3D mesh texture editing method called DragTex.
This method utilizes a diffusion model to blend locally inconsistent textures in the region near the deformed silhouette between different views.
We train LoRA using multi-view images instead of training each view individually, which significantly shortens the training time.
arXiv Detail & Related papers (2024-03-04T17:05:01Z) - 3DStyle-Diffusion: Pursuing Fine-grained Text-driven 3D Stylization with
2D Diffusion Models [102.75875255071246]
3D content creation via text-driven stylization has posed a fundamental challenge to the multimedia and graphics community.
We propose a new 3DStyle-Diffusion model that triggers fine-grained stylization of 3D meshes with additional controllable appearance and geometric guidance from 2D Diffusion models.
arXiv Detail & Related papers (2023-11-09T15:51:27Z) - TexFusion: Synthesizing 3D Textures with Text-Guided Image Diffusion
Models [77.85129451435704]
We present a new method to synthesize textures for given 3D geometries using large-scale text-guided image diffusion models.
Specifically, we leverage latent diffusion models, apply the denoiser to a set of 2D renders of the 3D object, and aggregate the denoising predictions on a shared latent texture map.
arXiv Detail & Related papers (2023-10-20T19:15:29Z) - Blocks2World: Controlling Realistic Scenes with Editable Primitives [5.541644538483947]
We present Blocks2World, a novel method for 3D scene rendering and editing.
Our technique begins by extracting 3D parallelepipeds from various objects in a given scene using convex decomposition.
The next stage involves training a conditioned model that learns to generate images from the 2D-rendered convex primitives.
arXiv Detail & Related papers (2023-07-07T21:38:50Z) - ARTIC3D: Learning Robust Articulated 3D Shapes from Noisy Web Image
Collections [71.46546520120162]
Estimating 3D articulated shapes like animal bodies from monocular images is inherently challenging.
We propose ARTIC3D, a self-supervised framework to reconstruct per-instance 3D shapes from a sparse image collection in-the-wild.
We produce realistic animations by fine-tuning the rendered shape and texture under rigid part transformations.
arXiv Detail & Related papers (2023-06-07T17:47:50Z) - TMO: Textured Mesh Acquisition of Objects with a Mobile Device by using
Differentiable Rendering [54.35405028643051]
We present a new pipeline for acquiring a textured mesh in the wild with a single smartphone.
Our method first introduces an RGBD-aided structure from motion, which can yield filtered depth maps.
We adopt a neural implicit surface reconstruction method, which yields a high-quality mesh.
arXiv Detail & Related papers (2023-03-27T10:07:52Z) - SKED: Sketch-guided Text-based 3D Editing [49.019881133348775]
We present SKED, a technique for editing 3D shapes represented by NeRFs.
Our technique utilizes as few as two guiding sketches from different views to alter an existing neural field.
We propose novel loss functions to generate the desired edits while preserving the density and radiance of the base instance.
arXiv Detail & Related papers (2023-03-19T18:40:44Z) - TEXTure: Text-Guided Texturing of 3D Shapes [71.13116133846084]
We present TEXTure, a novel method for text-guided generation, editing, and transfer of textures for 3D shapes.
We define a trimap partitioning process that generates seamless 3D textures.
arXiv Detail & Related papers (2023-02-03T13:18:45Z) - Semi-supervised Synthesis of High-Resolution Editable Textures for 3D
Humans [14.098628848491147]
We introduce a novel approach to generate diverse high fidelity texture maps for 3D human meshes in a semi-supervised setup.
Given a segmentation mask defining the layout of the semantic regions in the texture map, our network generates high-resolution textures in a variety of styles, which are then used for rendering.
arXiv Detail & Related papers (2021-03-31T17:58:34Z)