DragD3D: Vertex-based Editing for Realistic Mesh Deformations using 2D
Diffusion Priors
- URL: http://arxiv.org/abs/2310.04561v1
- Date: Fri, 6 Oct 2023 19:55:40 GMT
- Title: DragD3D: Vertex-based Editing for Realistic Mesh Deformations using 2D
Diffusion Priors
- Authors: Tianhao Xie, Eugene Belilovsky, Sudhir Mudur, Tiberiu Popa
- Abstract summary: DragD3D is a local mesh editing method for global context-aware realistic deformation.
We show that our deformations are realistic and aware of the global context of the objects, and provide better results than just using geometric regularizers.
- Score: 11.312715079259723
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Direct mesh editing and deformation are key components in the geometric
modeling and animation pipeline. Direct mesh editing methods are typically
framed as optimization problems combining user-specified vertex constraints
with a regularizer that determines the position of the rest of the vertices.
The choice of the regularizer is key to the realism and authenticity of the
final result. Physics- and geometry-based regularizers are not aware of the
global context and semantics of the object, and the more recent deep learning
priors are limited to a specific class of 3D object deformations. In this work,
our main contribution is a local mesh editing method called DragD3D for global
context-aware realistic deformation through direct manipulation of a few
vertices. DragD3D is not restricted to any class of objects. It achieves this
by combining the classic geometric ARAP (as-rigid-as-possible) regularizer with
2D priors obtained from a large-scale diffusion model. Specifically, we render
the objects from multiple viewpoints through a differentiable renderer and use
the recently introduced DDS (delta denoising score) loss, which scores the faithfulness of
the rendered image against one produced by a diffusion model. DragD3D combines the approximate gradients
of the DDS with gradients from the ARAP loss to modify the mesh vertices via
a neural Jacobian field, while also satisfying vertex constraints. We show that
our deformations are realistic and aware of the global context of the objects,
and provide better results than just using geometric regularizers.
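To make the objective concrete, the sketch below shows the kind of optimization loop the abstract describes: a strongly weighted term pulls the user-dragged vertices to their targets, an ARAP-style term keeps the deformation locally rigid, and a diffusion-guided term is evaluated on differentiably rendered views. This is only a minimal illustration under simplifying assumptions: the renderer, the DDS term, and the loss weights are placeholders, and plain vertex offsets optimized with Adam stand in for the paper's neural Jacobian field; it is not the authors' implementation.

```python
import torch

# --- Hypothetical stubs for components outside this sketch -------------------
# A real pipeline would use a differentiable renderer (e.g. one that rasterizes
# the mesh from several viewpoints) and a frozen, pretrained 2D diffusion model.
# These placeholders only keep the script runnable.

def render_views(vertices, faces, n_views=4):
    """Placeholder differentiable 'renderer': one fake image per viewpoint."""
    return vertices.mean(dim=0).expand(n_views, 3)

def dds_loss(images):
    """Placeholder for the delta denoising score term; in the real method the
    gradient of this term would come from a frozen diffusion model."""
    return images.pow(2).mean()

def arap_loss(vertices, rest_vertices, faces):
    """Simple edge-length-preservation surrogate for the ARAP regularizer."""
    edges = torch.cat([faces[:, [0, 1]], faces[:, [1, 2]], faces[:, [2, 0]]])
    cur = (vertices[edges[:, 0]] - vertices[edges[:, 1]]).norm(dim=1)
    rest = (rest_vertices[edges[:, 0]] - rest_vertices[edges[:, 1]]).norm(dim=1)
    return ((cur - rest) ** 2).mean()

# --- Toy mesh and user-specified drag constraints ----------------------------
rest_vertices = torch.rand(100, 3)                  # rest-pose vertex positions
faces = torch.randint(0, 100, (180, 3))             # triangle indices
handle_ids = torch.tensor([0, 1, 2])                # vertices the user drags
handle_targets = rest_vertices[handle_ids] + 0.1    # where they were dragged to

# Simplification: optimize per-vertex offsets directly; the paper instead
# optimizes a neural Jacobian field that is integrated into vertex positions.
offsets = torch.zeros_like(rest_vertices, requires_grad=True)
optimizer = torch.optim.Adam([offsets], lr=1e-2)

for step in range(200):
    vertices = rest_vertices + offsets
    images = render_views(vertices, faces)

    loss = (
        10.0 * (vertices[handle_ids] - handle_targets).pow(2).mean()  # vertex constraints
        + 1.0 * arap_loss(vertices, rest_vertices, faces)             # local rigidity
        + 0.1 * dds_loss(images)                                      # 2D diffusion prior
    )

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The point of the sketch is the structure of the objective rather than the individual terms: in the actual method the DDS gradient is supplied by a pretrained diffusion model applied to the rendered views, and the deformation is parameterized through a neural Jacobian field rather than raw offsets.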
Related papers
- DragGaussian: Enabling Drag-style Manipulation on 3D Gaussian Representation [57.406031264184584]
DragGaussian is a 3D object drag-editing framework based on 3D Gaussian Splatting.
Our contributions include the introduction of a new task, the development of DragGaussian for interactive point-based 3D editing, and comprehensive validation of its effectiveness through qualitative and quantitative experiments.
arXiv Detail & Related papers (2024-05-09T14:34:05Z)
- ShapeFusion: A 3D diffusion model for localized shape editing [37.82690898932135]
We propose an effective diffusion masking training strategy that, by design, facilitates localized manipulation of any shape region.
Our method leads to more interpretable shape manipulations than current state-of-the-art methods that rely on latent codes.
arXiv Detail & Related papers (2024-03-28T18:50:19Z)
- Bridging 3D Gaussian and Mesh for Freeview Video Rendering [57.21847030980905]
GauMesh bridges 3D Gaussians and meshes for modeling and rendering dynamic scenes.
We show that our approach adapts the appropriate type of primitives to represent the different parts of the dynamic scene.
arXiv Detail & Related papers (2024-03-18T04:01:26Z)
- Decaf: Monocular Deformation Capture for Face and Hand Interactions [77.75726740605748]
This paper introduces the first method that allows tracking human hands interacting with human faces in 3D from single monocular RGB videos.
We model hands as articulated objects inducing non-rigid face deformations during an active interaction.
Our method relies on a new hand-face motion and interaction capture dataset with realistic face deformations acquired with a markerless multi-view camera system.
arXiv Detail & Related papers (2023-09-28T17:59:51Z)
- 3Deformer: A Common Framework for Image-Guided Mesh Deformation [27.732389685912214]
Given a source 3D mesh with semantic materials, and a user-specified semantic image, 3Deformer can accurately edit the source mesh.
3Deformer produces impressive results and reaches the state of the art.
arXiv Detail & Related papers (2023-07-19T10:44:44Z)
- TextDeformer: Geometry Manipulation using Text Guidance [37.02412892926677]
We present a technique for producing a deformation of an input triangle mesh guided solely by a text prompt.
Our framework relies on differentiable rendering to connect geometry to powerful pre-trained image encoders, such as CLIP and DINO.
Directly optimizing vertex positions with such image-space guidance tends to produce noisy deformations; to overcome this limitation, we represent our mesh deformation through Jacobians, which update deformations in a global, smooth manner.
arXiv Detail & Related papers (2023-04-26T07:38:41Z)
- MoDA: Modeling Deformable 3D Objects from Casual Videos [84.29654142118018]
We propose neural dual quaternion blend skinning (NeuDBS) to achieve 3D point deformation without skin-collapsing artifacts.
To register 2D pixels across different frames, we establish correspondences between canonical feature embeddings that encode 3D points within the canonical space.
Our approach can reconstruct 3D models for humans and animals with better qualitative and quantitative performance than state-of-the-art methods.
arXiv Detail & Related papers (2023-04-17T13:49:04Z)
- Monocular 3D Object Reconstruction with GAN Inversion [122.96094885939146]
MeshInversion is a novel framework to improve the reconstruction of textured 3D meshes.
It exploits the generative prior of a 3D GAN pre-trained for 3D textured mesh synthesis.
Our framework obtains faithful 3D reconstructions with consistent geometry and texture across both observed and unobserved parts.
arXiv Detail & Related papers (2022-07-20T17:47:22Z)
- Beyond 3DMM: Learning to Capture High-fidelity 3D Face Shape [77.95154911528365]
3D Morphable Model (3DMM) fitting has widely benefited face analysis due to its strong 3D prior.
Previously reconstructed 3D faces suffer from degraded visual verisimilitude due to the loss of fine-grained geometry.
This paper proposes a complete solution to capture the personalized shape so that the reconstructed shape looks identical to the corresponding person.
arXiv Detail & Related papers (2022-04-09T03:46:18Z)
- Learning Free-Form Deformation for 3D Face Reconstruction from In-The-Wild Images [19.799466588741836]
We propose a learning-based method that reconstructs a 3D face mesh through Free-Form Deformation (FFD) for the first time.
Experiments on multiple datasets demonstrate how our method successfully estimates the 3D face geometry and facial expressions from 2D face images.
arXiv Detail & Related papers (2021-05-31T10:19:20Z)
This list is automatically generated from the titles and abstracts of the papers on this site.