DragD3D: Vertex-based Editing for Realistic Mesh Deformations using 2D
Diffusion Priors
- URL: http://arxiv.org/abs/2310.04561v1
- Date: Fri, 6 Oct 2023 19:55:40 GMT
- Title: DragD3D: Vertex-based Editing for Realistic Mesh Deformations using 2D
Diffusion Priors
- Authors: Tianhao Xie, Eugene Belilovsky, Sudhir Mudur, Tiberiu Popa
- Abstract summary: DragD3D is a local mesh editing method for realistic, globally context-aware deformation.
We show that our deformations are realistic and aware of the global context of the objects, and provide better results than just using geometric regularizers.
- Score: 11.312715079259723
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Direct mesh editing and deformation are key components in the geometric
modeling and animation pipeline. Direct mesh editing methods are typically
framed as optimization problems combining user-specified vertex constraints
with a regularizer that determines the position of the rest of the vertices.
The choice of the regularizer is key to the realism and authenticity of the
final result. Physics- and geometry-based regularizers are not aware of the
global context and semantics of the object, and the more recent deep-learning
priors are limited to specific classes of 3D object deformations. In this work,
our main contribution is a local mesh editing method called DragD3D, which
achieves realistic, globally context-aware deformation through direct manipulation of a few
vertices. DragD3D is not restricted to any class of objects. It achieves this
by combining the classic geometric ARAP (as-rigid-as-possible) regularizer with
2D priors obtained from a large-scale diffusion model. Specifically, we render
the objects from multiple viewpoints through a differentiable renderer and use
the recently introduced DDS loss, which scores the faithfulness of the rendered
image against one from a diffusion model. DragD3D combines the approximate
gradients of the DDS loss with gradients from the ARAP loss to modify the mesh
vertices via a neural Jacobian field, while also satisfying the vertex
constraints. We show that
our deformations are realistic and aware of the global context of the objects,
and provide better results than just using geometric regularizers.
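To make the pipeline concrete, here is a minimal sketch of the kind of optimization loop the abstract describes. It is not the authors' code: the callables `solver` (neural Jacobian field to vertex positions), `renderer` (differentiable multi-view rendering), `dds_loss`, and `arap_energy`, along with the loss weights, are hypothetical placeholders.

```python
import torch

def optimize_drag(jacobians, solver, renderer, dds_loss, arap_energy,
                  handle_idx, handle_targets,
                  w_dds=1.0, w_arap=10.0, w_handle=1e3, steps=1000):
    # Optimize a per-face Jacobian field so that (1) multi-view renders
    # score well under the 2D diffusion prior (DDS), (2) the mesh stays
    # locally rigid (ARAP), and (3) dragged vertices reach their targets.
    jacobians = jacobians.clone().requires_grad_(True)   # (F, 3, 3)
    opt = torch.optim.Adam([jacobians], lr=1e-3)
    for _ in range(steps):
        opt.zero_grad()
        verts = solver(jacobians)   # Jacobian field -> vertex positions
        views = renderer(verts)     # differentiable rendering, many views
        loss = (w_dds * dds_loss(views)          # approximate gradients
                + w_arap * arap_energy(verts)    # exact gradients
                + w_handle * ((verts[handle_idx] - handle_targets) ** 2).sum())
        loss.backward()
        opt.step()
    return solver(jacobians).detach()
```

The point the abstract emphasizes is that the approximate DDS gradients and the exact ARAP gradients are simply combined into one step on the Jacobian field, so the diffusion prior steers the deformation globally while ARAP preserves local rigidity and the penalty term enforces the dragged-vertex constraints.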
Related papers
- WIR3D: Visually-Informed and Geometry-Aware 3D Shape Abstraction [13.645442589551354]
WIR3D is a technique for abstracting 3D shapes through a sparse set of visually meaningful curves in 3D.
We optimize the parameters of Bezier curves such that they faithfully represent both the geometry and salient visual features.
We successfully apply our method for shape abstraction over a broad dataset of shapes.
arXiv Detail & Related papers (2025-05-07T21:28:05Z)
- 3D Gaussian Editing with A Single Image [19.662680524312027]
We introduce a novel single-image-driven 3D scene editing approach based on 3D Gaussian Splatting.
Our method learns to optimize the 3D Gaussians to align with an edited version of the image rendered from a user-specified viewpoint.
Experiments show the effectiveness of our method in handling geometric details, long-range, and non-rigid deformation.
arXiv Detail & Related papers (2024-08-14T13:17:42Z)
- 3D Geometry-aware Deformable Gaussian Splatting for Dynamic View Synthesis [49.352765055181436]
We propose a 3D geometry-aware deformable Gaussian Splatting method for dynamic view synthesis.
Our solution achieves 3D geometry-aware deformation modeling, which enables improved dynamic view synthesis and 3D dynamic reconstruction.
arXiv Detail & Related papers (2024-04-09T12:47:30Z)
- ShapeFusion: A 3D diffusion model for localized shape editing [37.82690898932135]
We propose an effective diffusion masking training strategy that, by design, facilitates localized manipulation of any shape region.
Compared to the current state of the art, our method leads to more interpretable shape manipulations than methods that rely on a latent code state.
arXiv Detail & Related papers (2024-03-28T18:50:19Z)
- Robust 3D Tracking with Quality-Aware Shape Completion [67.9748164949519]
For robust 3D tracking, we propose a synthetic target representation composed of dense, complete point clouds that depict the target shape precisely via shape completion.
Specifically, we design a voxelized 3D tracking framework with shape completion, in which a quality-aware shape completion mechanism alleviates the adverse effects of noisy historical predictions.
arXiv Detail & Related papers (2023-12-17T04:50:24Z)
- 3Deformer: A Common Framework for Image-Guided Mesh Deformation [27.732389685912214]
Given a source 3D mesh with semantic materials and a user-specified semantic image, 3Deformer can accurately edit the source mesh.
3Deformer produces impressive results and reaches the state-of-the-art level.
arXiv Detail & Related papers (2023-07-19T10:44:44Z)
- TextDeformer: Geometry Manipulation using Text Guidance [37.02412892926677]
We present a technique for producing a deformation of an input triangle mesh guided solely by a text prompt.
Our framework relies on differentiable rendering to connect geometry to powerful pre-trained image encoders, such as CLIP and DINO.
To avoid the noisy deformations that direct optimization of vertex positions produces, we opt to represent our mesh deformation through Jacobians, which define deformations in a global, smooth manner (a minimal sketch of this representation follows this list).
arXiv Detail & Related papers (2023-04-26T07:38:41Z)
- Neural Shape Deformation Priors [14.14047635248036]
We present Neural Shape Deformation Priors, a novel method for shape manipulation.
We learn the deformation behavior based on the underlying geometric properties of a shape.
Our method can be applied to challenging deformations and generalizes well to unseen deformations.
arXiv Detail & Related papers (2022-10-11T17:03:25Z)
- Pop-Out Motion: 3D-Aware Image Deformation via Learning the Shape Laplacian [58.704089101826774]
We present a 3D-aware image deformation method with minimal restrictions on shape category and deformation type.
We take a supervised learning-based approach to predict the shape Laplacian of the underlying volume of a 3D reconstruction represented as a point cloud.
In the experiments, we present our results of deforming 2D character and clothed human images.
arXiv Detail & Related papers (2022-03-29T04:57:18Z)
- Geometry-Contrastive Transformer for Generalized 3D Pose Transfer [95.56457218144983]
The intuition of this work is to use the powerful self-attention mechanism to perceive geometric inconsistencies between the given meshes.
We propose a novel geometry-contrastive Transformer with an efficient, 3D-structured ability to perceive global geometric inconsistencies.
We present a latent isometric regularization module together with a novel semi-synthesized dataset for the cross-dataset 3D pose transfer task.
arXiv Detail & Related papers (2021-12-14T13:14:24Z)
- SNARF: Differentiable Forward Skinning for Animating Non-Rigid Neural Implicit Shapes [117.76767853430243]
We introduce SNARF, which combines the advantages of linear blend skinning for polygonal meshes with neural implicit surfaces.
We propose a forward skinning model that finds all canonical correspondences of any deformed point using iterative root finding.
Compared to state-of-the-art neural implicit representations, our approach generalizes better to unseen poses while preserving accuracy.
arXiv Detail & Related papers (2021-04-08T17:54:59Z)
- Dense Non-Rigid Structure from Motion: A Manifold Viewpoint [162.88686222340962]
The Non-Rigid Structure-from-Motion (NRSfM) problem aims to recover the 3D geometry of a deforming object from its 2D feature correspondences across multiple frames.
We show that our approach significantly improves accuracy, scalability, and robustness against noise.
arXiv Detail & Related papers (2020-06-15T09:15:54Z)
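Two entries above, DragD3D itself and TextDeformer, parameterize deformation by per-face Jacobians rather than raw vertex positions. The sketch below illustrates that representation under stated assumptions: given target Jacobians, deformed vertices are recovered by least-squares fitting of each triangle's edge vectors. The papers use a direct Poisson solve; plain gradient descent is substituted here for brevity, and all names are illustrative.

```python
import torch

def recover_vertices(rest_verts, faces, target_jac, pinned=0, iters=500):
    # rest_verts: (V, 3) float, faces: (F, 3) long, target_jac: (F, 3, 3).
    v0, v1, v2 = faces[:, 0], faces[:, 1], faces[:, 2]
    # Rest-pose edge vectors of every triangle.
    e1 = rest_verts[v1] - rest_verts[v0]                  # (F, 3)
    e2 = rest_verts[v2] - rest_verts[v0]
    # Edge vectors each target Jacobian prescribes for the deformed triangle.
    t1 = torch.einsum('fij,fj->fi', target_jac, e1)
    t2 = torch.einsum('fij,fj->fi', target_jac, e2)
    verts = rest_verts.clone().requires_grad_(True)
    opt = torch.optim.Adam([verts], lr=1e-2)
    for _ in range(iters):
        opt.zero_grad()
        d1 = verts[v1] - verts[v0]
        d2 = verts[v2] - verts[v0]
        loss = ((d1 - t1) ** 2).sum() + ((d2 - t2) ** 2).sum()
        # Pin one vertex to remove the global-translation null space.
        loss = loss + 1e3 * ((verts[pinned] - rest_verts[pinned]) ** 2).sum()
        loss.backward()
        opt.step()
    return verts.detach()
```

Because every vertex couples to its neighbors through shared triangle edges, edits to the Jacobians change the surface globally and smoothly, which is why both papers favor this parameterization over moving vertices directly.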