Geometry in Style: 3D Stylization via Surface Normal Deformation
- URL: http://arxiv.org/abs/2503.23241v2
- Date: Wed, 02 Apr 2025 18:56:38 GMT
- Title: Geometry in Style: 3D Stylization via Surface Normal Deformation
- Authors: Nam Anh Dinh, Itai Lang, Hyunwoo Kim, Oded Stein, Rana Hanocka
- Abstract summary: We present Geometry in Style, a new method for identity-preserving mesh stylization. Existing techniques either adhere to the original shape through overly restrictive deformations such as bump maps, or modify it with expressive deformations that may alter its identity. In contrast, we represent a deformation of a triangle mesh as a target normal vector for each vertex neighborhood.
- Score: 14.178630551656758
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present Geometry in Style, a new method for identity-preserving mesh stylization. Existing techniques either adhere to the original shape through overly restrictive deformations such as bump maps or significantly modify the input shape using expressive deformations that may introduce artifacts or alter the identity of the source shape. In contrast, we represent a deformation of a triangle mesh as a target normal vector for each vertex neighborhood. The deformations we recover from target normals are expressive enough to enable detailed stylizations yet restrictive enough to preserve the shape's identity. We achieve such deformations using our novel differentiable As-Rigid-As-Possible (dARAP) layer, a neural-network-ready adaptation of the classical ARAP algorithm which we use to solve for per-vertex rotations and deformed vertices. As a differentiable layer, dARAP is paired with a visual loss from a text-to-image model to drive deformations toward style prompts, altogether giving us Geometry in Style. Our project page is at https://threedle.github.io/geometry-in-style.
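To make the local-global structure concrete, here is a minimal differentiable ARAP sketch in PyTorch. It illustrates the idea in the abstract and is not the authors' dARAP layer: edge weights are uniform rather than cotangent, a dense solve stands in for a sparse pre-factorization, and the `lam`-weighted normal-alignment term is an assumption about how target normals enter the local step.

```python
import torch

def uniform_laplacian(n, edges):
    """Graph Laplacian over (E, 2) undirected edges, with uniform weights
    (classical ARAP uses cotangent weights; uniform is a simplification)."""
    i, j = edges[:, 0], edges[:, 1]
    w = torch.ones(len(edges))
    L = torch.zeros(n, n)
    L.index_put_((i, j), -w, accumulate=True)
    L.index_put_((j, i), -w, accumulate=True)
    deg = torch.zeros(n)
    deg.index_add_(0, i, w)
    deg.index_add_(0, j, w)
    return L + torch.diag(deg)

def local_step(V0, V, n0, n_target, edges, lam=1.0):
    """Per-vertex rotations via Procrustes: align rest edges (and, as an
    assumed stand-in for the paper's target-normal term, the rest normal n0)
    with current edges and the target normal."""
    src = torch.cat([edges[:, 0], edges[:, 1]])
    dst = torch.cat([edges[:, 1], edges[:, 0]])
    e0, e = V0[dst] - V0[src], V[dst] - V[src]
    S = V0.new_zeros((V0.shape[0], 3, 3))               # sum of e0 e^T per vertex
    S.index_add_(0, src, e0.unsqueeze(2) * e.unsqueeze(1))
    S = S + lam * n0.unsqueeze(2) * n_target.unsqueeze(1)  # assumption, see above
    U, _, Vt = torch.linalg.svd(S)
    # closest rotation V U^T, flipping the last singular vector where det < 0
    D = S.new_ones(S.shape[0], 3)
    D[:, -1] = torch.linalg.det(Vt.transpose(1, 2) @ U.transpose(1, 2))
    return Vt.transpose(1, 2) @ torch.diag_embed(D) @ U.transpose(1, 2)

def global_step(V0, R, edges, L, pin=0):
    """ARAP global step: solve L V' = b, pinning one vertex to remove the
    translational null space."""
    src = torch.cat([edges[:, 0], edges[:, 1]])
    dst = torch.cat([edges[:, 1], edges[:, 0]])
    half = 0.5 * (R[src] + R[dst]) @ (V0[src] - V0[dst]).unsqueeze(2)
    b = torch.zeros_like(V0)
    b.index_add_(0, src, half.squeeze(2))
    L, b = L.clone(), b.clone()
    L[pin] = 0.0
    L[pin, pin] = 1.0
    b[pin] = V0[pin]
    return torch.linalg.solve(L, b)

def darap(V0, n0, n_target, edges, n_iters=3):
    """A few local/global alternations; every op above is differentiable,
    so a downstream loss can backpropagate to n_target."""
    L = uniform_laplacian(V0.shape[0], edges)
    V = V0.clone()
    for _ in range(n_iters):
        R = local_step(V0, V, n0, n_target, edges)
        V = global_step(V0, R, edges, L)
    return V
```

Per the abstract, this layer sits under a visual loss from a text-to-image model, which backpropagates through the solve to move the target normals toward the style prompt.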
Related papers
- LeGO: Leveraging a Surface Deformation Network for Animatable Stylized Face Generation with One Example [5.999050119438177]
We propose a method that can produce a highly stylized 3D face model with desired topology.
Our method trains a surface deformation network with 3DMM and translates its domain to the target style using a differentiable renderer and directional CLIP losses (see the directional CLIP loss sketch after this list).
The network achieves stylization of the 3D face mesh by mimicking the style of the target.
arXiv Detail & Related papers (2024-03-22T14:20:54Z) - Geometry Transfer for Stylizing Radiance Fields [54.771563955208705]
We introduce Geometry Transfer, a novel method that leverages geometric deformation for 3D style transfer.
Our experiments show that Geometry Transfer enables a broader and more expressive range of stylizations.
arXiv Detail & Related papers (2024-02-01T18:58:44Z) - Explorable Mesh Deformation Subspaces from Unstructured Generative Models [53.23510438769862]
Deep generative models of 3D shapes often feature continuous latent spaces that can be used to explore potential variations.
We present a method to explore variations among a given set of landmark shapes by constructing a mapping from an easily-navigable 2D exploration space to a subspace of a pre-trained generative model.
arXiv Detail & Related papers (2023-10-11T18:53:57Z) - DragD3D: Realistic Mesh Editing with Rigidity Control Driven by 2D Diffusion Priors [10.355568895429588]
Direct mesh editing and deformation are key components in the geometric modeling and animation pipeline.
However, the geometric regularizers used in prior editing methods are not aware of the global context and semantics of the object.
We show that our deformations can be controlled to yield realistic shape deformations aware of the global context.
arXiv Detail & Related papers (2023-10-06T19:55:40Z) - TextDeformer: Geometry Manipulation using Text Guidance [37.02412892926677]
We present a technique for producing a deformation of an input triangle mesh guided solely by a text prompt.
Our framework relies on differentiable rendering to connect geometry to powerful pre-trained image encoders, such as CLIP and DINO.
Because directly optimizing vertex positions produces noisy artifacts, we opt to represent our mesh deformation through Jacobians, which update deformations in a global, smooth manner (see the Jacobian solve sketch after this list).
arXiv Detail & Related papers (2023-04-26T07:38:41Z) - RecRecNet: Rectangling Rectified Wide-Angle Images by Thin-Plate Spline Model and DoF-based Curriculum Learning [62.86400614141706]
We propose a new learning model, the Rectangling Rectification Network (RecRecNet).
Our model can flexibly warp the source structure to the target domain and achieves an end-to-end unsupervised deformation.
Experiments show the superiority of our solution over the compared methods on both quantitative and qualitative evaluations.
arXiv Detail & Related papers (2023-01-04T15:12:57Z) - Neural Shape Deformation Priors [14.14047635248036]
We present Neural Shape Deformation Priors, a novel method for shape manipulation.
We learn the deformation behavior based on the underlying geometric properties of a shape.
Our method can be applied to challenging deformations and generalizes well to unseen deformations.
arXiv Detail & Related papers (2022-10-11T17:03:25Z) - Pop-Out Motion: 3D-Aware Image Deformation via Learning the Shape Laplacian [58.704089101826774]
We present a 3D-aware image deformation method with minimal restrictions on shape category and deformation type.
We take a supervised learning-based approach to predict the shape Laplacian of the underlying volume of a 3D reconstruction represented as a point cloud.
In the experiments, we present our results of deforming 2D character and clothed human images.
arXiv Detail & Related papers (2022-03-29T04:57:18Z) - SNARF: Differentiable Forward Skinning for Animating Non-Rigid Neural Implicit Shapes [117.76767853430243]
We introduce SNARF, which combines the advantages of linear blend skinning for polygonal meshes with neural implicit surfaces.
We propose a forward skinning model that finds all canonical correspondences of any deformed point using iterative root finding (see the root-finding sketch after this list).
Compared to state-of-the-art neural implicit representations, our approach generalizes better to unseen poses while preserving accuracy.
arXiv Detail & Related papers (2021-04-08T17:54:59Z) - Deep Geometric Texture Synthesis [83.9404865744028]
We propose a novel framework for synthesizing geometric textures.
It learns texture statistics from local neighborhoods of a single reference 3D model.
Our network displaces mesh vertices in any direction, enabling synthesis of geometric textures.
arXiv Detail & Related papers (2020-06-30T19:36:38Z) - ShapeFlow: Learnable Deformations Among 3D Shapes [28.854946339507123]
We present a flow-based model for learning a deformation space for entire classes of 3D shapes with large intra-class variations.
ShapeFlow allows learning a multi-template deformation space that is agnostic to shape topology, yet preserves fine geometric details.
arXiv Detail & Related papers (2020-06-14T19:03:35Z)
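The LeGO entry above drives stylization with directional CLIP losses. Below is a minimal sketch of one such loss in the StyleGAN-NADA-style formulation; the exact variant LeGO uses, the "ViT-B/32" model choice, and the assumption that the inputs are CLIP-preprocessed renders from a differentiable renderer are all assumptions, not details from the paper.

```python
import torch
import clip  # OpenAI CLIP: pip install git+https://github.com/openai/CLIP.git

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

def directional_clip_loss(img_src, img_sty, text_src, text_tgt):
    """1 - cos(dI, dT): the image embedding should move in the same
    direction as the text embedding moves from source to target prompt.
    img_src / img_sty: CLIP-preprocessed (B, 3, 224, 224) renders of the
    original and stylized mesh."""
    tokens = clip.tokenize([text_src, text_tgt]).to(device)
    with torch.no_grad():                      # text direction is fixed
        e_src_t, e_tgt_t = model.encode_text(tokens)
    d_text = e_tgt_t - e_src_t
    d_text = d_text / d_text.norm()
    d_img = model.encode_image(img_sty) - model.encode_image(img_src)
    d_img = d_img / d_img.norm(dim=-1, keepdim=True)
    return (1.0 - (d_img * d_text).sum(dim=-1)).mean()
```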
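TextDeformer's entry above represents deformations through per-face Jacobians. A hedged sketch of the standard recovery step follows: vertex positions are posed as the least-squares (Poisson-style) solution to prescribed Jacobians. The area weighting, the Tikhonov term that pins the translational null space, and the non-differentiable scipy solve are sketch-level simplifications, not details from the paper.

```python
import igl                       # pip install libigl
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def vertices_from_jacobians(V, F, J, eps=1e-8):
    """V: (n, 3) rest vertices, F: (m, 3) faces, J: (m, 3, 3) target
    per-face Jacobians with J[f, a, b] = d out_a / d in_b. Returns V'
    minimizing sum_f A_f || grad_f(V') - J_f ||^2."""
    G = igl.grad(V, F)                   # (3m, n), stacked [d/dx; d/dy; d/dz]
    A = igl.doublearea(V, F) / 2.0       # (m,) triangle areas
    W = sp.diags(np.tile(A, 3))          # per-face area weights
    lhs = (G.T @ W @ G).tocsc()
    # reorder J to igl's stacking: all x-derivative rows, then y, then z
    rhs = G.T @ W @ np.concatenate([J[:, :, 0], J[:, :, 1], J[:, :, 2]], axis=0)
    # G^T W G is singular up to translation; a tiny Tikhonov pull toward the
    # rest pose pins it down (a sketch-level shortcut)
    lhs = lhs + eps * sp.identity(V.shape[0], format="csc")
    rhs = rhs + eps * V
    return spla.splu(lhs).solve(rhs)
```

In TextDeformer this solve runs differentiably inside the optimization loop so that image-space gradients reach the Jacobians; the scipy factorization here is only for brevity.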
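SNARF's entry above finds canonical correspondences of a deformed point by iterative root finding. A minimal sketch of that idea: SNARF uses Broyden's method with multiple initializations and differentiates through the converged root via the implicit function theorem, whereas this sketch uses plain Newton with autograd Jacobians and a single initialization, and `weights_fn` (a learned skinning-weight network) is assumed.

```python
import torch

def lbs(x, weights_fn, T):
    """Forward linear blend skinning of one canonical point x (3,).
    T: (B, 3, 4) bone transforms; weights_fn(x) -> (B,) blend weights."""
    w = weights_fn(x)                               # (B,)
    xh = torch.cat([x, x.new_ones(1)])              # homogeneous (4,)
    return (w.unsqueeze(1) * (T @ xh)).sum(dim=0)   # (3,)

def canonical_correspondence(x_deformed, weights_fn, T, iters=20, tol=1e-6):
    """Solve lbs(x) = x_deformed for the canonical point x by Newton steps."""
    x = x_deformed.clone()                          # init at the deformed point
    for _ in range(iters):
        f = lbs(x, weights_fn, T) - x_deformed      # skinning residual
        if f.norm() < tol:
            break
        J = torch.autograd.functional.jacobian(
            lambda p: lbs(p, weights_fn, T), x)     # (3, 3)
        x = x - torch.linalg.solve(J, f)            # Newton update
    return x
```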