Deep Deformation Detail Synthesis for Thin Shell Models
- URL: http://arxiv.org/abs/2102.11541v1
- Date: Tue, 23 Feb 2021 08:09:11 GMT
- Title: Deep Deformation Detail Synthesis for Thin Shell Models
- Authors: Lan Chen, Lin Gao, Jie Yang, Shibiao Xu, Juntao Ye, Xiaopeng Zhang,
Yu-Kun Lai
- Abstract summary: In physics-based cloth animation, rich folds and detailed wrinkles are achieved at the cost of expensive computational resources and laborious parameter tuning.
We develop a temporally and spatially as-consistent-as-possible deformation representation (named TS-ACAP) and a DeformTransformer network to learn the mapping from low-resolution meshes to detailed ones.
Our method produces reliable and realistic animations on various datasets at high frame rates, 10 to 35 times faster than physics-based simulation, with superior detail synthesis ability compared to existing methods.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In physics-based cloth animation, rich folds and detailed wrinkles are
achieved at the cost of expensive computational resources and laborious
parameter tuning. Data-driven techniques reduce the computation significantly
by leveraging a database. One type of method relies on human poses to
synthesize fitted garments, and thus cannot be applied to general cloth.
Another type adds details to coarse meshes without such restrictions. However,
existing works usually utilize coordinate-based representations, which cannot
cope with large-scale deformations and require dense vertex correspondences
between coarse and fine meshes. Moreover, as such methods only add details,
they require the coarse meshes to be close to the fine ones, which can be
either impossible to guarantee or achievable only under unrealistic
constraints when generating the fine meshes. To address these challenges, we
develop a temporally and spatially
as-consistent-as-possible deformation representation (named TS-ACAP) and a
DeformTransformer network to learn the mapping from low-resolution meshes to
detailed ones. This TS-ACAP representation is designed to ensure both spatial
and temporal consistency for sequential large-scale deformations from cloth
animations. With this representation, our DeformTransformer network first
utilizes two mesh-based encoders to extract the coarse and fine features,
respectively. To transduce the coarse features into fine ones, we leverage a
Transformer network with frame-level attention mechanisms to
ensure temporal coherence of the prediction. Experimental results show that our
method produces reliable and realistic animations on various datasets at high
frame rates, 10 to 35 times faster than physics-based simulation, with
superior detail synthesis ability compared to existing methods.
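
For a concrete picture of the coarse-to-fine transduction described above, the following is a minimal, hypothetical PyTorch sketch: per-frame coarse deformation features (stand-ins for TS-ACAP vectors) are embedded by a small encoder, a Transformer attends across frames (frame-level attention) for temporal coherence, and a linear head predicts the fine features. All module names, dimensions, and the MLP encoder are illustrative assumptions, not the authors' implementation; the paper uses mesh-based encoders, and a second encoder for fine features would be involved during training.

```python
# Minimal sketch (assumed shapes and names), not the authors' implementation.
import torch
import torch.nn as nn

class DeformTransformerSketch(nn.Module):
    """Map per-frame coarse deformation features to fine ones.

    Inputs are (batch, frames, feat) tensors standing in for per-frame
    TS-ACAP feature vectors.
    """
    def __init__(self, coarse_dim=128, fine_dim=512, d_model=256):
        super().__init__()
        # Encoder for coarse features (the paper uses mesh-based
        # encoders; a plain MLP stands in here for brevity).
        self.coarse_enc = nn.Sequential(
            nn.Linear(coarse_dim, d_model), nn.ReLU(),
            nn.Linear(d_model, d_model))
        # Frame-level attention: each frame is one token, so attention
        # mixes information across time for temporal coherence.
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4,
                                           batch_first=True)
        self.temporal = nn.TransformerEncoder(layer, num_layers=2)
        self.decode_fine = nn.Linear(d_model, fine_dim)

    def forward(self, coarse_seq):            # (B, T, coarse_dim)
        tokens = self.coarse_enc(coarse_seq)  # (B, T, d_model)
        tokens = self.temporal(tokens)        # attention across frames
        return self.decode_fine(tokens)       # (B, T, fine_dim)

pred = DeformTransformerSketch()(torch.randn(2, 16, 128))
print(pred.shape)  # torch.Size([2, 16, 512])
```

Treating each frame as one token is what makes the attention frame-level: the network can borrow detail cues from neighboring frames instead of predicting each frame independently.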
Related papers
- Ultron: Enabling Temporal Geometry Compression of 3D Mesh Sequences using Temporal Correspondence and Mesh Deformation [2.0914328542137346]
Existing 3D model compression methods primarily focus on static models and do not consider inter-frame information.
This paper proposes a method to compress mesh sequences with arbitrary topology using temporal correspondence and mesh deformation.
arXiv Detail & Related papers (2024-09-08T16:34:19Z)
- Shape Conditioned Human Motion Generation with Diffusion Model [0.0]
We propose a Shape-conditioned Motion Diffusion model (SMD), which enables the generation of motion sequences directly in mesh format.
We also propose a Spectral-Temporal Autoencoder (STAE) to leverage cross-temporal dependencies within the spectral domain.
arXiv Detail & Related papers (2024-05-10T19:06:41Z)
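
The "spectral domain" in the SMD entry above can be illustrated generically: project per-frame vertex positions onto the low-frequency eigenvectors of the mesh graph Laplacian, giving a compact, smooth basis for motion. This is a standard spectral mesh-processing sketch in NumPy, not the STAE code; the edge-list Laplacian and truncation size k are assumptions.

```python
# Generic spectral mesh encoding (illustration only, not the STAE code):
# project per-frame vertex positions onto the k smoothest eigenvectors
# of the mesh graph Laplacian.
import numpy as np

def graph_laplacian(edges, n_verts):
    # Unnormalized Laplacian L = D - A, built from an edge list.
    L = np.zeros((n_verts, n_verts))
    for i, j in edges:
        L[i, i] += 1.0
        L[j, j] += 1.0
        L[i, j] -= 1.0
        L[j, i] -= 1.0
    return L

def spectral_encode(anim, edges, k=16):
    """anim: (frames, n_verts, 3) trajectories -> coefficients (frames, k, 3)."""
    n_verts = anim.shape[1]
    _, evecs = np.linalg.eigh(graph_laplacian(edges, n_verts))
    basis = evecs[:, :k]                             # low-frequency basis
    coeffs = np.einsum('vk,fvc->fkc', basis, anim)   # analysis
    recon = np.einsum('vk,fkc->fvc', basis, coeffs)  # smooth synthesis
    return coeffs, recon
```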
- Binary Opacity Grids: Capturing Fine Geometric Detail for Mesh-Based View Synthesis [70.40950409274312]
We modify density fields to encourage them to converge towards surfaces, without compromising their ability to reconstruct thin structures.
We also develop a fusion-based meshing strategy followed by mesh simplification and appearance model fitting.
The compact meshes produced by our model can be rendered in real-time on mobile devices.
arXiv Detail & Related papers (2024-02-19T18:59:41Z) - DeepFracture: A Generative Approach for Predicting Brittle Fractures [2.7669937245634757]
This paper introduces a novel learning-based approach for seamlessly merging realistic brittle fracture animations with rigid-body simulations.
Our method utilizes BEM brittle fracture simulations to create fractured patterns and collision conditions for a given shape.
Our experimental results demonstrate that our approach can generate significantly more detailed brittle fractures compared to existing techniques.
arXiv Detail & Related papers (2023-10-20T08:15:13Z) - Dynamic Frame Interpolation in Wavelet Domain [57.25341639095404]
Video frame interpolation is an important low-level computer vision task that can increase the frame rate for a more fluent visual experience.
Existing methods have achieved great success by employing advanced motion models and synthesis networks.
WaveletVFI can reduce computation by up to 40% while maintaining similar accuracy, making it more efficient than other state-of-the-art methods.
arXiv Detail & Related papers (2023-09-07T06:41:15Z) - Compressible-composable NeRF via Rank-residual Decomposition [21.92736190195887]
Neural Radiance Field (NeRF) has emerged as a compelling method to represent 3D objects and scenes for photo-realistic rendering.
We present a neural representation that enables efficient and convenient manipulation of models.
Our method is able to achieve comparable rendering quality to state-of-the-art methods, while enabling extra capability of compression and composition.
arXiv Detail & Related papers (2022-05-30T06:18:59Z) - Predicting Loose-Fitting Garment Deformations Using Bone-Driven Motion
- Predicting Loose-Fitting Garment Deformations Using Bone-Driven Motion Networks [63.596602299263935]
We present a learning algorithm that uses bone-driven motion networks to predict the deformation of loose-fitting garment meshes at interactive rates.
We show that our method outperforms state-of-the-art methods in terms of prediction accuracy of mesh deformations by about 20% in RMSE and 10% in Hausdorff distance and STED.
arXiv Detail & Related papers (2022-05-03T07:54:39Z)
- Learning Skeletal Articulations with Neural Blend Shapes [57.879030623284216]
We develop a neural technique for articulating 3D characters using enveloping with a pre-defined skeletal structure.
Our framework learns to rig and skin characters with the same articulation structure.
We propose neural blend shapes which improve the deformation quality in the joint regions.
arXiv Detail & Related papers (2021-05-06T05:58:13Z)
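
The blend-shape mechanism in the entry above can be sketched as pose-conditioned corrective offsets: a small network maps pose parameters to coefficients over a learned basis of per-vertex offsets, which are added to the skinned mesh to improve joint regions. The shapes below (including the SMPL-like vertex count) are assumptions, not the paper's network.

```python
# Minimal blend-shape sketch (assumed shapes, not the paper's model).
import torch
import torch.nn as nn

class BlendShapeSketch(nn.Module):
    def __init__(self, n_joints=24, n_verts=6890, n_shapes=10):
        super().__init__()
        # Learned basis of corrective offsets: (n_shapes, n_verts, 3).
        self.basis = nn.Parameter(torch.zeros(n_shapes, n_verts, 3))
        # Small MLP from pose parameters to blend-shape coefficients.
        self.coef = nn.Sequential(nn.Linear(n_joints * 3, 64), nn.ReLU(),
                                  nn.Linear(64, n_shapes))

    def forward(self, skinned_verts, pose):
        # skinned_verts: (B, n_verts, 3); pose: (B, n_joints*3) axis-angles.
        w = self.coef(pose)                               # (B, n_shapes)
        offsets = torch.einsum('bk,kvc->bvc', w, self.basis)
        return skinned_verts + offsets

verts = torch.randn(2, 6890, 3)
pose = torch.randn(2, 24 * 3)
print(BlendShapeSketch()(verts, pose).shape)  # torch.Size([2, 6890, 3])
```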
- Perceptron Synthesis Network: Rethinking the Action Scale Variances in Videos [48.57686258913474]
Video action recognition has been partially addressed by CNNs stacking fixed-size 3D kernels.
We propose to learn the optimal-scale kernels from the data.
An action perceptron synthesizer is proposed to generate the kernels from a bag of fixed-size kernels.
arXiv Detail & Related papers (2020-07-22T14:22:29Z)
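
The kernel-synthesis idea in the last entry can be illustrated as a learned mixture over a bag of fixed 3D kernels: a softmax-weighted combination forms the effective kernel applied to the video. This toy sketch fixes a single kernel size for simplicity and is not the paper's synthesizer, which generates kernels of varying effective scale.

```python
# Toy kernel-synthesis sketch (not the paper's synthesizer): combine a
# bag of frozen 3D kernels with learned mixing weights into one kernel.
import torch
import torch.nn as nn
import torch.nn.functional as F

class KernelSynthesisSketch(nn.Module):
    def __init__(self, in_ch=3, out_ch=8, bag_size=4, k=3):
        super().__init__()
        # Bag of fixed (frozen) kernels: (bag, out, in, kT, kH, kW).
        self.register_buffer('bag',
                             torch.randn(bag_size, out_ch, in_ch, k, k, k))
        self.mix = nn.Parameter(torch.zeros(bag_size))  # learned mixing logits

    def forward(self, video):  # video: (B, in_ch, T, H, W)
        w = torch.softmax(self.mix, dim=0)
        # Weighted sum over the bag yields the synthesized kernel.
        kernel = torch.einsum('b,boidhw->oidhw', w, self.bag)
        return F.conv3d(video, kernel, padding=1)

y = KernelSynthesisSketch()(torch.randn(1, 3, 8, 32, 32))
print(y.shape)  # torch.Size([1, 8, 8, 32, 32])
```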