GarNet++: Improving Fast and Accurate Static 3D Cloth Draping by
Curvature Loss
- URL: http://arxiv.org/abs/2007.10867v1
- Date: Mon, 20 Jul 2020 13:40:15 GMT
- Title: GarNet++: Improving Fast and Accurate Static 3D Cloth Draping by
Curvature Loss
- Authors: Erhan Gundogdu, Victor Constantin, Shaifali Parashar, Amrollah
Seifoddini, Minh Dang, Mathieu Salzmann, and Pascal Fua
- Abstract summary: We introduce a two-stream deep network model that produces a visually plausible draping of a template cloth on virtual 3D bodies.
Our network learns to mimic a Physics-Based Simulation (PBS) method while requiring two orders of magnitude less computation time.
We validate our framework on four garment types for various body shapes and poses.
- Score: 89.96698250086064
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we tackle the problem of static 3D cloth draping on virtual
human bodies. We introduce a two-stream deep network model that produces a
visually plausible draping of a template cloth on virtual 3D bodies by
extracting features from both the body and garment shapes. Our network learns
to mimic a Physics-Based Simulation (PBS) method while requiring two orders of
magnitude less computation time. To train the network, we introduce loss terms
inspired by PBS to produce plausible results and make the model
collision-aware. To increase the details of the draped garment, we introduce
two loss functions that penalize the difference between the curvature of the
predicted cloth and that of the PBS output. In particular, we study the impact
of the mean curvature normal and of a novel detail-preserving loss, both
qualitatively and quantitatively. Our new curvature loss computes the local
covariance matrices of the 3D points and compares the Rayleigh quotients of
the prediction and the PBS result. This yields more detail while matching or
outperforming the loss based on mean curvature normal vectors of the 3D
triangulated meshes. We validate our framework on four garment types across
various body shapes and poses. Finally, we achieve superior performance
compared to a recently proposed data-driven method.
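The abstract only names the ingredients of the novel detail-preserving loss: local covariance matrices of the 3D points and a comparison of Rayleigh quotients between the prediction and the PBS target. The snippet below is a minimal PyTorch sketch of one way such a loss could be written, not the authors' implementation; the fixed k-nearest-neighbour indices (`nbr_idx`), the use of per-vertex normals as the Rayleigh directions, and all function names are assumptions made for illustration. The mean-curvature-normal variant mentioned in the abstract is not sketched here.

```python
# Hypothetical sketch of a Rayleigh-quotient curvature loss (not the authors' code).
# Assumes a fixed garment topology, so the same neighbour indices apply to both
# the predicted and the PBS (ground-truth) vertex positions.
import torch

def local_covariances(verts, nbr_idx):
    """verts: (V, 3) vertex positions; nbr_idx: (V, k) neighbour indices.
    Returns one 3x3 covariance matrix per vertex."""
    nbrs = verts[nbr_idx]                               # (V, k, 3)
    centered = nbrs - nbrs.mean(dim=1, keepdim=True)    # subtract local centroid
    return centered.transpose(1, 2) @ centered / nbr_idx.shape[1]  # (V, 3, 3)

def rayleigh_quotient(cov, directions, eps=1e-8):
    """R(C, v) = v^T C v / v^T v, evaluated per vertex."""
    num = torch.einsum('vi,vij,vj->v', directions, cov, directions)
    den = (directions * directions).sum(dim=1) + eps
    return num / den

def curvature_loss(pred_verts, pbs_verts, nbr_idx, normals):
    """Penalize the difference between the Rayleigh quotients of the predicted
    drape and the PBS target, here evaluated along the per-vertex normals."""
    r_pred = rayleigh_quotient(local_covariances(pred_verts, nbr_idx), normals)
    r_pbs = rayleigh_quotient(local_covariances(pbs_verts, nbr_idx), normals)
    return (r_pred - r_pbs).abs().mean()
```

Evaluating the quotient along the normal measures how much the local point cloud spreads out of the tangent plane, which is why comparing it against PBS encourages wrinkle-level detail; other direction choices (e.g., the covariance eigenvectors) would be equally plausible readings of the abstract.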
Related papers
- Deep Loss Convexification for Learning Iterative Models [11.36644967267829] (2024-11-16)
Iterative methods such as iterative closest point (ICP) for point cloud registration often suffer from poor local optima.
We propose learning to form a convex landscape around each ground truth.
- DM3D: Distortion-Minimized Weight Pruning for Lossless 3D Object Detection [42.07920565812081] (2024-07-02)
We propose a novel post-training weight pruning scheme for 3D object detection.
It determines redundant parameters in the pretrained model that lead to minimal distortion in both locality and confidence.
This framework aims to minimize detection distortion of network output to maximally maintain detection precision.
- TokenHMR: Advancing Human Mesh Recovery with a Tokenized Pose Representation [48.08156777874614] (2024-04-25)
Current methods leverage 3D pseudo-ground-truth (p-GT) and 2D keypoints, leading to robust performance.
With such methods, we observe a paradoxical decline in 3D pose accuracy with increasing 2D accuracy.
We quantify the error induced by current camera models and show that fitting 2D keypoints and p-GT accurately causes incorrect 3D poses.
- A Repulsive Force Unit for Garment Collision Handling in Neural Networks [61.34646212450137] (2022-07-28)
We propose a novel collision-handling neural network layer called the Repulsive Force Unit (ReFU).
Based on the signed distance function (SDF) of the underlying body, ReFU predicts the per-vertex offsets that push any interpenetrating vertex to a collision-free configuration while preserving the fine geometric details (see the sketch after this list).
Our experiments show that ReFU significantly reduces the number of collisions between the body and the garment and better preserves geometric details compared to prior methods.
- Learned Vertex Descent: A New Direction for 3D Human Model Fitting [64.04726230507258] (2022-05-12)
We propose a novel optimization-based paradigm for 3D human model fitting on images and scans.
Our approach is able to capture the underlying body of clothed people with very different body shapes, achieving a significant improvement over the state of the art.
LVD is also applicable to 3D model fitting of humans and hands, for which we show a significant improvement over the SOTA with a much simpler and faster method.
- Homography Loss for Monocular 3D Object Detection [54.04870007473932] (2022-04-02)
A differentiable loss function, termed the Homography Loss, is proposed; it exploits both 2D and 3D information.
Our method outperforms the other state-of-the-art methods by a large margin on the KITTI 3D datasets.
- Cylindrical and Asymmetrical 3D Convolution Networks for LiDAR-based Perception [122.53774221136193] (2021-09-12)
State-of-the-art methods for driving-scene LiDAR-based perception often project the point clouds to 2D space and then process them via 2D convolution.
A natural remedy is to utilize the 3D voxelization and 3D convolution network.
We propose a new framework for outdoor LiDAR segmentation, in which cylindrical partitioning and asymmetrical 3D convolution networks are designed to exploit the 3D geometric pattern.
- Combining Implicit Function Learning and Parametric Models for 3D Human Reconstruction [123.62341095156611] (2020-07-22)
Implicit functions represented as deep learning approximations are powerful for reconstructing 3D surfaces.
Such features are essential in building flexible models for both computer graphics and computer vision.
We present methodology that combines detail-rich implicit functions and parametric representations.
- TailorNet: Predicting Clothing in 3D as a Function of Human Pose, Shape and Garment Style [43.99803542307155] (2020-03-10)
We present TailorNet, a neural model which predicts clothing deformation in 3D as a function of three factors: pose, shape and style.
Our hypothesis is that (even non-linear) combinations of examples smooth out high-frequency components such as fine wrinkles.
Several experiments demonstrate that TailorNet produces more realistic results than prior work and even generates temporally coherent deformations.
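The ReFU entry above describes its core mechanism: per-vertex offsets, driven by the body's signed distance function (SDF), that move interpenetrating garment vertices back to a collision-free configuration. The sketch below illustrates only that geometric idea; ReFU itself learns the offsets with a network, and the function name, the `margin` parameter, and the assumption that SDF values and unit gradients are already sampled at the garment vertices are hypothetical.

```python
# Hypothetical illustration of SDF-based collision correction (not the ReFU layer,
# which predicts the offsets with a trained network).
import torch

def push_out_of_body(garment_verts, body_sdf, body_sdf_grad, margin=1e-3):
    """garment_verts: (V, 3) garment vertex positions.
    body_sdf: (V,) signed distance of each garment vertex to the body (negative inside).
    body_sdf_grad: (V, 3) unit SDF gradients (outward direction) at those vertices.
    Vertices inside the body are moved along the gradient until they sit `margin`
    outside the surface; all other vertices are left untouched."""
    inside = body_sdf < 0
    offset = (margin - body_sdf).clamp(min=0).unsqueeze(1) * body_sdf_grad
    return torch.where(inside.unsqueeze(1), garment_verts + offset, garment_verts)
```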
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.