How Will It Drape Like? Capturing Fabric Mechanics from Depth Images
- URL: http://arxiv.org/abs/2304.06704v1
- Date: Thu, 13 Apr 2023 17:54:08 GMT
- Title: How Will It Drape Like? Capturing Fabric Mechanics from Depth Images
- Authors: Carlos Rodriguez-Pardo, Melania Prieto-Martin, Dan Casas, Elena Garces
- Abstract summary: We propose a method to estimate the mechanical parameters of fabrics using a casual capture setup with a depth camera.
Our approach enables the creation of mechanically correct digital representations of real-world textile materials.
- Score: 7.859729554664895
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We propose a method to estimate the mechanical parameters of fabrics
using a casual capture setup with a depth camera. Our approach enables the
creation of mechanically correct digital representations of real-world textile
materials, which is a fundamental step for many interactive design and
engineering applications. As opposed to existing capture methods, which
typically require expensive setups, video sequences, or manual intervention,
our solution can capture at scale, is agnostic to the optical appearance of the
textile, and facilitates fabric arrangement by non-expert operators. To this
end, we propose a sim-to-real strategy to train a learning-based framework that
takes one or multiple images as input and outputs a full set of mechanical
parameters. Thanks to carefully designed data augmentation and transfer
learning protocols, our solution generalizes to real images despite being
trained only on synthetic data, successfully closing the sim-to-real loop. Key
to our work is the demonstration that evaluating regression accuracy by
similarity in parameter space leads to inaccurate distances that do not match
human perception. To overcome this, we propose a novel metric for fabric drape
similarity that operates in the image domain instead of the parameter space,
allowing us to evaluate our estimations within the context of a similarity
ranking. We show that our metric correlates with human judgments about the
perception of drape similarity, and that our model predictions produce
perceptually accurate results compared to the ground-truth parameters.
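The paper releases no code here; as a rough sketch of the image-domain idea, the snippet below compares depth renderings of two drapes with a multi-scale L1 distance and ranks candidates by it. The inputs and the specific distance are illustrative assumptions, not the authors' metric.

```python
# Hypothetical sketch: compare two fabric drapes in image space instead of
# parameter space. depth_a / depth_b stand for depth images of draped
# fabrics; the multi-scale L1 distance is an illustrative stand-in for the
# paper's perceptual drape-similarity metric.
import numpy as np

def downsample(img: np.ndarray) -> np.ndarray:
    """Halve resolution by 2x2 average pooling."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    img = img[:h, :w]
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def drape_distance(depth_a: np.ndarray, depth_b: np.ndarray, levels: int = 4) -> float:
    """Multi-scale L1 distance between two depth renderings of a drape."""
    dist = 0.0
    for _ in range(levels):
        dist += np.abs(depth_a - depth_b).mean()
        depth_a, depth_b = downsample(depth_a), downsample(depth_b)
    return dist / levels

# Usage: rank candidate parameter sets by how similar their simulated
# drapes *look*, not by how close their parameters are numerically.
rng = np.random.default_rng(0)
real = rng.random((256, 256))
candidates = [rng.random((256, 256)) for _ in range(3)]
ranked = sorted(range(len(candidates)), key=lambda i: drape_distance(real, candidates[i]))
print("similarity rank:", ranked)
```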
Related papers
- Single-image camera calibration with model-free distortion correction [0.0]
This paper proposes a method for estimating the complete set of calibration parameters from a single image of a planar speckle pattern covering the entire sensor.
The correspondence between image points and physical points on the calibration target is obtained using Digital Image Correlation.
At the end of the procedure, a dense and uniform model-free distortion map is obtained over the entire image.
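A minimal sketch of the model-free idea, assuming sparse DIC correspondences are already available: interpolate per-point displacements into a dense distortion map with SciPy. The synthetic points and displacements are placeholders, not the paper's data.

```python
# Illustrative sketch only: build a dense, model-free distortion map from
# sparse image-point / target-point correspondences, as DIC-based
# calibration might produce.
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(1)
observed = rng.uniform(0, 512, size=(500, 2))     # detected speckle points (px)
displacement = 0.002 * (observed - 256.0)         # toy radial distortion offsets

# Interpolate the sparse displacements onto a dense per-pixel grid.
gy, gx = np.mgrid[0:512, 0:512]
dense_dx = griddata(observed, displacement[:, 0], (gx, gy), method="linear")
dense_dy = griddata(observed, displacement[:, 1], (gx, gy), method="linear")

# (dense_dx, dense_dy) now form a model-free distortion map: to undistort,
# sample the image at (x + dense_dx, y + dense_dy). NaNs mark pixels
# outside the convex hull of the correspondences.
print(np.nanmax(np.hypot(dense_dx, dense_dy)))
```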
arXiv Detail & Related papers (2024-03-02T16:51:35Z)
- Learning Robust Multi-Scale Representation for Neural Radiance Fields from Unposed Images [65.41966114373373]
We present an improved solution to the neural image-based rendering problem in computer vision.
The proposed approach can synthesize a realistic image of the scene from a novel viewpoint at test time.
arXiv Detail & Related papers (2023-11-08T08:18:23Z)
- CarPatch: A Synthetic Benchmark for Radiance Field Evaluation on Vehicle Components [77.33782775860028]
We introduce CarPatch, a novel synthetic benchmark of vehicles.
In addition to a set of images annotated with their intrinsic and extrinsic camera parameters, the corresponding depth maps and semantic segmentation masks have been generated for each view.
Global and part-based metrics have been defined and used to evaluate, compare, and better characterize some state-of-the-art techniques.
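A hedged sketch of what global versus part-based evaluation can look like in practice: PSNR computed over the whole rendering and again inside each segmentation mask. Images, masks, and part ids below are synthetic placeholders, not CarPatch data or its exact metrics.

```python
# Toy illustration: evaluate a rendering globally and per semantic part.
import numpy as np

def psnr(pred: np.ndarray, gt: np.ndarray) -> float:
    """PSNR for values in [0, 1]."""
    mse = np.mean((pred - gt) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(1.0 / mse)

rng = np.random.default_rng(2)
gt = rng.random((128, 128, 3))
pred = np.clip(gt + 0.05 * rng.standard_normal(gt.shape), 0, 1)
seg = rng.integers(0, 4, size=(128, 128))   # 4 hypothetical vehicle parts

print("global PSNR:", psnr(pred, gt))
for part in np.unique(seg):
    mask = seg == part                       # boolean mask for this part
    print(f"part {part} PSNR:", psnr(pred[mask], gt[mask]))
```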
arXiv Detail & Related papers (2023-07-24T11:59:07Z)
- Neural inverse procedural modeling of knitting yarns from images [6.114281140793954]
We show that the complexity of yarn structures can be better captured by ensembles of networks that each focus on an individual characteristic.
We demonstrate that combining a carefully designed parametric, procedural yarn model with such network ensembles and suitable loss functions allows robust parameter inference.
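A minimal sketch of the ensemble idea, assuming a PyTorch setup: one small network per yarn characteristic, each regressing its own parameter from an image. The architecture and the parameter names (ply_twist, fiber_count, hairiness) are illustrative assumptions, not the paper's networks.

```python
# Hypothetical ensemble: one tiny CNN per yarn characteristic.
import torch
import torch.nn as nn

def make_head() -> nn.Module:
    """Small CNN regressing a single scalar yarn parameter."""
    return nn.Sequential(
        nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(32, 1),
    )

ensemble = nn.ModuleDict({
    "ply_twist": make_head(),     # assumed parameter names, for illustration
    "fiber_count": make_head(),
    "hairiness": make_head(),
})

crop = torch.rand(1, 3, 64, 64)   # placeholder yarn image crop
params = {name: head(crop).item() for name, head in ensemble.items()}
print(params)
```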
arXiv Detail & Related papers (2023-03-01T00:56:39Z)
- Leveraging Deepfakes to Close the Domain Gap between Real and Synthetic Images in Facial Capture Pipelines [8.366597450893456]
We propose an end-to-end pipeline for building and tracking 3D facial models from personalized in-the-wild video data.
We present a method for automatic data curation and retrieval based on a hierarchical clustering framework typical of collision algorithms in traditional computer graphics pipelines.
We outline how we train a motion capture regressor, leveraging the aforementioned techniques to avoid the need for real-world ground truth data.
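A rough sketch of clustering-based data curation, assuming per-frame face descriptors are available: build a hierarchy with SciPy and keep one representative frame per cluster. The embeddings, linkage settings, and threshold are assumptions, not the paper's pipeline.

```python
# Illustrative curation: cluster frame embeddings, keep one frame each.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(3)
embeddings = rng.standard_normal((200, 128))          # stand-in face descriptors

tree = linkage(embeddings, method="average", metric="cosine")
labels = fcluster(tree, t=0.7, criterion="distance")  # flat clusters by distance

# Keep the frame closest to each cluster mean as its representative.
curated = []
for c in np.unique(labels):
    idx = np.flatnonzero(labels == c)
    center = embeddings[idx].mean(axis=0)
    curated.append(idx[np.argmin(np.linalg.norm(embeddings[idx] - center, axis=1))])
print(f"kept {len(curated)} of {len(embeddings)} frames")
```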
arXiv Detail & Related papers (2022-04-22T15:09:49Z)
- Towards Optimal Strategies for Training Self-Driving Perception Models in Simulation [98.51313127382937]
We focus on the use of labels in the synthetic domain alone.
Our approach introduces both a way to learn neural-invariant representations and a theoretically inspired view on how to sample the data from the simulator.
We showcase our approach on the bird's-eye-view vehicle segmentation task with multi-sensor data.
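The summary does not specify how the invariant representations are learned; one common strategy, shown purely as an assumed illustration, is to penalize the gap between feature statistics of synthetic and real batches.

```python
# Assumed illustration of a feature-alignment objective (not the paper's
# actual mechanism): match first and second moments of encoder features
# from simulated and real frames.
import torch

def feature_alignment_loss(feat_sim: torch.Tensor, feat_real: torch.Tensor) -> torch.Tensor:
    """Match mean and variance of two (N, D) feature batches."""
    mean_gap = (feat_sim.mean(0) - feat_real.mean(0)).pow(2).sum()
    var_gap = (feat_sim.var(0) - feat_real.var(0)).pow(2).sum()
    return mean_gap + var_gap

feat_sim = torch.randn(32, 256)   # encoder features from simulator frames
feat_real = torch.randn(32, 256)  # encoder features from unlabeled real frames
print(feature_alignment_loss(feat_sim, feat_real))
```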
arXiv Detail & Related papers (2021-11-15T18:37:43Z)
- Camera Calibration through Camera Projection Loss [4.36572039512405]
We propose a novel method to predict intrinsic (focal length and principal point offset) parameters using an image pair.
Unlike existing methods, we propose a new representation that incorporates the camera model equations as a neural network in a multi-task learning framework.
Our proposed approach achieves better performance with respect to both deep learning-based and traditional methods on 7 out of 10 parameters evaluated.
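A minimal sketch of the projection-loss idea: project known 3D points with the predicted intrinsics and penalize the reprojection error in pixel space rather than comparing parameters directly. The point set and ground-truth intrinsics are placeholders.

```python
# Illustrative pixel-space loss through a pinhole projection.
import torch

def project(points: torch.Tensor, fx, fy, cx, cy) -> torch.Tensor:
    """Pinhole projection of (N, 3) camera-space points to (N, 2) pixels."""
    x, y, z = points.unbind(dim=1)
    return torch.stack((fx * x / z + cx, fy * y / z + cy), dim=1)

points = torch.rand(100, 3) + torch.tensor([0.0, 0.0, 2.0])  # in front of camera
gt_px = project(points, 500.0, 500.0, 320.0, 240.0)          # placeholder intrinsics

pred = torch.tensor([480.0, 510.0, 300.0, 250.0], requires_grad=True)
pred_px = project(points, *pred)
loss = torch.nn.functional.mse_loss(pred_px, gt_px)  # pixel space, not parameter space
loss.backward()
print(loss.item(), pred.grad)  # gradients flow back to the intrinsics
```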
arXiv Detail & Related papers (2021-10-07T14:03:10Z)
- A parameter refinement method for Ptychography based on Deep Learning concepts [55.41644538483948]
Coarse parametrisation of the propagation distance, position errors, and partial coherence frequently threatens experiment viability.
A modern deep learning framework is used to autonomously correct these setup inaccuracies, improving the quality of the ptychography reconstruction.
We tested our system on both synthetic datasets and real data acquired at the TwinMic beamline of the Elettra synchrotron facility.
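The summary leaves the refinement mechanism unspecified; as a generic, assumed illustration, the toy below refines a scalar setup parameter (a stand-in for, e.g., propagation distance) by gradient descent through a differentiable forward model.

```python
# Toy parameter refinement by autodiff; the quadratic "forward model" is
# purely illustrative, not the paper's physics.
import torch

measured = torch.tensor(2.0)                      # toy measurement
distance = torch.tensor(0.5, requires_grad=True)  # coarse initial guess

opt = torch.optim.Adam([distance], lr=0.1)
for _ in range(200):
    opt.zero_grad()
    simulated = distance ** 2                     # placeholder forward model
    loss = (simulated - measured) ** 2
    loss.backward()
    opt.step()
print(distance.item())                            # converges toward sqrt(2)
```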
arXiv Detail & Related papers (2021-05-18T10:15:17Z)
- Unsupervised Metric Relocalization Using Transform Consistency Loss [66.19479868638925]
Training networks to perform metric relocalization traditionally requires accurate image correspondences.
We propose a self-supervised solution, which exploits a key insight: localizing a query image within a map should yield the same absolute pose, regardless of the reference image used for registration.
We evaluate our framework on synthetic and real-world data, showing our approach outperforms other supervised methods when a limited amount of ground-truth information is available.
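A small sketch of the consistency insight: composing the query-to-reference transform with each reference's absolute pose must give the same absolute query pose for every reference. The random rigid transforms below stand in for network predictions.

```python
# Transform-consistency check with synthetic 4x4 rigid transforms.
import numpy as np

def rand_pose(rng) -> np.ndarray:
    """Random rigid transform (proper rotation via QR, random translation)."""
    q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
    t = np.eye(4)
    t[:3, :3] = q * np.sign(np.linalg.det(q))
    t[:3, 3] = rng.standard_normal(3)
    return t

rng = np.random.default_rng(4)
query = rand_pose(rng)                         # unknown true query pose
ref_a, ref_b = rand_pose(rng), rand_pose(rng)  # two reference/map poses

rel_a = np.linalg.inv(ref_a) @ query           # query relative to reference A
rel_b = np.linalg.inv(ref_b) @ query           # query relative to reference B

# Self-supervised signal: both routes must agree on the absolute pose.
loss = np.linalg.norm(ref_a @ rel_a - ref_b @ rel_b)
print(loss)  # ~0 for consistent predictions
```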
arXiv Detail & Related papers (2020-11-01T19:24:27Z)
- Meta-Sim2: Unsupervised Learning of Scene Structure for Synthetic Data Generation [88.04759848307687]
In Meta-Sim2, we aim to learn the scene structure in addition to parameters, which is a challenging problem due to its discrete nature.
We use Reinforcement Learning to train our model, and design a feature space divergence between our synthesized and target images that is key to successful training.
We also show that this improves the downstream performance of an object detector trained on our generated dataset compared to other baseline simulation methods.
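The divergence is not detailed in the summary; a simple RBF-kernel MMD between feature sets, shown below as an assumed stand-in, conveys the idea of measuring a feature-space gap between synthesized and target images.

```python
# Assumed stand-in: RBF-kernel maximum mean discrepancy between features.
import numpy as np

def rbf_mmd(x: np.ndarray, y: np.ndarray, gamma: float = 0.5) -> float:
    """Squared MMD between (N, D) and (M, D) feature sets."""
    def k(a, b):
        d = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d)
    return k(x, x).mean() + k(y, y).mean() - 2.0 * k(x, y).mean()

rng = np.random.default_rng(5)
feat_synth = rng.standard_normal((64, 32))          # features of synthesized scenes
feat_target = rng.standard_normal((64, 32)) + 0.5   # features of target images
print(rbf_mmd(feat_synth, feat_target))             # larger gap -> larger divergence
```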
arXiv Detail & Related papers (2020-08-20T17:28:45Z)
- Stillleben: Realistic Scene Synthesis for Deep Learning in Robotics [33.30312206728974]
We describe a synthesis pipeline capable of producing training data for cluttered scene perception tasks.
Our approach arranges object meshes in physically realistic, dense scenes using physics simulation.
Our pipeline can be run online during training of a deep neural network.
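A rough sketch of physics-based arrangement using pybullet (an assumption; the paper uses its own pipeline): drop object meshes into a scene, step the simulation until they settle, and read back poses for rendering.

```python
# Illustrative physics-based scene arrangement with pybullet.
import random
import pybullet as p
import pybullet_data

p.connect(p.DIRECT)                          # headless physics server
p.setAdditionalSearchPath(pybullet_data.getDataPath())
p.setGravity(0, 0, -9.81)
p.loadURDF("plane.urdf")

random.seed(0)
for _ in range(10):                          # drop objects from random poses
    p.loadURDF("cube_small.urdf",
               basePosition=[random.uniform(-0.2, 0.2),
                             random.uniform(-0.2, 0.2),
                             random.uniform(0.3, 0.8)])

for _ in range(240):                         # let the pile settle (~1 s)
    p.stepSimulation()

# Settled poses could now feed a renderer to produce cluttered training views.
poses = [p.getBasePositionAndOrientation(i) for i in range(1, 11)]
print(poses[0])
```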
arXiv Detail & Related papers (2020-05-12T10:11:00Z)
This list is automatically generated from the titles and abstracts of the papers on this site.