Image2Garment: Simulation-ready Garment Generation from a Single Image
- URL: http://arxiv.org/abs/2601.09658v2
- Date: Thu, 15 Jan 2026 21:21:50 GMT
- Title: Image2Garment: Simulation-ready Garment Generation from a Single Image
- Authors: Selim Emir Can, Jan Ackermann, Kiyohiro Nakayama, Ruofan Liu, Tong Wu, Yang Zheng, Hugo Bertiche, Menglei Chai, Thabo Beeler, Gordon Wetzstein
- Abstract summary: We propose a vision-language model to infer material composition and fabric attributes from real images. We then train a lightweight predictor that maps these attributes to the corresponding physical fabric parameters. Experiments show that our estimator achieves superior accuracy in material composition estimation and fabric attribute prediction.
- Score: 52.37273643091814
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Estimating physically accurate, simulation-ready garments from a single image is challenging due to the absence of image-to-physics datasets and the ill-posed nature of this problem. Prior methods either require multi-view capture and expensive differentiable simulation or predict only garment geometry without the material properties required for realistic simulation. We propose a feed-forward framework that sidesteps these limitations by first fine-tuning a vision-language model to infer material composition and fabric attributes from real images, and then training a lightweight predictor that maps these attributes to the corresponding physical fabric parameters using a small dataset of material-physics measurements. Our approach introduces two new datasets (FTAG and T2P) and delivers simulation-ready garments from a single image without iterative optimization. Experiments show that our estimator achieves superior accuracy in material composition estimation and fabric attribute prediction, and by passing them through our physics parameter estimator, we further achieve higher-fidelity simulations compared to state-of-the-art image-to-garment methods.
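The abstract's second stage maps inferred fabric attributes to physical simulation parameters via a lightweight predictor trained on a small set of material-physics measurements. A minimal sketch of that idea follows; the attribute vocabularies, parameter names, toy measurements, and the linear least-squares model are all illustrative assumptions, not the paper's actual FTAG/T2P data or architecture.

```python
import numpy as np

# Illustrative attribute vocabularies; the paper's actual attribute set is richer.
MATERIALS = ["cotton", "polyester", "wool", "silk"]
WEAVES = ["plain", "twill", "knit"]
PARAM_NAMES = ["density", "stretch_stiffness", "bending_stiffness"]

def encode(material: str, weave: str) -> np.ndarray:
    """One-hot encode a (material, weave) attribute pair into a feature vector."""
    x = np.zeros(len(MATERIALS) + len(WEAVES))
    x[MATERIALS.index(material)] = 1.0
    x[len(MATERIALS) + WEAVES.index(weave)] = 1.0
    return x

class AttributeToPhysics:
    """Least-squares linear map from fabric attributes to physics parameters,
    standing in for the paper's lightweight learned predictor."""

    def fit(self, attribute_pairs, measured_params):
        X = np.stack([encode(m, w) for m, w in attribute_pairs])
        Y = np.asarray(measured_params)
        # Minimum-norm least-squares solution: X @ W ~= Y.
        self.W, *_ = np.linalg.lstsq(X, Y, rcond=None)
        return self

    def predict(self, material: str, weave: str) -> np.ndarray:
        return encode(material, weave) @ self.W

# Toy "measurement" dataset standing in for the material-physics measurements.
pairs = [("cotton", "plain"), ("polyester", "knit"),
         ("wool", "twill"), ("silk", "plain")]
params = [[0.15, 40.0, 1e-4], [0.10, 60.0, 5e-5],
          [0.30, 30.0, 4e-4], [0.08, 50.0, 2e-5]]

model = AttributeToPhysics().fit(pairs, params)
pred = model.predict("cotton", "plain")  # 3-vector of physics parameters
```

Once attributes come out of the vision-language stage, a regressor like this turns them into simulator inputs in a single feed-forward pass, which is what lets the overall pipeline avoid iterative differentiable-simulation optimization.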
Related papers
- MatE: Material Extraction from Single-Image via Geometric Prior [36.8533172704247]
MatE is a novel method for generating tileable PBR materials from a single image taken under unconstrained, real-world conditions. We demonstrate the efficacy and robustness of our approach, enabling users to create realistic materials from real-world images.
arXiv Detail & Related papers (2025-12-20T10:53:49Z) - From images to properties: a NeRF-driven framework for granular material parameter inversion [1.8231854497751137]
We introduce a novel framework that integrates Neural Radiance Fields (NeRF) with Material Point Method (MPM) simulation to infer granular material properties from visual observations. Our results demonstrate that friction angle can be estimated with an error within 2 degrees, highlighting the effectiveness of inverse analysis through purely visual observations.
arXiv Detail & Related papers (2025-07-11T20:15:59Z) - PhysFlow: Unleashing the Potential of Multi-modal Foundation Models and Video Diffusion for 4D Dynamic Physical Scene Simulation [9.306758077479472]
PhysFlow is a novel approach that leverages multi-modal foundation models and video diffusion to achieve enhanced 4D dynamic scene simulation. This integrated framework enables accurate prediction and realistic simulation of dynamic interactions in real-world scenarios.
arXiv Detail & Related papers (2024-11-21T18:55:23Z) - Alchemist: Parametric Control of Material Properties with Diffusion Models [51.63031820280475]
Our method capitalizes on the generative prior of text-to-image models known for photorealism.
We show the potential application of our model to material edited NeRFs.
arXiv Detail & Related papers (2023-12-05T18:58:26Z) - DiffAvatar: Simulation-Ready Garment Optimization with Differentiable Simulation [27.553678582454648]
Physical simulations can produce realistic motions for clothed humans, but they require high-quality garment assets with associated physical parameters for cloth simulations.
We propose DiffAvatar, a novel approach that performs body and garment co-optimization using differentiable simulation.
Our experiments demonstrate that our approach generates realistic clothing and body shape suitable for downstream applications.
arXiv Detail & Related papers (2023-11-20T21:20:37Z) - TexPose: Neural Texture Learning for Self-Supervised 6D Object Pose Estimation [55.94900327396771]
We introduce neural texture learning for 6D object pose estimation from synthetic data.
We learn to predict realistic texture of objects from real image collections.
We learn pose estimation from pixel-perfect synthetic data.
arXiv Detail & Related papers (2022-12-25T13:36:32Z) - Generic Lithography Modeling with Dual-band Optics-Inspired Neural Networks [52.200624127512874]
We introduce a dual-band optics-inspired neural network design that considers the optical physics underlying lithography.
Our approach yields the first published via/metal layer contour simulation at 1 nm²/pixel resolution with any tile size.
We also achieve an 85× simulation speedup over a traditional lithography simulator with only 1% accuracy loss.
arXiv Detail & Related papers (2022-03-12T08:08:50Z) - Task2Sim: Towards Effective Pre-training and Transfer from Synthetic Data [74.66568380558172]
We study the transferability of pre-trained models based on synthetic data generated by graphics simulators to downstream tasks.
We introduce Task2Sim, a unified model mapping downstream task representations to optimal simulation parameters.
It learns this mapping by training to find the set of best parameters on a set of "seen" tasks.
Once trained, it can then be used to predict best simulation parameters for novel "unseen" tasks in one shot.
arXiv Detail & Related papers (2021-11-30T19:25:27Z) - Visual design intuition: Predicting dynamic properties of beams from raw cross-section images [6.76432840291023]
We aim to mimic the human ability to acquire the intuition to estimate the performance of a design from visual inspection and experience alone.
We study the ability of convolutional neural networks to predict static and dynamic properties of cantilever beams directly from their raw cross-section images.
arXiv Detail & Related papers (2021-11-14T03:10:15Z) - DIB-R++: Learning to Predict Lighting and Material with a Hybrid Differentiable Renderer [78.91753256634453]
We consider the challenging problem of predicting intrinsic object properties from a single image by exploiting differentiable renderers.
In this work, we propose DIB-R++, a hybrid differentiable renderer which supports these effects by combining rasterization and ray-tracing.
Compared to more advanced physics-based differentiable renderers, DIB-R++ is highly performant due to its compact and expressive model.
arXiv Detail & Related papers (2021-10-30T01:59:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.