PersonalTailor: Personalizing 2D Pattern Design from 3D Garment Point
Clouds
- URL: http://arxiv.org/abs/2303.09695v2
- Date: Fri, 11 Aug 2023 20:07:48 GMT
- Title: PersonalTailor: Personalizing 2D Pattern Design from 3D Garment Point
Clouds
- Authors: Sauradip Nag, Anran Qi, Xiatian Zhu and Ariel Shamir
- Abstract summary: Garment pattern design aims to convert a 3D garment to the corresponding 2D panels and their sewing structure.
PersonalTailor is a personalized 2D pattern design method, where the user can input specific constraints or demands.
It first learns multi-modal panel embeddings based on unsupervised cross-modal association and attentive fusion.
It then predicts binary panel masks individually using a transformer encoder-decoder framework.
- Score: 59.617014796845865
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Garment pattern design aims to convert a 3D garment to the corresponding 2D
panels and their sewing structure. Existing methods rely either on template
fitting with heuristics and prior assumptions, or on model learning with
complicated shape parameterization. Importantly, neither approach allows
personalization of the output garment, for which demand is increasing.
To meet this demand, we introduce PersonalTailor: a personalized 2D pattern
design method in which the user can input specific constraints or demands (in
language or sketch) for personal 2D panel fabrication from 3D point clouds.
PersonalTailor first learns multi-modal panel embeddings based on
unsupervised cross-modal association and attentive fusion. It then predicts
binary panel masks individually using a transformer encoder-decoder framework.
Extensive experiments show that our PersonalTailor excels on both personalized
and standard pattern fabrication tasks.
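The abstract describes fusing point-cloud panel features with a user prompt via attentive fusion before mask prediction. Below is a minimal NumPy sketch of one such cross-attention fusion step; the function name, residual merge, and all dimensions are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attentive_fusion(point_emb, prompt_emb):
    """Cross-attention from panel (point-cloud) tokens to user-prompt tokens,
    followed by a residual merge -- one plausible way to fuse two modalities."""
    d_k = prompt_emb.shape[-1]
    scores = point_emb @ prompt_emb.T / np.sqrt(d_k)  # (P, T) affinities
    attn = softmax(scores, axis=-1)                   # each panel token attends over prompt tokens
    fused = point_emb + attn @ prompt_emb             # residual fusion, shape (P, D)
    return fused, attn

# toy example: 4 panel tokens, 3 prompt tokens, 8-dim embeddings
rng = np.random.default_rng(0)
panels = rng.normal(size=(4, 8))
prompt = rng.normal(size=(3, 8))
fused, attn = attentive_fusion(panels, prompt)
print(fused.shape, attn.shape)  # -> (4, 8) (4, 3)
```

In the paper's pipeline the fused embeddings would then feed a transformer encoder-decoder that predicts each binary panel mask; that stage is omitted here.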
Related papers
- DreamVTON: Customizing 3D Virtual Try-on with Personalized Diffusion Models [56.55549019625362]
Image-based 3D Virtual Try-ON (VTON) aims to sculpt the 3D human according to person and clothes images.
Recent text-to-3D methods achieve remarkable improvement in high-fidelity 3D human generation.
We propose a novel customizing 3D human try-on model, named DreamVTON, to separately optimize the geometry and texture of the 3D human.
arXiv Detail & Related papers (2024-07-23T14:25:28Z)
- Design2Cloth: 3D Cloth Generation from 2D Masks [34.80461276448817]
We propose Design2Cloth, a high fidelity 3D generative model trained on a real world dataset from more than 2000 subject scans.
Under a series of both qualitative and quantitative experiments, we showcase that Design2Cloth outperforms current state-of-the-art cloth generative models by a large margin.
arXiv Detail & Related papers (2024-04-03T12:32:13Z)
- PointSeg: A Training-Free Paradigm for 3D Scene Segmentation via Foundation Models [51.24979014650188]
We present PointSeg, a training-free paradigm that leverages off-the-shelf vision foundation models to address 3D scene perception tasks.
PointSeg can segment anything in 3D scene by acquiring accurate 3D prompts to align their corresponding pixels across frames.
Our approach significantly surpasses the state-of-the-art specialist training-free model by 14.1%, 12.3%, and 12.6% mAP on the ScanNet, ScanNet++, and KITTI-360 datasets.
arXiv Detail & Related papers (2024-03-11T03:28:20Z)
- SketchMetaFace: A Learning-based Sketching Interface for High-fidelity 3D Character Face Modeling [69.28254439393298]
SketchMetaFace is a sketching system targeting amateur users to model high-fidelity 3D faces in minutes.
We develop a novel learning-based method termed "Implicit and Depth Guided Mesh Modeling" (IDGMM).
It fuses the advantages of mesh, implicit, and depth representations to achieve high-quality results with high efficiency.
arXiv Detail & Related papers (2023-07-03T07:41:07Z)
- ISP: Multi-Layered Garment Draping with Implicit Sewing Patterns [57.176642106425895]
We introduce a garment representation model that addresses limitations of current approaches.
It is faster and yields higher quality reconstructions than purely implicit surface representations.
It supports rapid editing of garment shapes and texture by modifying individual 2D panels.
arXiv Detail & Related papers (2023-05-23T14:23:48Z)
- Cross-Modal 3D Shape Generation and Manipulation [62.50628361920725]
We propose a generic multi-modal generative model that couples the 2D modalities and implicit 3D representations through shared latent spaces.
We evaluate our framework on two representative 2D modalities of grayscale line sketches and rendered color images.
arXiv Detail & Related papers (2022-07-24T19:22:57Z)
- Sketch2PQ: Freeform Planar Quadrilateral Mesh Design via a Single Sketch [36.10997511325458]
We present a novel sketch-based system to bridge the concept design and digital modeling of freeform roof-like shapes.
Our system allows the user to sketch the surface boundary and contour lines under axonometric projection.
We propose a deep neural network to infer in real-time the underlying surface shape along with a dense conjugate direction field.
arXiv Detail & Related papers (2022-01-23T21:09:59Z)
- Generating Datasets of 3D Garments with Sewing Patterns [10.729374293332281]
We create the first large-scale synthetic dataset of 3D garment models with their sewing patterns.
The dataset contains more than 20000 garment design variations produced from 19 different base types.
arXiv Detail & Related papers (2021-09-12T23:03:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.