From Air to Wear: Personalized 3D Digital Fashion with AR/VR Immersive 3D Sketching
- URL: http://arxiv.org/abs/2505.09998v1
- Date: Thu, 15 May 2025 06:22:24 GMT
- Title: From Air to Wear: Personalized 3D Digital Fashion with AR/VR Immersive 3D Sketching
- Authors: Ying Zang, Yuanqi Hu, Xinyu Chen, Yuxia Xu, Suhui Wang, Chunan Yu, Lanyun Zhu, Deyi Ji, Xin Xu, Tianrun Chen
- Abstract summary: We introduce a 3D sketch-driven 3D garment generation framework that empowers ordinary users to create high-quality digital clothing. By combining a conditional diffusion model, a sketch encoder trained in a shared latent space, and an adaptive curriculum learning strategy, our system interprets imprecise, free-hand input and produces realistic, personalized garments. To address the scarcity of training data, we also introduce KO3DClothes, a new dataset of paired 3D garments and user-created sketches.
- Score: 17.901040166369487
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In the era of immersive consumer electronics, such as AR/VR headsets and smart devices, people increasingly seek ways to express their identity through virtual fashion. However, existing 3D garment design tools remain inaccessible to everyday users due to steep technical barriers and limited data. In this work, we introduce a 3D sketch-driven 3D garment generation framework that empowers ordinary users - even those without design experience - to create high-quality digital clothing through simple 3D sketches in AR/VR environments. By combining a conditional diffusion model, a sketch encoder trained in a shared latent space, and an adaptive curriculum learning strategy, our system interprets imprecise, free-hand input and produces realistic, personalized garments. To address the scarcity of training data, we also introduce KO3DClothes, a new dataset of paired 3D garments and user-created sketches. Extensive experiments and user studies confirm that our method significantly outperforms existing baselines in both fidelity and usability, demonstrating its promise for democratized fashion design on next-generation consumer platforms.
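To make the described pipeline concrete, here is a minimal, hypothetical PyTorch sketch of the three components the abstract names: a sketch encoder mapping raw 3D strokes into a latent code, a conditional diffusion denoiser, and an adaptive curriculum that gradually increases input imprecision. All module names, dimensions, schedules, and the garment latent representation are illustrative assumptions, not the paper's actual architecture.

```python
# Hypothetical sketch of the three components named in the abstract:
# a 3D-sketch encoder, a conditional diffusion denoiser, and an
# adaptive curriculum that increases input imprecision over training.
import torch
import torch.nn as nn

class SketchEncoder(nn.Module):
    """Encodes N 3D stroke points into one latent code (PointNet-style)."""
    def __init__(self, latent_dim: int = 256):
        super().__init__()
        self.point_mlp = nn.Sequential(
            nn.Linear(3, 128), nn.ReLU(),
            nn.Linear(128, latent_dim),
        )

    def forward(self, points: torch.Tensor) -> torch.Tensor:
        # points: (batch, num_points, 3) -> (batch, latent_dim) via max-pool
        return self.point_mlp(points).max(dim=1).values

class ConditionalDenoiser(nn.Module):
    """Predicts the noise added to a garment latent, given the sketch code."""
    def __init__(self, garment_dim: int = 512, latent_dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(garment_dim + latent_dim + 1, 512), nn.SiLU(),
            nn.Linear(512, garment_dim),
        )

    def forward(self, noisy_garment, sketch_code, t):
        # t: (batch, 1) normalized diffusion timestep
        return self.net(torch.cat([noisy_garment, sketch_code, t], dim=-1))

def curriculum_jitter(points: torch.Tensor, epoch: int, max_epochs: int):
    """Adaptive curriculum: start from clean sketches and gradually add
    free-hand-style jitter so the model learns to tolerate imprecision."""
    sigma = 0.05 * (epoch / max_epochs)  # assumed linear schedule
    return points + sigma * torch.randn_like(points)

# One simplified DDPM-style training step (toy noise schedule):
encoder, denoiser = SketchEncoder(), ConditionalDenoiser()
points = torch.randn(4, 1024, 3)    # a batch of raw 3D sketch points
garment = torch.randn(4, 512)       # ground-truth garment latents
t = torch.rand(4, 1)                # random timesteps in [0, 1)
noise = torch.randn_like(garment)
noisy = torch.sqrt(1 - t) * garment + torch.sqrt(t) * noise
code = encoder(curriculum_jitter(points, epoch=10, max_epochs=100))
loss = ((denoiser(noisy, code, t) - noise) ** 2).mean()
loss.backward()
```

The "shared latent space" from the abstract would additionally require aligning sketch codes with garment latents (e.g., via a contrastive or reconstruction objective); that alignment step is omitted here for brevity.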
Related papers
- Virtual Trial Room with Computer Vision and Machine Learning [0.0]
Customers often hesitate to purchase wearable products due to a lack of certainty about fit and suitability. A platform called the Virtual Trial Room with Computer Vision and Machine Learning is designed, enabling customers to easily check whether a product will fit and suit them. An AI-generated 3D model of the human head was created from a single 2D image using the DECA model. This 3D model was then superimposed with a custom-made 3D model of glasses based on real-world measurements and fitted over the human head.
arXiv Detail & Related papers (2024-12-14T06:50:10Z)
- Magic3DSketch: Create Colorful 3D Models From Sketch-Based 3D Modeling Guided by Text and Language-Image Pre-Training [2.9600148687385786]
Traditional methods like Computer-Aided Design (CAD) are often too labor-intensive and skill-demanding for novice users.
Our proposed method, Magic3DSketch, employs a novel technique that encodes sketches to predict a 3D mesh, guided by text descriptions.
Our method is also more useful and offers a higher degree of controllability than existing text-to-3D approaches.
arXiv Detail & Related papers (2024-07-27T09:59:13Z)
- Design2Cloth: 3D Cloth Generation from 2D Masks [34.80461276448817]
We propose Design2Cloth, a high-fidelity 3D generative model trained on a real-world dataset of more than 2,000 subject scans.
Under a series of both qualitative and quantitative experiments, we showcase that Design2Cloth outperforms current state-of-the-art cloth generative models by a large margin.
arXiv Detail & Related papers (2024-04-03T12:32:13Z)
- PonderV2: Pave the Way for 3D Foundation Model with A Universal Pre-training Paradigm [111.16358607889609]
We introduce a novel universal 3D pre-training framework designed to facilitate the acquisition of efficient 3D representations. For the first time, PonderV2 achieves state-of-the-art performance on 11 indoor and outdoor benchmarks, demonstrating its effectiveness.
arXiv Detail & Related papers (2023-10-12T17:59:57Z)
- Deep3DSketch+: Rapid 3D Modeling from Single Free-hand Sketches [15.426513559370086]
We introduce a novel end-to-end approach, Deep3DSketch+, which performs 3D modeling from only a single free-hand sketch, without requiring multiple sketches or view information.
Experiments demonstrate the effectiveness of our approach, which achieves state-of-the-art (SOTA) performance on both synthetic and real datasets.
arXiv Detail & Related papers (2023-09-22T17:12:13Z)
- SketchMetaFace: A Learning-based Sketching Interface for High-fidelity 3D Character Face Modeling [69.28254439393298]
SketchMetaFace is a sketching system that enables amateur users to model high-fidelity 3D faces in minutes.
We develop a novel learning-based method termed "Implicit and Depth Guided Mesh Modeling" (IDGMM), which fuses the advantages of mesh, implicit, and depth representations to achieve high-quality results with high efficiency.
arXiv Detail & Related papers (2023-07-03T07:41:07Z)
- HyperStyle3D: Text-Guided 3D Portrait Stylization via Hypernetworks [101.36230756743106]
This paper is inspired by the success of 3D-aware GANs that bridge 2D and 3D domains with 3D fields as the intermediate representation for rendering 2D images.
We propose a novel method, dubbed HyperStyle3D, based on 3D-aware GANs for 3D portrait stylization.
arXiv Detail & Related papers (2023-04-19T07:22:05Z)
- Make Your Brief Stroke Real and Stereoscopic: 3D-Aware Simplified Sketch to Portrait Generation [51.64832538714455]
Existing studies only generate portraits in the 2D plane with fixed views, making the results less vivid.
In this paper, we present Stereoscopic Simplified Sketch-to-Portrait (SSSP), which explores the possibility of creating stereoscopic 3D-aware portraits.
Our key insight is to design sketch-aware constraints that can fully exploit the prior knowledge of a tri-plane-based 3D-aware generative model.
arXiv Detail & Related papers (2023-02-14T06:28:42Z)
- Interactive Sketching of Mannequin Poses [3.222802562733787]
3D body poses are necessary for various downstream applications.
We propose a machine-learning model for inferring the 3D pose of a CG mannequin from sketches of humans drawn in a cylinder-person style.
Our unique approach to vector graphics training data underpins our integrated ML-and-kinematics system.
arXiv Detail & Related papers (2022-12-14T08:45:51Z)
- Towards 3D VR-Sketch to 3D Shape Retrieval [128.47604316459905]
We study the use of 3D sketches as an input modality and advocate a VR scenario in which retrieval is conducted. As a first stab at this new 3D VR-sketch to 3D shape retrieval problem, we make four contributions. (A minimal embedding-retrieval sketch of this setup appears after this list.)
arXiv Detail & Related papers (2022-09-20T22:04:31Z)
- Fine-Grained VR Sketching: Dataset and Insights [140.0579567561475]
We present the first fine-grained dataset of 1,497 3D VR sketch and 3D shape pairs of a chair category with large shape diversity.
Our dataset supports the recent trend in the sketch community toward fine-grained data analysis.
arXiv Detail & Related papers (2022-09-20T21:30:54Z)
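The two VR-sketch entries above ("Towards 3D VR-Sketch to 3D Shape Retrieval" and "Fine-Grained VR Sketching") frame retrieval as nearest-neighbor search over sketch and shape embeddings in a joint space. The following is a minimal sketch of that setup, assuming unit-normalized embeddings and a toy mean-pool encoder; the projection, data, and dimensions are placeholders rather than those papers' actual models.

```python
# Hypothetical nearest-neighbor retrieval in a joint embedding space,
# as in 3D-VR-sketch-to-shape retrieval. Encoder and data are
# illustrative placeholders, not the papers' models.
import torch
import torch.nn.functional as F

def embed(points: torch.Tensor, proj: torch.Tensor) -> torch.Tensor:
    """Toy encoder: mean-pool 3D points, then a fixed linear projection.
    A real system would use learned sketch/shape encoders here."""
    return F.normalize(points.mean(dim=1) @ proj, dim=-1)

torch.manual_seed(0)
proj = torch.randn(3, 128)            # shared projection (placeholder)
gallery = torch.randn(1497, 2048, 3)  # 1,497 shapes (dataset size above)
query = torch.randn(1, 1024, 3)       # one 3D VR sketch

gallery_emb = embed(gallery, proj)    # (1497, 128), unit-norm
query_emb = embed(query, proj)        # (1, 128)

# Cosine similarity reduces to a dot product on unit-norm embeddings.
scores = query_emb @ gallery_emb.T    # (1, 1497)
topk = scores.topk(k=5, dim=-1).indices  # 5 best-matching shape indices
print(topk)
```

A fine-grained variant, as in the 1,497-pair chair dataset above, would use the same ranking machinery with encoders trained on paired sketch-shape data.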
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.