PICTURE: PhotorealistIC virtual Try-on from UnconstRained dEsigns
- URL: http://arxiv.org/abs/2312.04534v1
- Date: Thu, 7 Dec 2023 18:53:18 GMT
- Title: PICTURE: PhotorealistIC virtual Try-on from UnconstRained dEsigns
- Authors: Shuliang Ning, Duomin Wang, Yipeng Qin, Zirong Jin, Baoyuan Wang,
Xiaoguang Han
- Abstract summary: We propose a novel virtual try-on from unconstrained designs (ucVTON) task to enable synthesis of personalized composite clothing on input human images.
Unlike prior work constrained by specific input types, our method allows flexible specification of style (text or image) and texture (full garment, cropped sections, or texture patches) conditions.
- Score: 25.209863457090506
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we propose a novel virtual try-on from unconstrained designs
(ucVTON) task to enable photorealistic synthesis of personalized composite
clothing on input human images. Unlike prior work constrained by specific input
types, our method allows flexible specification of style (text or image) and
texture (full garment, cropped sections, or texture patches) conditions. To
address the entanglement challenge when using full garment images as
conditions, we develop a two-stage pipeline with explicit disentanglement of
style and texture. In the first stage, we generate a human parsing map
reflecting the desired style conditioned on the input. In the second stage, we
composite textures onto the parsing map areas based on the texture input. To
represent complex and non-stationary textures, which previous fashion editing works
could not handle, we are the first to propose extracting hierarchical and balanced
CLIP features and applying position encoding in VTON. Experiments
demonstrate superior synthesis quality and personalization enabled by our
method. The flexible control over style and texture mixing brings virtual
try-on to a new level of user experience for online shopping and fashion
design.
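As a rough illustration of the two-stage pipeline described in the abstract, the following PyTorch-style sketch shows how a first stage could predict a style-conditioned human parsing map and a second stage could composite texture features onto the parsed regions. All module names, layer choices, and the single-vector style/texture embeddings are assumptions made for exposition; this is not the authors' released code.

    # Hypothetical sketch of the two-stage ucVTON pipeline described above.
    # Module names and interfaces are illustrative assumptions, not the paper's code.
    import torch
    import torch.nn as nn

    class StyleToParsing(nn.Module):
        """Stage 1: predict a human parsing map from the person image and a style condition."""
        def __init__(self, style_dim=512, num_classes=20):
            super().__init__()
            self.backbone = nn.Sequential(          # stand-in for a real encoder-decoder
                nn.Conv2d(3 + style_dim, 64, 3, padding=1), nn.ReLU(),
                nn.Conv2d(64, num_classes, 1),
            )

        def forward(self, person_img, style_emb):
            # Broadcast the style embedding (from text or a garment image) over the spatial grid.
            b, _, h, w = person_img.shape
            style_map = style_emb[:, :, None, None].expand(b, -1, h, w)
            return self.backbone(torch.cat([person_img, style_map], dim=1))  # parsing logits

    class TextureCompositor(nn.Module):
        """Stage 2: composite texture features onto the clothing regions of the parsing map."""
        def __init__(self, tex_dim=512, num_classes=20):
            super().__init__()
            self.decoder = nn.Sequential(            # stand-in for the synthesis network
                nn.Conv2d(num_classes + tex_dim, 64, 3, padding=1), nn.ReLU(),
                nn.Conv2d(64, 3, 1), nn.Tanh(),
            )

        def forward(self, parsing_logits, texture_emb):
            parsing = parsing_logits.softmax(dim=1)
            b, _, h, w = parsing.shape
            tex_map = texture_emb[:, :, None, None].expand(b, -1, h, w)
            return self.decoder(torch.cat([parsing, tex_map], dim=1))  # try-on image

    # Usage: style_emb could come from a text or image encoder, and texture_emb from the
    # hierarchical CLIP features of a garment, cropped section, or texture patch.
    person = torch.randn(1, 3, 256, 192)
    style_emb, texture_emb = torch.randn(1, 512), torch.randn(1, 512)
    tryon = TextureCompositor()(StyleToParsing()(person, style_emb), texture_emb)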
Related papers
- FabricDiffusion: High-Fidelity Texture Transfer for 3D Garments Generation from In-The-Wild Clothing Images [56.63824638417697]
FabricDiffusion is a method for transferring fabric textures from a single clothing image to 3D garments of arbitrary shapes.
We show that FabricDiffusion can transfer various features from a single clothing image including texture patterns, material properties, and detailed prints and logos.
arXiv Detail & Related papers (2024-10-02T17:57:12Z)
- Compositional Neural Textures [25.885557234297835]
This work introduces a fully unsupervised approach for representing textures using a compositional neural model.
We represent each texton as a 2D Gaussian function whose spatial support approximates its shape, and an associated feature that encodes its detailed appearance.
This approach enables a wide range of applications, including transferring appearance from an image texture to another image, diversifying textures, revealing/modifying texture variations, edit propagation, texture animation, and direct texton manipulation.
arXiv Detail & Related papers (2024-04-18T21:09:34Z)
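To make the texton representation above concrete, here is a minimal sketch of a 2D Gaussian texton: a center, a covariance describing its spatial support, and an appearance feature weighted by that support. The parameterization and the feature dimension are assumptions based on the summary, not the paper's implementation.

    # Hypothetical sketch: a texton as a 2D Gaussian (center + covariance) carrying an
    # appearance feature, splatted onto an image grid. Details are assumed.
    import numpy as np

    def gaussian_texton_weight(grid_xy, mean, cov):
        """Evaluate an unnormalized 2D Gaussian at every pixel location."""
        diff = grid_xy - mean                      # (H, W, 2)
        mahal = np.einsum("hwi,ij,hwj->hw", diff, np.linalg.inv(cov), diff)
        return np.exp(-0.5 * mahal)                # (H, W), spatial support of the texton

    H, W = 64, 64
    ys, xs = np.mgrid[0:H, 0:W]
    grid = np.stack([xs, ys], axis=-1).astype(np.float32)

    mean = np.array([32.0, 20.0])                  # texton center
    cov = np.array([[30.0, 5.0], [5.0, 10.0]])     # anisotropic spatial support (shape)
    feature = np.random.randn(8)                   # appearance feature (dimension assumed)

    support = gaussian_texton_weight(grid, mean, cov)      # (64, 64)
    feature_map = support[..., None] * feature             # (64, 64, 8): weighted appearance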
- TextureDreamer: Image-guided Texture Synthesis through Geometry-aware Diffusion [64.49276500129092]
TextureDreamer is an image-guided texture synthesis method.
It can transfer relightable textures from a small number of input images to target 3D shapes across arbitrary categories.
arXiv Detail & Related papers (2024-01-17T18:55:49Z)
- A Two-stage Personalized Virtual Try-on Framework with Shape Control and Texture Guidance [7.302929117437442]
This paper proposes a new personalized virtual try-on model (PE-VITON), which uses two stages (shape control and texture guidance) to decouple clothing attributes.
The proposed model effectively addresses common failures of traditional try-on methods: poorly reconstructed clothing folds, degraded generation under complex human poses, blurred clothing edges, and unclear texture styles.
arXiv Detail & Related papers (2023-12-24T13:32:55Z)
- ConTex-Human: Free-View Rendering of Human from a Single Image with Texture-Consistent Synthesis [49.28239918969784]
We introduce a texture-consistent back view synthesis module that can transfer the reference image content to the back view.
We also propose a visibility-aware patch consistency regularization for texture mapping and refinement combined with the synthesized back view texture.
arXiv Detail & Related papers (2023-11-28T13:55:53Z)
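As a hedged sketch of what a visibility-aware patch consistency term like the one mentioned above might look like, the snippet below compares patches of a mapped texture against the synthesized back-view texture, down-weighting patches with low visibility. The loss form, pooling, and visibility weighting are assumptions for illustration, not the paper's formulation.

    # Hypothetical sketch of a visibility-weighted patch consistency loss.
    import torch
    import torch.nn.functional as F

    def patch_consistency_loss(mapped_tex, synth_tex, visibility, patch=8):
        """Compare non-overlapping patches of the mapped texture with the synthesized
        back-view texture, weighting each patch by how visible it is."""
        mapped_p = F.avg_pool2d(mapped_tex, patch)         # (B, C, H/p, W/p)
        synth_p = F.avg_pool2d(synth_tex, patch)
        vis_p = F.avg_pool2d(visibility, patch)            # (B, 1, H/p, W/p), in [0, 1]
        per_patch = (mapped_p - synth_p).abs().mean(dim=1, keepdim=True)
        return (vis_p * per_patch).sum() / vis_p.sum().clamp(min=1e-6)

    mapped = torch.rand(1, 3, 128, 128)       # texture mapped from the reference view
    synth = torch.rand(1, 3, 128, 128)        # synthesized back-view texture
    vis = torch.rand(1, 1, 128, 128)          # per-pixel visibility mask
    loss = patch_consistency_loss(mapped, synth, vis)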
- Controllable Person Image Synthesis with Spatially-Adaptive Warped Normalization [72.65828901909708]
Controllable person image generation aims to produce realistic human images with desirable attributes.
We introduce a novel Spatially-Adaptive Warped Normalization (SAWN), which integrates a learned flow-field to warp modulation parameters.
We propose a novel self-training part replacement strategy to refine the pretrained model for the texture-transfer task.
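The core idea of SAWN, warping modulation parameters with a learned flow field before applying them, can be sketched roughly as follows. The SPADE-style scale/shift modulation, the flow convention, and the use of grid_sample are assumptions for illustration, not the authors' code.

    # Hypothetical sketch of flow-field-warped modulation (the idea behind SAWN).
    import torch
    import torch.nn.functional as F

    def warp_modulation(gamma, beta, flow):
        """Warp per-pixel modulation maps (gamma, beta) with a sampling grid.

        gamma, beta: (B, C, H, W) modulation maps predicted from the source layout.
        flow:        (B, H, W, 2) sampling grid in [-1, 1] (grid_sample convention).
        """
        gamma_w = F.grid_sample(gamma, flow, align_corners=False)
        beta_w = F.grid_sample(beta, flow, align_corners=False)
        return gamma_w, beta_w

    def modulated_norm(feat, gamma, beta, flow):
        """Normalize features, then modulate with the warped scale and shift."""
        feat = F.instance_norm(feat)
        gamma_w, beta_w = warp_modulation(gamma, beta, flow)
        return feat * (1 + gamma_w) + beta_w

    B, C, H, W = 1, 64, 32, 32
    feat = torch.randn(B, C, H, W)
    gamma, beta = torch.randn(B, C, H, W), torch.randn(B, C, H, W)
    # Identity grid as a placeholder for a learned flow field.
    flow = F.affine_grid(torch.eye(2, 3).unsqueeze(0), (B, C, H, W), align_corners=False)
    out = modulated_norm(feat, gamma, beta, flow)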
arXiv Detail & Related papers (2021-05-31T07:07:44Z)
- PISE: Person Image Synthesis and Editing with Decoupled GAN [64.70360318367943]
We propose PISE, a novel two-stage generative model for Person Image Synthesis and Editing.
For human pose transfer, we first synthesize a human parsing map aligned with the target pose to represent the shape of clothing.
To decouple the shape and style of clothing, we propose joint global and local per-region encoding and normalization.
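A minimal sketch of per-region normalization in the spirit of the local encoding described above: features are normalized and then modulated with one style vector per parsing region. The statistics, interfaces, and region count are assumed purely for illustration.

    # Hypothetical sketch of per-region modulation driven by a parsing map (the idea
    # behind decoupling clothing shape and style). Interfaces are illustrative only.
    import torch
    import torch.nn.functional as F

    def per_region_modulation(feat, parsing, region_styles):
        """Modulate normalized features with one style vector per parsing region.

        feat:          (B, C, H, W) image features.
        parsing:       (B, K, H, W) one-hot (or soft) parsing map with K regions.
        region_styles: (B, K, C) one style vector per region (e.g. from a style encoder).
        """
        feat = F.instance_norm(feat)
        # Broadcast each region's style vector over the pixels belonging to that region.
        style_map = torch.einsum("bkhw,bkc->bchw", parsing, region_styles)
        return feat * (1 + style_map)

    B, C, K, H, W = 1, 32, 20, 64, 48
    feat = torch.randn(B, C, H, W)
    parsing = F.one_hot(torch.randint(0, K, (B, H, W)), K).permute(0, 3, 1, 2).float()
    styles = torch.randn(B, K, C)
    out = per_region_modulation(feat, parsing, styles)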
arXiv Detail & Related papers (2021-03-06T04:32:06Z)
- Region-adaptive Texture Enhancement for Detailed Person Image Synthesis [86.69934638569815]
RATE-Net is a novel framework for synthesizing person images with sharp texture details.
The proposed framework leverages an additional texture enhancing module to extract appearance information from the source image.
Experiments on the DeepFashion benchmark dataset demonstrate the superiority of our framework over existing networks.
arXiv Detail & Related papers (2020-05-26T02:33:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.