DL-EWF: Deep Learning Empowering Women's Fashion with Grounded-Segment-Anything Segmentation for Body Shape Classification
- URL: http://arxiv.org/abs/2404.04891v1
- Date: Sun, 7 Apr 2024 09:17:00 GMT
- Title: DL-EWF: Deep Learning Empowering Women's Fashion with Grounded-Segment-Anything Segmentation for Body Shape Classification
- Authors: Fatemeh Asghari, Mohammad Reza Soheili, Faezeh Gholamrezaie
- Abstract summary: One of the most pressing challenges in the fashion industry is the mismatch between individuals' body shapes and the garments they purchase.
Traditional methods for determining human body shape are limited due to their low accuracy, high costs, and time-consuming nature.
New approaches, utilizing digital imaging and deep neural networks (DNNs), have been introduced to identify human body shape.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The fashion industry plays a pivotal role in the global economy, and addressing fundamental issues within the industry is crucial for developing innovative solutions. One of the most pressing challenges in the fashion industry is the mismatch between individuals' body shapes and the garments they purchase. This issue is particularly prevalent among individuals with non-ideal body shapes, exacerbating the challenges they face. Considering inter-individual variability in body shape is essential for designing and producing garments that are widely accepted by consumers. Traditional methods for determining human body shape are limited due to their low accuracy, high cost, and time-consuming nature. New approaches, utilizing digital imaging and deep neural networks (DNNs), have been introduced to identify human body shape. In this study, the Style4BodyShape dataset is used to classify body shapes into five categories: Rectangle, Triangle, Inverted Triangle, Hourglass, and Apple. First, the person's body shape is segmented from the image, disregarding the surroundings and background. Then, various pre-trained models, such as ResNet18, ResNet34, ResNet50, VGG16, VGG19, and Inception v3, are used to classify the segmentation results. Among these pre-trained models, Inception v3 demonstrates superior performance in terms of both F1-score and accuracy compared to the other models.
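The abstract compares the pre-trained classifiers by F1-score over the five body-shape categories. As a minimal illustration of that metric only (not code from the paper), the following sketch computes a macro-averaged F1 in plain Python; the class labels match the paper's five categories, while the function itself is a generic, assumed implementation:

```python
CLASSES = ["Rectangle", "Triangle", "Inverted Triangle", "Hourglass", "Apple"]

def macro_f1(y_true, y_pred, classes=CLASSES):
    """Macro-averaged F1: the unweighted mean of per-class F1 scores."""
    f1_scores = []
    for c in classes:
        # Per-class true positives, false positives, false negatives.
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if (tp + fp) else 0.0
        recall = tp / (tp + fn) if (tp + fn) else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if (precision + recall) else 0.0)
        f1_scores.append(f1)
    return sum(f1_scores) / len(f1_scores)
```

Because the average is unweighted across classes, macro F1 penalizes a model that does well on common shapes but poorly on rare ones, which is why it is a common companion to plain accuracy in multi-class comparisons like this one.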
Related papers
- Leveraging Anthropometric Measurements to Improve Human Mesh Estimation and Ensure Consistent Body Shapes [12.932412290302258]
A2B is a model that converts anthropometric measurements to body shape parameters of human mesh models.
We show that finetuned SOTA 3D human pose estimation (HPE) models outperform HME models regarding the precision of the estimated keypoints.
We also show that replacing the body shape parameters estimated by HME models with A2B model outputs not only increases the performance of these HME models, but also leads to consistent body shapes.
arXiv Detail & Related papers (2024-09-26T09:30:37Z)
- HUMOS: Human Motion Model Conditioned on Body Shape [54.20419874234214]
We introduce a new approach to develop a generative motion model based on body shape.
We show that it's possible to train this model using unpaired data.
The resulting model generates diverse, physically plausible, and dynamically stable human motions.
arXiv Detail & Related papers (2024-09-05T23:50:57Z)
- ShapeBoost: Boosting Human Shape Estimation with Part-Based Parameterization and Clothing-Preserving Augmentation [58.50613393500561]
We propose ShapeBoost, a new human shape recovery framework.
It achieves pixel-level alignment even for rare body shapes and high accuracy for people wearing different types of clothes.
arXiv Detail & Related papers (2024-03-02T23:40:23Z)
- Learning Clothing and Pose Invariant 3D Shape Representation for Long-Term Person Re-Identification [16.797826602710035]
We aim to extend LT-ReID beyond pedestrian recognition to include a wider range of real-world human activities.
This setting poses additional challenges due to the geometric misalignment and appearance ambiguity caused by the diversity of human pose and clothing.
We propose a new approach 3DInvarReID for disentangling identity from non-identity components.
arXiv Detail & Related papers (2023-08-21T11:51:46Z)
- Human Body Shape Classification Based on a Single Image [1.3764085113103217]
We present a methodology to classify human body shape from a single image.
The proposed methodology does not require 3D body recreation as a result of classification.
The resultant body shape classification can be utilised in a variety of downstream tasks.
arXiv Detail & Related papers (2023-05-29T11:47:43Z)
- ALiSNet: Accurate and Lightweight Human Segmentation Network for Fashion E-Commerce [57.876602177247534]
Smartphones provide a convenient way for users to capture images of their body.
We create a new segmentation model by simplifying Semantic FPN with PointRend.
We finetune this model on a high-quality dataset of humans in a restricted set of poses relevant for our application.
arXiv Detail & Related papers (2023-04-15T11:06:32Z)
- FitGAN: Fit- and Shape-Realistic Generative Adversarial Networks for Fashion [5.478764356647437]
We present FitGAN, a generative adversarial model that accounts for garments' entangled size and fit characteristics at scale.
Our model learns disentangled item representations and generates realistic images reflecting the true fit and shape properties of fashion articles.
arXiv Detail & Related papers (2022-06-23T15:10:28Z)
- LatentHuman: Shape-and-Pose Disentangled Latent Representation for Human Bodies [78.17425779503047]
We propose a novel neural implicit representation for the human body.
It is fully differentiable and optimizable with disentangled shape and pose latent spaces.
Our model can be trained and fine-tuned directly on non-watertight raw data with well-designed losses.
arXiv Detail & Related papers (2021-11-30T04:10:57Z)
- Detailed Avatar Recovery from Single Image [50.82102098057822]
This paper presents a novel framework to recover a detailed avatar from a single image.
We use the deep neural networks to refine the 3D shape in a Hierarchical Mesh Deformation framework.
Our method can restore detailed human body shapes with complete textures beyond skinned models.
arXiv Detail & Related papers (2021-08-06T03:51:26Z)
- LEAP: Learning Articulated Occupancy of People [56.35797895609303]
We introduce LEAP (LEarning Articulated occupancy of People), a novel neural occupancy representation of the human body.
Given a set of bone transformations and a query point in space, LEAP first maps the query point to a canonical space via learned linear blend skinning (LBS) functions.
LEAP efficiently queries the occupancy value via an occupancy network that models accurate identity- and pose-dependent deformations in the canonical space.
arXiv Detail & Related papers (2021-04-14T13:41:56Z)
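The LEAP entry above refers to linear blend skinning (LBS), in which a point is deformed by a weight-blended combination of rigid bone transformations. The following plain-Python sketch shows the forward LBS blend only; it is illustrative, not from the paper, and LEAP's learned inverse-LBS weight functions and occupancy network are not reproduced here:

```python
def lbs_point(point, bone_transforms, weights):
    """Forward linear blend skinning for one 3D point.

    point: [x, y, z]
    bone_transforms: list of (R, t) pairs; R is a 3x3 rotation as row lists,
        t is a 3-vector translation.
    weights: per-bone skinning weights, assumed to sum to 1.
    """
    out = [0.0, 0.0, 0.0]
    for (rot, trans), w in zip(bone_transforms, weights):
        for r in range(3):
            # Apply row r of the rigid transform (R @ point + t), blended by w.
            out[r] += w * (sum(rot[r][c] * point[c] for c in range(3)) + trans[r])
    return out
```

With identity rotations and zero translations the point is returned unchanged; methods like LEAP must invert this blended mapping to send a posed query point back to a canonical space before evaluating occupancy.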
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.