Hairmony: Fairness-aware hairstyle classification
- URL: http://arxiv.org/abs/2410.11528v1
- Date: Tue, 15 Oct 2024 12:00:36 GMT
- Title: Hairmony: Fairness-aware hairstyle classification
- Authors: Givi Meishvili, James Clemoes, Charlie Hewitt, Zafiirah Hosenie, Xian Xiao, Martin de La Gorce, Tibor Takacs, Tadas Baltrusaitis, Antonio Criminisi, Chyna McRae, Nina Jablonski, Marta Wilczkowiak
- Abstract summary: We present a method for prediction of a person's hairstyle from a single image.
We use only synthetic data to train our models.
We introduce a novel hairstyle taxonomy developed in collaboration with a diverse group of domain experts.
- Score: 10.230933455074634
- Abstract: We present a method for prediction of a person's hairstyle from a single image. Despite growing use cases in user digitization and enrollment for virtual experiences, available methods are limited, particularly in the range of hairstyles they can capture. Human hair is extremely diverse and lacks any universally accepted description or categorization, making this a challenging task. Most current methods rely on parametric models of hair at a strand level. These approaches, while very promising, are not yet able to represent short, frizzy, coily hair and gathered hairstyles. We instead choose a classification approach which can represent the diversity of hairstyles required for a truly robust and inclusive system. Previous classification approaches have been restricted by poorly labeled data that lacks diversity, imposing constraints on the usefulness of any resulting enrollment system. We use only synthetic data to train our models. This allows for explicit control of diversity of hairstyle attributes, hair colors, facial appearance, poses, environments and other parameters. It also produces noise-free ground-truth labels. We introduce a novel hairstyle taxonomy developed in collaboration with a diverse group of domain experts which we use to balance our training data, supervise our model, and directly measure fairness. We annotate our synthetic training data and a real evaluation dataset using this taxonomy and release both to enable comparison of future hairstyle prediction approaches. We employ an architecture based on a pre-trained feature extraction network in order to improve generalization of our method to real data and predict taxonomy attributes as an auxiliary task to improve accuracy. Results show our method to be significantly more robust for challenging hairstyles than recent parametric approaches.
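The abstract's core recipe, balancing training data over taxonomy attributes and measuring fairness directly as per-group accuracy, can be illustrated with a small sketch. The attribute names, group labels, and helper functions below are hypothetical stand-ins for illustration, not the authors' implementation.

```python
from collections import Counter, defaultdict
import random

def balanced_sample(examples, attribute, n_per_group, rng=random.Random(0)):
    """Draw up to n_per_group examples per attribute value (hypothetical helper)."""
    by_group = defaultdict(list)
    for ex in examples:
        by_group[ex[attribute]].append(ex)
    sample = []
    for _, items in sorted(by_group.items()):
        sample.extend(rng.sample(items, min(n_per_group, len(items))))
    return sample

def per_group_accuracy(examples, attribute):
    """Fairness measure: classification accuracy within each attribute group."""
    correct, total = Counter(), Counter()
    for ex in examples:
        total[ex[attribute]] += 1
        correct[ex[attribute]] += int(ex["pred"] == ex["label"])
    return {g: correct[g] / total[g] for g in total}

# Toy annotated predictions; "curl_type" stands in for a taxonomy attribute.
data = [
    {"curl_type": "coily",    "label": "afro",     "pred": "afro"},
    {"curl_type": "coily",    "label": "twists",   "pred": "afro"},
    {"curl_type": "straight", "label": "bob",      "pred": "bob"},
    {"curl_type": "straight", "label": "ponytail", "pred": "ponytail"},
]
print(per_group_accuracy(data, "curl_type"))  # {'coily': 0.5, 'straight': 1.0}
```

A large accuracy gap between groups signals a fairness problem; with synthetic data, `balanced_sample` can be replaced by generating equal counts per attribute value at render time.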
Related papers
- Quaffure: Real-Time Quasi-Static Neural Hair Simulation [11.869362129320473]
We propose a novel neural approach to predict hair deformations that generalizes to various body poses, shapes, and hairstyles.
Our model is trained using a self-supervised loss, eliminating the need for expensive data generation and storage.
Our approach is highly suitable for real-time applications with an inference time of only a few milliseconds on consumer hardware.
arXiv Detail & Related papers (2024-12-13T11:44:56Z)
- HAAR: Text-Conditioned Generative Model of 3D Strand-based Human Hairstyles [85.12672855502517]
We present HAAR, a new strand-based generative model for 3D human hairstyles.
Based on textual inputs, HAAR produces 3D hairstyles that could be used as production-level assets in modern computer graphics engines.
arXiv Detail & Related papers (2023-12-18T19:19:32Z)
- A Local Appearance Model for Volumetric Capture of Diverse Hairstyles [15.122893482253069]
Hair plays a significant role in personal identity and appearance, making it an essential component of high-quality, photorealistic avatars.
Existing approaches either focus on modeling the facial region only or rely on personalized models, limiting their generalizability and scalability.
We present a novel method for creating high-fidelity avatars with diverse hairstyles.
arXiv Detail & Related papers (2023-12-14T06:29:59Z)
- Self-Supervised Disentanglement by Leveraging Structure in Data Augmentations [63.73044203154743]
Self-supervised representation learning often uses data augmentations to induce "style" attributes of the data.
It is difficult to deduce a priori which attributes of the data are indeed "style" and can be safely discarded.
We introduce a more principled approach that seeks to disentangle style features rather than discard them.
arXiv Detail & Related papers (2023-11-15T09:34:08Z)
- Constructing Balance from Imbalance for Long-tailed Image Recognition [50.6210415377178]
The imbalance between majority (head) classes and minority (tail) classes severely skews the data-driven deep neural networks.
Previous methods tackle data imbalance from the viewpoints of data distribution, feature space, and model design.
We propose a concise paradigm by progressively adjusting label space and dividing the head classes and tail classes.
Our proposed model also provides a feature evaluation method and paves the way for long-tailed feature learning.
arXiv Detail & Related papers (2022-08-04T10:22:24Z)
- HairFIT: Pose-Invariant Hairstyle Transfer via Flow-based Hair Alignment and Semantic-Region-Aware Inpainting [26.688276902813495]
We propose a novel framework for pose-invariant hairstyle transfer, HairFIT.
Our model consists of two stages: 1) flow-based hair alignment and 2) hair synthesis.
Our SIM estimator divides the occluded regions in the source image into different semantic regions to reflect their distinct features during the inpainting.
arXiv Detail & Related papers (2022-06-17T06:55:20Z)
- Domain Enhanced Arbitrary Image Style Transfer via Contrastive Learning [84.8813842101747]
Contrastive Arbitrary Style Transfer (CAST) is a new style representation learning and style transfer method via contrastive learning.
Our framework consists of three key components, i.e., a multi-layer style projector for style code encoding, a domain enhancement module for effective learning of style distribution, and a generative network for image style transfer.
arXiv Detail & Related papers (2022-05-19T13:11:24Z)
- Hair Color Digitization through Imaging and Deep Inverse Graphics [8.605763075773746]
We introduce a novel method for hair color digitization based on inverse graphics and deep neural networks.
Our proposed pipeline allows capturing the color appearance of a physical hair sample and renders synthetic images of hair with a similar appearance.
Our method is based on the combination of a controlled imaging device, a path-tracing rendering, and an inverse graphics model based on self-supervised machine learning.
arXiv Detail & Related papers (2022-02-08T08:57:04Z)
- LOHO: Latent Optimization of Hairstyles via Orthogonalization [20.18175263304822]
We propose an optimization-based approach using GAN inversion to infill missing hair structure details in latent space during hairstyle transfer.
Our approach decomposes hair into three attributes: perceptual structure, appearance, and style, and includes tailored losses to model each of these attributes independently.
arXiv Detail & Related papers (2021-03-05T19:00:33Z)
- Hidden Footprints: Learning Contextual Walkability from 3D Human Trails [70.01257397390361]
Current datasets only tell you where people are, not where they could be.
We first augment the set of valid, labeled walkable regions by propagating person observations between images, utilizing 3D information to create what we call hidden footprints.
We devise a training strategy designed for such sparse labels, combining a class-balanced classification loss with a contextual adversarial loss.
arXiv Detail & Related papers (2020-08-19T23:19:08Z)
- Learning Diverse Fashion Collocation by Neural Graph Filtering [78.9188246136867]
We propose a novel fashion collocation framework, Neural Graph Filtering, that models a flexible set of fashion items via a graph neural network.
By applying symmetric operations on the edge vectors, this framework allows varying numbers of inputs/outputs and is invariant to their ordering.
We evaluate the proposed approach on three popular benchmarks, the Polyvore dataset, the Polyvore-D dataset, and our reorganized Amazon Fashion dataset.
arXiv Detail & Related papers (2020-03-11T16:17:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.