Towards Racially Unbiased Skin Tone Estimation via Scene Disambiguation
- URL: http://arxiv.org/abs/2205.03962v1
- Date: Sun, 8 May 2022 22:01:30 GMT
- Title: Towards Racially Unbiased Skin Tone Estimation via Scene Disambiguation
- Authors: Haiwen Feng, Timo Bolkart, Joachim Tesch, Michael J. Black, and
Victoria Abrevaya
- Abstract summary: Virtual facial avatars will play an increasingly important role in immersive communication, games and the metaverse.
This requires accurate recovery of the appearance, represented by albedo, regardless of age, sex, or ethnicity.
- Score: 48.632358823108326
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Virtual facial avatars will play an increasingly important role in immersive
communication, games and the metaverse, and it is therefore critical that they
be inclusive. This requires accurate recovery of the appearance, represented by
albedo, regardless of age, sex, or ethnicity. While significant progress has
been made on estimating 3D facial geometry, albedo estimation has received less
attention. The task is fundamentally ambiguous because the observed color is a
function of albedo and lighting, both of which are unknown. We find that
current methods are biased towards light skin tones due to (1) strongly biased
priors that prefer lighter pigmentation and (2) algorithmic solutions that
disregard the light/albedo ambiguity. To address this, we propose a new
evaluation dataset (FAIR) and an algorithm (TRUST) to improve albedo estimation
and, hence, fairness. Specifically, we create the first facial albedo
evaluation benchmark where subjects are balanced in terms of skin color, and
measure accuracy using the Individual Typology Angle (ITA) metric. We then
address the light/albedo ambiguity by building on a key observation: the image
of the full scene -- as opposed to a cropped image of the face -- contains
important information about lighting that can be used for disambiguation. TRUST
regresses facial albedo by conditioning both on the face region and a global
illumination signal obtained from the scene image. Our experimental results
show significant improvement compared to state-of-the-art methods on albedo
estimation, both in terms of accuracy and fairness. The evaluation benchmark
and code will be made available for research purposes at
https://trust.is.tue.mpg.de.
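For reference, the Individual Typology Angle (ITA) used for evaluation above is a standard skin-tone descriptor computed in CIELAB space as ITA = arctan((L* - 50) / b*) * 180 / pi. The sketch below shows one plausible way to compute it from an sRGB skin patch; the conversion constants are the usual sRGB/D65 ones, but the plain averaging over pixels (with no skin mask) is an assumption of this illustration, not a detail taken from the paper.

```python
import numpy as np

def srgb_to_lab(rgb):
    """Convert an sRGB image (floats in [0, 1], shape HxWx3) to CIELAB (D65)."""
    rgb = np.asarray(rgb, dtype=np.float64)
    # Undo the sRGB gamma to get linear RGB.
    linear = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    # Linear RGB -> XYZ (sRGB primaries, D65 white point).
    m = np.array([[0.4124564, 0.3575761, 0.1804375],
                  [0.2126729, 0.7151522, 0.0721750],
                  [0.0193339, 0.1191920, 0.9503041]])
    xyz = linear @ m.T
    # Normalize by the D65 reference white and apply the CIELAB nonlinearity.
    xyz /= np.array([0.95047, 1.0, 1.08883])
    eps = (6.0 / 29.0) ** 3
    f = np.where(xyz > eps, np.cbrt(xyz), xyz / (3 * (6.0 / 29.0) ** 2) + 4.0 / 29.0)
    L = 116.0 * f[..., 1] - 16.0
    a = 500.0 * (f[..., 0] - f[..., 1])
    b = 200.0 * (f[..., 1] - f[..., 2])
    return np.stack([L, a, b], axis=-1)

def individual_typology_angle(skin_rgb):
    """ITA in degrees of the mean skin color of an sRGB patch."""
    lab = srgb_to_lab(skin_rgb).reshape(-1, 3)
    L_mean, b_mean = lab[:, 0].mean(), lab[:, 2].mean()
    # arctan2 matches arctan((L* - 50) / b*) for the positive b* typical of skin,
    # and stays well defined if b* happens to be zero.
    return np.degrees(np.arctan2(L_mean - 50.0, b_mean))
```

Larger ITA values correspond to lighter skin tones, so reporting estimation error as a function of ITA (as the FAIR benchmark does) makes accuracy differences across skin-tone groups directly visible.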
Related papers
- High-Fidelity Facial Albedo Estimation via Texture Quantization [59.100759403614695]
We present HiFiAlbedo, which recovers the albedo map directly from a single image without the need for captured albedo data.
Our method exhibits excellent generalizability and is capable of achieving high-fidelity results for in-the-wild facial albedo recovery.
arXiv Detail & Related papers (2024-06-19T01:53:30Z) - Improving Fairness using Vision-Language Driven Image Augmentation [60.428157003498995]
Fairness is crucial when training a deep-learning discriminative model, especially in the facial domain.
Models tend to correlate specific characteristics (such as age and skin color) with unrelated attributes (downstream tasks).
This paper proposes a method to mitigate these correlations to improve fairness.
arXiv Detail & Related papers (2023-11-02T19:51:10Z) - Are Face Detection Models Biased? [69.68854430664399]
We investigate possible bias in the domain of face detection through facial region localization.
Most existing face detection datasets lack suitable annotation for such analysis.
We observe a high disparity in detection accuracy across gender and skin tone, as well as an interplay of confounding factors beyond demography.
arXiv Detail & Related papers (2022-11-07T14:27:55Z) - Face Recognition Accuracy Across Demographics: Shining a Light Into the
Problem [8.02620277513497]
We explore varying face recognition accuracy across demographic groups as a phenomenon partly caused by differences in face illumination.
We show that impostor image pairs with both faces underexposed, or both overexposed, have an increased false match rate (FMR).
We propose a brightness information metric to measure variation in brightness in the face and show that too-low or too-high brightness reduces the information available in the face region.
arXiv Detail & Related papers (2022-06-04T02:36:35Z) - Meta Balanced Network for Fair Face Recognition [51.813457201437195]
We systematically study bias from both the data and the algorithm perspectives.
We propose a novel meta-learning algorithm, called Meta Balanced Network (MBN), which learns adaptive margins in large margin loss.
Extensive experiments show that MBN successfully mitigates bias and learns more balanced performance for people with different skin tones in face recognition.
arXiv Detail & Related papers (2022-05-13T10:25:44Z) - SIRfyN: Single Image Relighting from your Neighbors [14.601975066158394]
We show how to relight a scene depicted in a single image, such that (a) the overall shading has changed and (b) the resulting image looks like a natural image of that scene.
arXiv Detail & Related papers (2021-12-08T17:05:57Z) - It's Written All Over Your Face: Full-Face Appearance-Based Gaze
Estimation [82.16380486281108]
We propose an appearance-based method that only takes the full face image as input.
Our method encodes the face image using a convolutional neural network with spatial weights applied on the feature maps.
We show that our full-face method significantly outperforms the state of the art for both 2D and 3D gaze estimation.
arXiv Detail & Related papers (2016-11-27T15:00:10Z)
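The spatial-weights mechanism mentioned in the full-face gaze estimation entry above can be illustrated with a small module that predicts a single-channel weight map and multiplies it into every feature channel. The 1x1 convolutions, hidden width, and activation choices below are assumptions made for illustration, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

class SpatialWeights(nn.Module):
    """Predict a per-location weight map and apply it to all feature channels."""

    def __init__(self, in_channels: int, hidden: int = 64):
        super().__init__()
        # A small stack of 1x1 convolutions that outputs a single-channel weight map.
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, hidden, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, hidden, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, 1, kernel_size=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        # features: (B, C, H, W); weights: (B, 1, H, W), broadcast over channels.
        weights = self.net(features)
        return features * weights

# Example: reweight a 256-channel feature map from a face encoder.
weighted = SpatialWeights(256)(torch.randn(2, 256, 14, 14))
```

The effect is that informative facial regions can be emphasized and uninformative ones suppressed before the weighted features are passed to the downstream regression head.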