Fairness Indicators for Systematic Assessments of Visual Feature
Extractors
- URL: http://arxiv.org/abs/2202.07603v1
- Date: Tue, 15 Feb 2022 17:45:33 GMT
- Title: Fairness Indicators for Systematic Assessments of Visual Feature
Extractors
- Authors: Priya Goyal, Adriana Romero Soriano, Caner Hazirbas, Levent Sagun,
Nicolas Usunier
- Abstract summary: We propose three fairness indicators, which aim at quantifying harms and biases of visual systems.
Our indicators use existing publicly available datasets collected for fairness evaluations.
These indicators are not intended to be a substitute for a thorough analysis of the broader impact of the new computer vision technologies.
- Score: 21.141633753573764
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Does everyone equally benefit from computer vision systems? Answers to this
question become more and more important as computer vision systems are deployed
at large scale, and can spark major concerns when they exhibit vast performance
discrepancies between people from various demographic and social backgrounds.
Systematic diagnosis of fairness, harms, and biases of computer vision systems
is an important step towards building socially responsible systems. To initiate
an effort towards standardized fairness audits, we propose three fairness
indicators, which aim at quantifying harms and biases of visual systems. Our
indicators use existing publicly available datasets collected for fairness
evaluations, and focus on three main types of harms and bias identified in the
literature, namely harmful label associations, disparity in learned
representations of social and demographic traits, and biased performance on
geographically diverse images from across the world. We define precise
experimental protocols applicable to a wide range of computer vision models.
These indicators are part of an ever-evolving suite of fairness probes and are
not intended to be a substitute for a thorough analysis of the broader impact
of new computer vision technologies. Yet, we believe they are a necessary
first step towards (1) facilitating the widespread adoption and mandating of
fairness assessments in computer vision research, and (2) tracking progress
towards building socially responsible models. To study the practical
effectiveness and broad applicability of our proposed indicators to any visual
system, we apply them to off-the-shelf models built with widely adopted
training paradigms, which vary in whether they can predict labels for a given
image or only produce embeddings. We also systematically
study the effect of data domain and model size.
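To make the shape of such an indicator concrete, below is a minimal sketch of one plausible protocol: measuring the accuracy gap across demographic or geographic groups. The function name and the max-minus-min gap definition are illustrative assumptions, not the paper's exact formulation.

```python
# Hypothetical sketch of a disparity-style indicator: per-group accuracy
# and the largest gap between groups. An illustration only, not the
# paper's exact protocol.
import numpy as np

def group_accuracy_gap(y_true, y_pred, groups):
    """Return per-group accuracy and the largest pairwise accuracy gap."""
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    per_group = {
        str(g): float((y_pred[groups == g] == y_true[groups == g]).mean())
        for g in np.unique(groups)
    }
    accs = per_group.values()
    return per_group, max(accs) - min(accs)

# Toy example: perfect on group "A", always wrong on group "B".
per_group, gap = group_accuracy_gap(
    y_true=[1, 1, 0, 0, 1, 1, 0, 0, 1, 0],
    y_pred=[1, 1, 0, 0, 1, 0, 1, 1, 0, 1],
    groups=["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"],
)
print(per_group, gap)  # {'A': 1.0, 'B': 0.0} 1.0
```

The same gap computation applies whether predictions come from a label-predicting model or from a probe trained on top of embeddings, which is what makes indicators of this shape applicable to both kinds of systems.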
Related papers
- When Does Perceptual Alignment Benefit Vision Representations? [76.32336818860965]
We investigate how aligning vision model representations to human perceptual judgments impacts their usability.
We find that aligning models to perceptual judgments yields representations that improve upon the original backbones across many downstream tasks.
Our results suggest that injecting an inductive bias about human perceptual knowledge into vision models can contribute to better representations.
arXiv Detail & Related papers (2024-10-14T17:59:58Z)
- Fairness and Bias Mitigation in Computer Vision: A Survey [61.01658257223365]
Computer vision systems are increasingly being deployed in high-stakes real-world applications.
There is a dire need to ensure that they do not propagate or amplify any discriminatory tendencies in historical or human-curated data.
This paper presents a comprehensive survey on fairness that summarizes and sheds light on ongoing trends and successes in the context of computer vision.
arXiv Detail & Related papers (2024-08-05T13:44:22Z)
- Synthetic Counterfactual Faces [1.3062016289815055]
We build a generative AI framework to construct targeted, counterfactual, high-quality synthetic face data.
Our pipeline has many use cases, including sensitivity evaluations of face recognition systems and probes of image understanding systems.
We showcase the efficacy of our face generation pipeline on a leading commercial vision model.
arXiv Detail & Related papers (2024-07-18T22:22:49Z)
- Fairness meets Cross-Domain Learning: a new perspective on Models and Metrics [80.07271410743806]
We study the relationship between cross-domain learning (CD) and model fairness.
We introduce a benchmark on face and medical images spanning several demographic groups as well as classification and localization tasks.
Our study covers 14 CD approaches alongside three state-of-the-art fairness algorithms and shows how the former can outperform the latter.
arXiv Detail & Related papers (2023-03-25T09:34:05Z)
- Stable Bias: Analyzing Societal Representations in Diffusion Models [72.27121528451528]
We propose a new method for exploring the social biases in Text-to-Image (TTI) systems.
Our approach relies on characterizing the variation in generated images triggered by enumerating gender and ethnicity markers in the prompts.
We leverage this method to analyze images generated by 3 popular TTI systems and find that while all of their outputs show correlations with US labor demographics, they also consistently under-represent marginalized identities to different extents.
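A minimal sketch of the prompt-enumeration idea described above: build a grid of prompts that differ only in identity markers, then compare the image sets a TTI system returns per cell. The marker lists and the generate_image() call are placeholders, not the paper's exact setup.

```python
# Enumerate identity markers in otherwise-identical prompts, so that any
# systematic variation in the generated images can be attributed to the
# markers themselves. Lists below are illustrative assumptions.
from itertools import product

ETHNICITY_MARKERS = ["", "Black", "East Asian", "Hispanic", "White"]
GENDER_MARKERS = ["person", "man", "woman"]
PROFESSIONS = ["doctor", "janitor", "CEO"]  # occupations to probe

def build_prompts():
    """Yield one prompt per (ethnicity, gender, profession) cell."""
    for eth, gender, job in product(ETHNICITY_MARKERS, GENDER_MARKERS, PROFESSIONS):
        marker = f"{eth} {gender}".strip()
        yield f"a photo of a {marker} working as a {job}"

for prompt in build_prompts():
    print(prompt)
    # images = generate_image(prompt)  # hypothetical TTI system call
    # ...then compare image features across prompt cells...
```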
arXiv Detail & Related papers (2023-03-20T19:32:49Z)
- Towards Reliable Assessments of Demographic Disparities in Multi-Label Image Classifiers [11.973749734226852]
We consider multi-label image classification and, specifically, object categorization tasks.
Design choices and trade-offs for measurement involve more nuance than discussed in prior computer vision literature.
We identify several design choices that look merely like implementation details but significantly impact the conclusions of assessments.
arXiv Detail & Related papers (2023-02-16T20:34:54Z)
- ColorSense: A Study on Color Vision in Machine Visual Recognition [57.916512479603064]
We collect 110,000 non-trivial human annotations of foreground and background color labels from visual recognition benchmarks.
We validate the use of our datasets by demonstrating that the level of color discrimination has a dominating effect on the performance of machine perception models.
Our findings suggest that object recognition tasks such as classification and localization are susceptible to color vision bias.
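One way to read the color-discrimination finding is as an ablation probe. A small sketch, assuming a PyTorch classifier and a labeled data loader (both placeholders), compares accuracy on original versus grayscale inputs; this is an illustration of the idea, not the paper's method.

```python
# Hypothetical color-sensitivity probe: compare a classifier's accuracy
# on original vs. grayscale inputs. Model and loader are assumed
# placeholders; only the probe logic is shown.
import torch
import torchvision.transforms.functional as TF

@torch.no_grad()
def color_sensitivity(model, loader, device="cpu"):
    """Return (accuracy_on_color, accuracy_on_grayscale) over a labeled loader."""
    model.eval().to(device)
    correct_color = correct_gray = total = 0
    for images, labels in loader:  # images: (B, 3, H, W) float tensors
        images, labels = images.to(device), labels.to(device)
        gray = TF.rgb_to_grayscale(images, num_output_channels=3)
        correct_color += (model(images).argmax(1) == labels).sum().item()
        correct_gray += (model(gray).argmax(1) == labels).sum().item()
        total += labels.numel()
    return correct_color / total, correct_gray / total

# A large color-vs-grayscale gap suggests the model leans on color cues,
# consistent with the color vision bias reported in the summary above.
```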
arXiv Detail & Related papers (2022-12-16T18:51:41Z)
- Towards a Fairness-Aware Scoring System for Algorithmic Decision-Making [35.21763166288736]
We propose a general framework to create data-driven fairness-aware scoring systems.
We show that the proposed framework provides practitioners or policymakers great flexibility to select their desired fairness requirements.
arXiv Detail & Related papers (2021-09-21T09:46:35Z)
- Through the Data Management Lens: Experimental Analysis and Evaluation of Fair Classification [75.49600684537117]
Data management research is showing an increasing presence and interest in topics related to data and algorithmic fairness.
We contribute a broad analysis of 13 fair classification approaches and additional variants, over their correctness, fairness, efficiency, scalability, and stability.
Our analysis highlights novel insights on the impact of different metrics and high-level approach characteristics on different aspects of performance.
arXiv Detail & Related papers (2021-01-18T22:55:40Z)
This list is automatically generated from the titles and abstracts of the papers on this site.