Taxonomizing and Measuring Representational Harms: A Look at Image
Tagging
- URL: http://arxiv.org/abs/2305.01776v1
- Date: Tue, 2 May 2023 20:36:30 GMT
- Title: Taxonomizing and Measuring Representational Harms: A Look at Image
Tagging
- Authors: Jared Katzman and Angelina Wang and Morgan Scheuerman and Su Lin
Blodgett and Kristen Laird and Hanna Wallach and Solon Barocas
- Abstract summary: We identify four types of representational harms that can be caused by image tagging systems.
We show that attempts to mitigate some of these types of harms may be in tension with one another.
- Score: 12.576454410948292
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we examine computational approaches for measuring the
"fairness" of image tagging systems, finding that they cluster into five
distinct categories, each with its own analytic foundation. We also identify a
range of normative concerns that are often collapsed under the terms
"unfairness," "bias," or even "discrimination" when discussing problematic
cases of image tagging. Specifically, we identify four types of
representational harms that can be caused by image tagging systems, providing
concrete examples of each. We then consider how different computational
measurement approaches map to each of these types, demonstrating that there is
not a one-to-one mapping. Our findings emphasize that no single measurement
approach will be definitive and that it is not possible to infer from the use
of a particular measurement approach which type of harm was intended to be
measured. Lastly, equipped with this more granular understanding of the types
of representational harms that can be caused by image tagging systems, we show
that attempts to mitigate some of these types of harms may be in tension with
one another.
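To make the abstract's no-one-to-one-mapping point concrete, here is a toy sketch. It is not taken from the paper; the tags, group names, flagged-tag list, and both metrics are hypothetical stand-ins for two common measurement families (outcome-rate disparity vs. presence of flagged tags):

```python
# Toy illustration (not from the paper): two measurement approaches applied
# to the same hypothetical tagging output can surface different harms.

# Hypothetical predicted tags, grouped by a (hypothetical) demographic group.
tags_by_group = {
    "group_a": [["person", "smiling"], ["person"], ["person", "doctor"]],
    "group_b": [["person"], ["animal"], ["person", "nurse"]],
}

FLAGGED = {"animal"}  # hypothetical list of denigrating tags for people images

def tag_rate(images, tag="person"):
    # Outcome-disparity style measure: how often does each group get `tag`?
    return sum(tag in tags for tags in images) / len(images)

def flagged_rate(images):
    # Denigration-style measure: how often does a flagged tag appear?
    return sum(bool(FLAGGED & set(tags)) for tags in images) / len(images)

for group, images in tags_by_group.items():
    print(group, f"person-rate={tag_rate(images):.2f}",
          f"flagged-rate={flagged_rate(images):.2f}")
# group_a person-rate=1.00 flagged-rate=0.00
# group_b person-rate=0.67 flagged-rate=0.33
```

In this toy output, reducing the rate disparity would not by itself remove the flagged tag, and vice versa, which is one way mitigations aimed at different harm types can pull in different directions.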
Related papers
- Interpretable Measures of Conceptual Similarity by
Complexity-Constrained Descriptive Auto-Encoding [112.0878081944858]
Quantifying the degree of similarity between images is a key copyright issue for image-based machine learning.
We seek to define and compute a notion of "conceptual similarity" among images that captures high-level relations.
Two highly dissimilar images can be discriminated early in their description, whereas conceptually similar ones will need more detail to be distinguished.
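As a rough classical analogue of measuring similarity through description complexity (this is not the paper's complexity-constrained auto-encoding method), the normalized compression distance uses an off-the-shelf compressor as a proxy for description length:

```python
# Not the paper's method: a rough classical analogue. Normalized compression
# distance (NCD) uses an off-the-shelf compressor as a proxy for description
# length; related inputs compress better together than unrelated ones.
import zlib

def clen(data: bytes) -> int:
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    cx, cy, cxy = clen(x), clen(y), clen(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

desc_a = b"a red ball on green grass " * 20
desc_b = b"a crimson ball on a lawn " * 20
desc_c = b"stock market opening bell " * 20
print(ncd(desc_a, desc_b))  # typically smaller: descriptions share structure
print(ncd(desc_a, desc_c))  # typically larger: little shared structure
```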
arXiv Detail & Related papers (2024-02-14T03:31:17Z)
- Introspective Deep Metric Learning [91.47907685364036]
We propose an introspective deep metric learning framework for uncertainty-aware comparisons of images.
The proposed IDML framework improves the performance of deep metric learning through uncertainty modeling.
arXiv Detail & Related papers (2023-09-11T16:21:13Z)
- Uncertainty-based Detection of Adversarial Attacks in Semantic Segmentation [16.109860499330562]
We introduce an uncertainty-based approach for the detection of adversarial attacks in semantic segmentation.
We demonstrate the ability of our approach to detect perturbed images across multiple types of adversarial attacks.
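A minimal sketch of one common uncertainty heuristic for this setting, assuming per-image logits of shape (H, W, C) and a threshold calibrated on clean validation images; the paper's exact detector may differ:

```python
# A minimal sketch (not the paper's exact detector): flag an input as
# possibly adversarial when its mean per-pixel predictive entropy exceeds
# a threshold calibrated on clean validation images.
import numpy as np

def softmax(logits, axis=-1):
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def mean_pixel_entropy(logits):
    """logits: (H, W, C) raw segmentation outputs for one image."""
    p = softmax(logits)
    per_pixel = -(p * np.log(p + 1e-12)).sum(axis=-1)   # (H, W)
    return float(per_pixel.mean())

def is_suspicious(logits, threshold):
    return mean_pixel_entropy(logits) > threshold

# Dummy usage: sharp (scaled) logits stand in for clean images; flatter
# logits tend to have higher entropy and get flagged.
rng = np.random.default_rng(0)
clean = [mean_pixel_entropy(5 * rng.normal(size=(64, 64, 19))) for _ in range(32)]
threshold = float(np.percentile(clean, 95))
print(is_suspicious(rng.normal(size=(64, 64, 19)), threshold))
```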
arXiv Detail & Related papers (2023-05-22T08:36:35Z)
- Self-similarity Driven Scale-invariant Learning for Weakly Supervised Person Search [66.95134080902717]
We propose a novel one-step framework, named Self-similarity driven Scale-invariant Learning (SSL).
We introduce a Multi-scale Exemplar Branch to guide the network in concentrating on the foreground and learning scale-invariant features.
Experiments on PRW and CUHK-SYSU databases demonstrate the effectiveness of our method.
arXiv Detail & Related papers (2023-02-25T04:48:11Z)
- What is Flagged in Uncertainty Quantification? Latent Density Models for Uncertainty Categorization [68.15353480798244]
Uncertainty Quantification (UQ) is essential for creating trustworthy machine learning models.
Recent years have seen a steep rise in UQ methods that can flag suspicious examples.
We propose a framework for categorizing uncertain examples flagged by UQ methods in classification tasks.
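A sketch of the general recipe under simplifying assumptions (2-D features and a Gaussian KDE standing in for the latent density model; the paper's actual models may differ): flagged examples falling in low-density regions of the training distribution are categorized as out-of-distribution, the rest as boundary cases:

```python
# Sketch of the general recipe under simplifying assumptions: 2-D features
# and a Gaussian KDE stand in for the latent density model (the paper's
# models may differ). Flagged examples in low-density regions of the
# training feature distribution are categorized as out-of-distribution;
# the rest are treated as boundary/ambiguous cases.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
train_feats = rng.normal(0.0, 1.0, size=(500, 2))         # in-distribution
flagged = np.vstack([rng.normal(0.0, 1.0, size=(5, 2)),   # ambiguous, in-dist.
                     rng.normal(8.0, 1.0, size=(5, 2))])  # far from training

kde = gaussian_kde(train_feats.T)                  # density on training features
cutoff = np.log(kde(train_feats.T)).min()          # crude in-distribution floor
for log_dens in np.log(kde(flagged.T) + 1e-300):
    print("OOD" if log_dens < cutoff else "boundary", round(float(log_dens), 1))
```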
arXiv Detail & Related papers (2022-07-11T19:47:00Z)
- Measuring Representational Harms in Image Captioning [5.543867614999908]
We present a set of techniques for measuring five types of representational harms, as well as the resulting measurements.
Our goal was not to audit this image captioning system, but rather to develop normatively grounded measurement techniques.
We discuss the assumptions underlying our measurement approach and point out when they do not hold.
arXiv Detail & Related papers (2022-06-14T21:08:01Z)
- Resolving label uncertainty with implicit posterior models [71.62113762278963]
We propose a method for jointly inferring labels across a collection of data samples.
By implicitly assuming the existence of a generative model for which a differentiable predictor is the posterior, we derive a training objective that allows learning under weak beliefs.
arXiv Detail & Related papers (2022-02-28T18:09:44Z)
- On the Choice of Fairness: Finding Representative Fairness Metrics for a Given Context [5.667221573173013]
Various notions of fairness have been defined, though choosing an appropriate metric is cumbersome.
Trade-offs and impossibility theorems make such selection even more complicated and controversial.
We propose a framework that automatically discovers the correlations and trade-offs between different pairs of measures for a given context.
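A minimal sketch of the framework's core step, using standard group-fairness metric definitions but entirely hypothetical models and data: score many candidate models with several metrics, then examine how strongly the metrics correlate:

```python
# A minimal sketch (standard metric definitions, hypothetical models and
# data): score many candidate models with two fairness metrics, then
# examine how strongly the metrics correlate.
import numpy as np

def demographic_parity_diff(y_pred, group):
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equal_opportunity_diff(y_pred, y_true, group):
    tpr = lambda g: y_pred[(group == g) & (y_true == 1)].mean()
    return abs(tpr(0) - tpr(1))

rng = np.random.default_rng(7)
n = 2000
y_true = rng.integers(0, 2, n)
group = rng.integers(0, 2, n)

dp, eo = [], []
for _ in range(20):  # 20 hypothetical candidate models
    y_pred = (rng.random(n) < 0.4 + 0.2 * y_true + 0.05 * group).astype(int)
    dp.append(demographic_parity_diff(y_pred, group))
    eo.append(equal_opportunity_diff(y_pred, y_true, group))

# Highly correlated metrics are redundant in this context; one can stand
# in for the other, in the spirit of selecting representative metrics.
print("Pearson r:", np.corrcoef(dp, eo)[0, 1])
```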
arXiv Detail & Related papers (2021-09-13T04:17:38Z)
- Contrastive Counterfactual Visual Explanations With Overdetermination [7.8752926274677435]
CLEAR Image is based on the view that a satisfactory explanation should be contrastive, counterfactual and measurable.
CLEAR Image was successfully applied to a medical imaging case study where it outperformed methods such as Grad-CAM and LIME by an average of 27%.
arXiv Detail & Related papers (2021-06-28T10:24:17Z)
- Few-Shot Learning with Part Discovery and Augmentation from Unlabeled Images [79.34600869202373]
We show that inductive bias can be learned from a flat collection of unlabeled images, and instantiated as transferable representations among seen and unseen classes.
Specifically, we propose a novel part-based self-supervised representation learning scheme to learn transferable representations.
Our method yields impressive results, outperforming the previous best unsupervised methods by 7.74% and 9.24%.
arXiv Detail & Related papers (2021-05-25T12:22:11Z)
- Measuring Model Biases in the Absence of Ground Truth [2.802021236064919]
We introduce a new framing to the measurement of fairness and bias that does not rely on ground truth labels.
Instead, we treat the model predictions for a given image as a set of labels, analogous to a 'bag of words' approach used in Natural Language Processing (NLP).
We demonstrate how the statistical properties (especially normalization) of different association metrics can lead to different sets of labels being detected as having "gender bias."
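A sketch of the bag-of-labels idea with two common association metrics, PMI and normalized PMI, on hypothetical co-occurrence counts; it shows how normalization alone can change which label looks most gender-associated:

```python
# Sketch of the bag-of-labels idea (hypothetical counts): PMI vs. normalized
# PMI over label/gender co-occurrence can disagree about which label is
# most gender-associated, purely because of normalization.
import math

# counts[label][g]: number of images with perceived gender g carrying `label`.
counts = {"flower": {"f": 80, "m": 40},
          "tie":    {"f": 10, "m": 60},
          "person": {"f": 850, "m": 650}}
n_f = n_m = 1000                 # hypothetical images per gender group
total = n_f + n_m

def pmi(label, g, n_g):
    c = counts[label]
    p_joint = c[g] / total                       # p(label, g)
    p_label = (c["f"] + c["m"]) / total          # p(label)
    return math.log2(p_joint / (p_label * (n_g / total)))

def npmi(label, g, n_g):
    p_joint = counts[label][g] / total
    return pmi(label, g, n_g) / -math.log2(p_joint)

for label in counts:
    print(f"{label:7s} PMI(f)={pmi(label, 'f', n_f):+.3f} "
          f"nPMI(f)={npmi(label, 'f', n_f):+.3f}")
# PMI ranks "flower" as most female-associated; nPMI rescales by joint
# frequency and ranks "person" above it, changing the detected label set.
```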
arXiv Detail & Related papers (2021-03-05T01:23:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the generated content (including all information) and is not responsible for any consequences of its use.