What Values Do ImageNet-trained Classifiers Enact?
- URL: http://arxiv.org/abs/2402.04911v1
- Date: Wed, 7 Feb 2024 14:39:09 GMT
- Title: What Values Do ImageNet-trained Classifiers Enact?
- Authors: Will Penman, Joshua Babu, Abhinaya Raghunathan
- Abstract summary: We identify "values" as actions that classifiers take that speak to open questions of significant social concern.
Unlike AI social bias, however, a classifier's values are not necessarily morally loathsome.
Our findings bring a rich sense of the social world to ML researchers that can be applied to other domains beyond computer vision.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We identify "values" as actions that classifiers take that speak to open
questions of significant social concern. Investigating a classifier's values
builds on studies of social bias that uncover how classifiers participate in
social processes beyond their creators' forethought. In our case, this
participation involves what counts as nutritious, what it means to be modest,
and more. Unlike AI social bias, however, a classifier's values are not
necessarily morally loathsome. Attending to image classifiers' values can
facilitate public debate and introspection about the future of society. To
substantiate these claims, we report on an extensive examination of both
ImageNet training/validation data and ImageNet-trained classifiers with custom
testing data. We identify perceptual decision boundaries in 118 categories that
address open questions in society, and through quantitative testing of rival
datasets we find that ImageNet-trained classifiers enact at least 7 values
through their perceptual decisions. To contextualize these results, we develop
a conceptual framework that integrates values, social bias, and accuracy, and
we describe a rhetorical method for identifying how context affects the values
that a classifier enacts. We also discover that classifier performance does not
straightforwardly reflect the proportions of subgroups in a training set. Our
findings bring a rich sense of the social world to ML researchers that can be
applied to other domains beyond computer vision.
Related papers
- Classes Are Not Equal: An Empirical Study on Image Recognition Fairness [100.36114135663836]
We experimentally demonstrate that classes are not equal and the fairness issue is prevalent for image classification models across various datasets.
Our findings reveal that models tend to exhibit greater prediction biases for classes that are more challenging to recognize.
Data augmentation and representation learning algorithms improve overall performance by promoting fairness to some degree in image classification.
arXiv Detail & Related papers (2024-02-28T07:54:50Z)
- State-of-the-Art in Nudity Classification: A Comparative Analysis [5.76536165136814]
This paper presents a comparative analysis of existing nudity classification techniques for classifying images based on the presence of nudity.
The study identifies the limitations of current evaluation datasets and highlights the need for more diverse and challenging datasets.
Overall, the study emphasizes the importance of continually improving image classification models to ensure the safety and well-being of platform users.
arXiv Detail & Related papers (2023-12-26T21:24:55Z)
- Gender Biases in Automatic Evaluation Metrics for Image Captioning [87.15170977240643]
We conduct a systematic study of gender biases in model-based evaluation metrics for image captioning tasks.
We demonstrate the negative consequences of using these biased metrics, including the inability to differentiate between biased and unbiased generations.
We present a simple and effective way to mitigate the metric bias without hurting the correlations with human judgments.
arXiv Detail & Related papers (2023-05-24T04:27:40Z)
- Fairness meets Cross-Domain Learning: a new perspective on Models and Metrics [80.07271410743806]
We study the relationship between cross-domain learning (CD) and model fairness.
We introduce a benchmark on face and medical images spanning several demographic groups as well as classification and localization tasks.
Our study covers 14 CD approaches alongside three state-of-the-art fairness algorithms and shows how the former can outperform the latter.
arXiv Detail & Related papers (2023-03-25T09:34:05Z)
- Towards Reliable Assessments of Demographic Disparities in Multi-Label Image Classifiers [11.973749734226852]
We consider multi-label image classification and, specifically, object categorization tasks.
Design choices and trade-offs for measurement involve more nuance than discussed in prior computer vision literature.
We identify several design choices that look merely like implementation details but significantly impact the conclusions of assessments.
arXiv Detail & Related papers (2023-02-16T20:34:54Z)
- Improving Fairness in Large-Scale Object Recognition by CrowdSourced Demographic Information [7.968124582214686]
Representing objects fairly in machine learning datasets will lead to models that are less biased towards a particular culture.
We propose a simple and general approach, based on crowdsourcing the demographic composition of the contributors.
We present analysis which leads to a much fairer coverage of the world compared to existing datasets.
arXiv Detail & Related papers (2022-06-02T22:55:10Z)
- Learning to Adapt Domain Shifts of Moral Values via Instance Weighting [74.94940334628632]
Classifying moral values in user-generated text from social media is critical to understanding community cultures.
Moral values and language usage can change across the social movements.
We propose a neural adaptation framework via instance weighting to improve cross-domain classification tasks.
arXiv Detail & Related papers (2022-04-15T18:15:41Z)
- Evaluating Adversarial Attacks on ImageNet: A Reality Check on Misclassification Classes [3.0128052969792605]
We investigate the nature of the classes into which adversarial examples are misclassified in ImageNet.
We find that 71% of the adversarial examples that achieve model-to-model adversarial transferability are misclassified into one of the top-5 classes.
We also find that a large subset of untargeted misclassifications are, in fact, misclassifications into semantically similar classes.
arXiv Detail & Related papers (2021-11-22T08:54:34Z)
- Enriching ImageNet with Human Similarity Judgments and Psychological Embeddings [7.6146285961466]
We introduce a dataset that embodies the task-general capabilities of human perception and reasoning.
The Human Similarity Judgments extension to ImageNet (ImageNet-HSJ) is composed of human similarity judgments.
The new dataset supports a range of task and performance metrics, including the evaluation of unsupervised learning algorithms.
arXiv Detail & Related papers (2020-11-22T13:41:54Z)
- Quantifying Learnability and Describability of Visual Concepts Emerging in Representation Learning [91.58529629419135]
We consider how to characterise visual groupings discovered automatically by deep neural networks.
We introduce two concepts, visual learnability and describability, that can be used to quantify the interpretability of arbitrary image groupings.
arXiv Detail & Related papers (2020-10-27T18:41:49Z)
- I Am Going MAD: Maximum Discrepancy Competition for Comparing Classifiers Adaptively [135.7695909882746]
We introduce the MAximum Discrepancy (MAD) competition.
We adaptively sample a small test set from an arbitrarily large corpus of unlabeled images.
Human labeling on the resulting model-dependent image sets reveals the relative performance of the competing classifiers.
arXiv Detail & Related papers (2020-02-25T03:32:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.