Digital Divides in Scene Recognition: Uncovering Socioeconomic Biases in
Deep Learning Systems
- URL: http://arxiv.org/abs/2401.13097v1
- Date: Tue, 23 Jan 2024 21:22:06 GMT
- Title: Digital Divides in Scene Recognition: Uncovering Socioeconomic Biases in
Deep Learning Systems
- Authors: Michelle R. Greene, Mariam Josyula, Wentao Si and Jennifer A. Hart
- Abstract summary: We investigate the biases of deep convolutional neural networks (dCNNs) in scene classification.
We use nearly one million images from global and US sources, including user-submitted home photographs and Airbnb listings.
Our analyses revealed significant socioeconomic bias, where pretrained dCNNs demonstrated lower classification accuracy, lower classification confidence, and a higher tendency to assign labels that could be offensive.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Computer-based scene understanding has influenced fields ranging from urban
planning to autonomous vehicle performance, yet little is known about how well
these technologies work across social differences. We investigate the biases of
deep convolutional neural networks (dCNNs) in scene classification, using
nearly one million images from global and US sources, including user-submitted
home photographs and Airbnb listings. We applied statistical models to quantify
the impact of socioeconomic indicators such as family income, Human Development
Index (HDI), and demographic factors from public data sources (CIA and US
Census) on dCNN performance. Our analyses revealed significant socioeconomic
bias, where pretrained dCNNs demonstrated lower classification accuracy, lower
classification confidence, and a higher tendency to assign labels that could be
offensive when applied to homes (e.g., "ruin", "slum"), especially in images
from homes with lower socioeconomic status (SES). This trend is consistent
across two datasets of international images and within the diverse economic and
racial landscapes of the United States. This research contributes to
understanding biases in computer vision, emphasizing the need for more
inclusive and representative training datasets. By mitigating bias in
computer vision pipelines, we can ensure fairer and more equitable outcomes
for applied computer vision, including home valuation and smart home security
systems. There is urgency in addressing these biases, which can significantly
impact critical decisions in urban development and resource allocation. Our
findings also motivate the development of AI systems that better understand and
serve diverse communities, moving towards technology that equitably benefits
all sectors of society.
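As a concrete illustration of the measurement pipeline the abstract describes, the sketch below scores home photographs with a pretrained classifier, records the top-1 softmax confidence, and regresses confidence on a socioeconomic indicator. It is a minimal sketch under stated assumptions, not the authors' code: the model (a torchvision ResNet-18 with ImageNet weights, standing in for a Places365-style scene network), the image paths, and the income values are all illustrative.

```python
# Illustrative sketch only: the paper's exact networks, data, and
# statistical models are not reproduced here. We score each home photo
# with a pretrained classifier, record top-1 softmax confidence, then
# regress confidence on a per-image SES indicator (log income, made up).
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image
from scipy.stats import linregress

# Stand-in model: torchvision ResNet-18 with ImageNet weights. A scene
# study would more plausibly use a Places365-trained network.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def top1_confidence(image_path: str) -> float:
    """Return the model's softmax confidence in its top-1 label."""
    img = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probs = torch.softmax(model(img), dim=1)
    return probs.max().item()

# Hypothetical (path, log household income) pairs.
records = [("home_001.jpg", 9.7), ("home_002.jpg", 10.5),
           ("home_003.jpg", 11.2)]
confidences = [top1_confidence(path) for path, _ in records]
incomes = [income for _, income in records]

# A reliably positive slope would mirror the paper's finding that
# classification confidence rises with socioeconomic status.
fit = linregress(incomes, confidences)
print(f"slope={fit.slope:.4f}, p={fit.pvalue:.4g}")
```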
Related papers
- Fairness and Bias Mitigation in Computer Vision: A Survey [61.01658257223365]
Computer vision systems are increasingly being deployed in high-stakes real-world applications.
There is a dire need to ensure that they do not propagate or amplify any discriminatory tendencies in historical or human-curated data.
This paper presents a comprehensive survey on fairness that summarizes and sheds light on ongoing trends and successes in the context of computer vision.
arXiv Detail & Related papers (2024-08-05T13:44:22Z)
- The Emerging AI Divide in the United States [2.0359927301080116]
This study characterizes spatial differences in U.S. residents' knowledge of a new generative AI tool, ChatGPT.
We observe the highest rates of users searching for ChatGPT in West Coast states and persistently low rates of search in Appalachian and Gulf states.
Although generative AI technologies may be novel, early differences in uptake appear to be following familiar paths of digital marginalization.
arXiv Detail & Related papers (2024-04-18T08:33:35Z)
- Evaluating Machine Perception of Indigeneity: An Analysis of ChatGPT's
Perceptions of Indigenous Roles in Diverse Scenarios [0.0]
This work offers a unique perspective on how technology perceives and potentially amplifies societal biases related to indigeneity in social computing.
The findings offer insights into the broader implications of indigeneity in critical computing.
arXiv Detail & Related papers (2023-10-13T16:46:23Z)
- Granularity at Scale: Estimating Neighborhood Socioeconomic Indicators
from High-Resolution Orthographic Imagery and Hybrid Learning [1.8369448205408005]
Overhead images can help fill in the gaps where community information is sparse.
Recent advancements in machine learning and computer vision have made it possible to quickly extract features from and detect patterns in image data.
In this work, we explore how well two approaches, a supervised convolutional neural network and semi-supervised clustering, can estimate population density, median household income, and educational attainment; a minimal sketch of the supervised half follows.
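For intuition, here is a hedged sketch of the supervised approach described above: a small CNN regressing neighborhood indicators from an overhead tile. The architecture, tile size, and target ordering are illustrative assumptions, not the paper's.

```python
# Toy CNN that regresses three neighborhood indicators (population
# density, median income, educational attainment) from an RGB overhead
# tile. Architecture is illustrative, not the paper's actual network.
import torch
import torch.nn as nn

class OverheadRegressor(nn.Module):
    def __init__(self, n_targets: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(64, n_targets)  # one output per indicator

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x))

model = OverheadRegressor()
tile = torch.randn(1, 3, 224, 224)   # dummy orthographic image tile
print(model(tile).shape)             # torch.Size([1, 3])
```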
arXiv Detail & Related papers (2023-09-28T19:30:26Z)
- Ecosystem-level Analysis of Deployed Machine Learning Reveals Homogeneous Outcomes [72.13373216644021]
We study the societal impact of machine learning by considering the collection of models that are deployed in a given context.
We find deployed machine learning is prone to systemic failure, meaning some users are exclusively misclassified by all models available.
These examples demonstrate ecosystem-level analysis has unique strengths for characterizing the societal impact of machine learning.
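The systemic-failure notion above reduces to a simple computation; the sketch below shows one reading of it (not the authors' code) on a hypothetical correctness matrix.

```python
# A user experiences "systemic failure" when every deployed model
# misclassifies them. `preds` is a hypothetical models x users matrix
# of 0/1 correctness indicators.
import numpy as np

preds = np.array([        # rows: models, cols: users (1 = correct)
    [1, 0, 1, 0],
    [1, 0, 0, 0],
    [0, 0, 1, 0],
])
systemic_failure = (preds == 0).all(axis=0)   # failed by ALL models
print(systemic_failure)                       # [False  True False  True]
print("systemic failure rate:", systemic_failure.mean())
```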
arXiv Detail & Related papers (2023-07-12T01:11:52Z)
- Stable Bias: Analyzing Societal Representations in Diffusion Models [72.27121528451528]
We propose a new method for exploring the social biases in Text-to-Image (TTI) systems.
Our approach relies on characterizing the variation in generated images triggered by enumerating gender and ethnicity markers in the prompts.
We leverage this method to analyze images generated by 3 popular TTI systems and find that while all of their outputs show correlations with US labor demographics, they also consistently under-represent marginalized identities to different extents.
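A minimal sketch of the prompt-enumeration step this describes, with illustrative marker lists and template (the paper's actual prompts and attribute sets may differ); image generation and downstream comparison are omitted.

```python
# Build a controlled grid of prompts by enumerating identity markers.
# Each prompt would then be sent to a TTI system and the outputs compared.
from itertools import product

TEMPLATE = "a photo of a {marker} {profession}"
GENDER_MARKERS = ["woman", "man", "non-binary person"]
ETHNICITY_MARKERS = ["Black", "East Asian", "Hispanic", "White"]
PROFESSIONS = ["doctor", "janitor", "CEO"]

prompts = [
    TEMPLATE.format(marker=f"{eth} {gen}", profession=prof)
    for gen, eth, prof in product(GENDER_MARKERS, ETHNICITY_MARKERS,
                                  PROFESSIONS)
]
print(len(prompts))   # 3 * 4 * 3 = 36 controlled prompt variants
print(prompts[0])     # "a photo of a Black woman doctor"
```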
arXiv Detail & Related papers (2023-03-20T19:32:49Z)
- D-BIAS: A Causality-Based Human-in-the-Loop System for Tackling
Algorithmic Bias [57.87117733071416]
We propose D-BIAS, a visual interactive tool that embodies a human-in-the-loop AI approach for auditing and mitigating social biases.
A user can detect the presence of bias against a group by identifying unfair causal relationships in the causal network.
For each interaction, say weakening/deleting a biased causal edge, the system uses a novel method to simulate a new (debiased) dataset.
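As a toy illustration of why deleting a causal edge can debias simulated data, consider a made-up linear structural causal model; this is not D-BIAS's actual simulation method.

```python
# Deleting the direct causal edge from a protected attribute to the
# outcome, then resampling, removes the group-level outcome gap.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)        # protected attribute (0 or 1)
skill = rng.normal(size=n)

def simulate(edge_weight: float) -> np.ndarray:
    """Outcome = edge_weight * group + skill + noise."""
    return edge_weight * group + skill + rng.normal(scale=0.1, size=n)

biased = simulate(edge_weight=1.0)   # direct group -> outcome edge
debiased = simulate(edge_weight=0.0) # edge deleted by the user

for name, y in [("biased", biased), ("debiased", debiased)]:
    gap = y[group == 1].mean() - y[group == 0].mean()
    print(f"{name}: mean outcome gap = {gap:.2f}")
```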
arXiv Detail & Related papers (2022-08-10T03:41:48Z)
- Investigating Participation Mechanisms in EU Code Week [68.8204255655161]
Digital competence (DC) is a broad set of skills, attitudes, and knowledge for the confident, critical, and responsible use of digital technologies.
The aim of the manuscript is to offer a detailed and comprehensive statistical description of Code Week's participation in the EU Member States.
arXiv Detail & Related papers (2022-05-29T19:16:03Z)
- Fairness Indicators for Systematic Assessments of Visual Feature
Extractors [21.141633753573764]
We propose three fairness indicators, which aim at quantifying harms and biases of visual systems.
Our indicators use existing publicly available datasets collected for fairness evaluations.
These indicators are not intended to be a substitute for a thorough analysis of the broader impact of the new computer vision technologies.
arXiv Detail & Related papers (2022-02-15T17:45:33Z)
- Statistical discrimination in learning agents [64.78141757063142]
Statistical discrimination emerges in agent policies as a function of both the bias in the training population and the agent architecture.
We show that less discrimination emerges with agents that use recurrent neural networks, and when their training environment has less bias.
arXiv Detail & Related papers (2021-10-21T18:28:57Z)
- Fair Representation Learning for Heterogeneous Information Networks [35.80367469624887]
We propose a comprehensive set of de-biasing methods for fair HIN representation learning.
We study the behavior of these algorithms, especially their capability in balancing the trade-off between fairness and prediction accuracy.
We evaluate the performance of the proposed methods in an automated career counseling application.
arXiv Detail & Related papers (2021-04-18T08:28:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.