Evaluating Machine Perception of Indigeneity: An Analysis of ChatGPT's
Perceptions of Indigenous Roles in Diverse Scenarios
- URL: http://arxiv.org/abs/2310.09237v1
- Date: Fri, 13 Oct 2023 16:46:23 GMT
- Title: Evaluating Machine Perception of Indigeneity: An Analysis of ChatGPT's
Perceptions of Indigenous Roles in Diverse Scenarios
- Authors: Cecilia Delgado Solorzano, Carlos Toxtli Hernandez
- Abstract summary: This work offers a unique perspective on how technology perceives and potentially amplifies societal biases related to indigeneity in social computing.
The findings offer insights into the broader implications of indigeneity in critical computing.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Large Language Models (LLMs), like ChatGPT, are fundamentally tools trained
on vast data, reflecting diverse societal impressions. This paper aims to
investigate LLMs' self-perceived bias concerning indigeneity when simulating
scenarios of indigenous people performing various roles. Through generating and
analyzing multiple scenarios, this work offers a unique perspective on how
technology perceives and potentially amplifies societal biases related to
indigeneity in social computing. The findings offer insights into the broader
implications of indigeneity in critical computing.
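The generation pipeline itself is not reproduced on this page, but a minimal sketch of the scenario-simulation loop the abstract describes could look like the following (the role list, prompt template, and model name are illustrative assumptions, not the authors' actual protocol; the OpenAI Python client is assumed):

```python
# Minimal sketch of scenario generation for bias analysis.
# Assumptions: the role set, prompt wording, and model name are illustrative,
# not the authors' actual experimental protocol.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

ROLES = ["doctor", "engineer", "teacher", "farmer"]  # hypothetical role set

def simulate_scenario(role: str) -> str:
    """Ask the model to narrate an indigenous person performing a role."""
    prompt = (
        f"Write a short scenario in which an indigenous person works as a {role}. "
        "Describe their day and how others perceive them."
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Collect multiple generations per role so recurring bias patterns can be analyzed.
corpus = {role: [simulate_scenario(role) for _ in range(5)] for role in ROLES}
```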
Related papers
- Balancing the Scales: Enhancing Fairness in Facial Expression Recognition with Latent Alignment [5.784550537553534]
This work leverages representation learning based on latent spaces to mitigate bias in facial expression recognition systems.
It also enhances a deep learning model's fairness and overall accuracy.
arXiv Detail & Related papers (2024-10-25T10:03:10Z) - Digital Divides in Scene Recognition: Uncovering Socioeconomic Biases in
Deep Learning Systems [0.0]
We investigate the biases of deep convolutional neural networks (dCNNs) in scene classification.
We use nearly one million images from global and US sources, including user-submitted home photographs and Airbnb listings.
Our analyses revealed significant socioeconomic bias, where pretrained dCNNs demonstrated lower classification accuracy, lower classification confidence, and a higher tendency to assign labels that could be offensive.
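A minimal sketch of the per-group comparison this summary describes, assuming predictions, confidences, and socioeconomic group labels have already been collected (the record schema is a hypothetical illustration, not the paper's actual data format):

```python
# Sketch of per-group accuracy and confidence comparison.
# Assumption: each record carries a socioeconomic 'group' label, a boolean
# 'correct' flag, and the classifier's 'confidence' for its predicted label.
from collections import defaultdict

def per_group_stats(records):
    """records: iterable of dicts with 'group', 'correct' (bool), 'confidence' (float)."""
    stats = defaultdict(lambda: {"n": 0, "correct": 0, "conf_sum": 0.0})
    for r in records:
        s = stats[r["group"]]
        s["n"] += 1
        s["correct"] += int(r["correct"])
        s["conf_sum"] += r["confidence"]
    return {
        g: {"accuracy": s["correct"] / s["n"],
            "mean_confidence": s["conf_sum"] / s["n"]}
        for g, s in stats.items()
    }

# e.g. per_group_stats([{"group": "low_income", "correct": True, "confidence": 0.62}, ...])
```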
arXiv Detail & Related papers (2024-01-23T21:22:06Z) - Survey of Social Bias in Vision-Language Models [65.44579542312489]
The survey aims to provide researchers with high-level insight into the similarities and differences among social bias studies in pre-trained models across NLP, CV, and VL.
The findings and recommendations presented here can benefit the ML community, fostering the development of fairer, less biased AI models.
arXiv Detail & Related papers (2023-09-24T15:34:56Z) - Machine-assisted quantitizing designs: augmenting humanities and social sciences with artificial intelligence [0.0]
Large language models (LLMs) present an unprecedented opportunity to scale up data analytics in the humanities and social sciences.
We build on mixed-methods quantitizing and converting design principles, together with feature analysis from linguistics, to transparently integrate human expertise with machine scalability.
The approach is discussed and demonstrated in over a dozen LLM-assisted case studies, covering 9 diverse languages, multiple disciplines and tasks.
arXiv Detail & Related papers (2023-09-24T14:21:50Z) - Fine-grained Affective Processing Capabilities Emerging from Large
Language Models [7.17010996725842]
We explore ChatGPT's zero-shot ability to perform affective computing tasks using prompting alone.
We show that ChatGPT a) performs meaningful sentiment analysis in the Valence, Arousal and Dominance dimensions, b) has meaningful emotion representations in terms of emotion categories, and c) can perform basic appraisal-based emotion elicitation of situations.
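As a rough illustration of this prompting setup, the sketch below asks a chat model for Valence/Arousal/Dominance ratings of a text (the prompt wording, rating scale, and JSON parsing are assumptions, not the paper's exact protocol):

```python
# Hedged sketch of zero-shot VAD sentiment prompting.
# Assumptions: the prompt wording, 1-9 scale, and JSON reply format are
# illustrative; a real pipeline would validate the model's output.
import json
from openai import OpenAI

client = OpenAI()

def vad_scores(text: str) -> dict:
    """Ask the model for Valence/Arousal/Dominance ratings on a 1-9 scale."""
    prompt = (
        "Rate the following text on Valence, Arousal, and Dominance, each on a "
        "1-9 scale. Reply only with JSON such as "
        '{"valence": 5, "arousal": 5, "dominance": 5}.\n\n'
        f"Text: {text}"
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return json.loads(response.choices[0].message.content)
```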
arXiv Detail & Related papers (2023-09-04T15:32:47Z) - Bias and Fairness in Large Language Models: A Survey [73.87651986156006]
We present a comprehensive survey of bias evaluation and mitigation techniques for large language models (LLMs).
We first consolidate, formalize, and expand notions of social bias and fairness in natural language processing.
We then unify the literature by proposing three intuitive taxonomies: two for bias evaluation and one for mitigation.
arXiv Detail & Related papers (2023-09-02T00:32:55Z) - Towards Fair and Explainable AI using a Human-Centered AI Approach [5.888646114353372]
We present 5 research projects that aim to enhance explainability and fairness in classification systems and word embeddings.
The first project explores the utility/downsides of introducing local model explanations as interfaces for machine teachers.
The second project presents D-BIAS, a causality-based human-in-the-loop visual tool for identifying and mitigating social biases in datasets.
The third project presents WordBias, a visual interactive tool that helps audit pre-trained static word embeddings for biases against various groups; a minimal association-score sketch follows this list.
The fourth project presents DramatVis Personae, a visual analytics tool that helps identify social
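Returning to WordBias, a minimal sketch of the kind of embedding-bias audit such a tool supports, using a basic WEAT-style cosine association score (the word lists and embedding source are illustrative assumptions, not the tool's actual implementation):

```python
# Hedged sketch of a simple embedding-bias audit in the spirit of WordBias.
# Assumptions: a WEAT-style cosine association score; word lists and the
# embedding source are illustrative, not the tool's real internals.
import numpy as np

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(word_vec, group_a, group_b):
    """Mean similarity to group A attribute vectors minus group B's."""
    return (np.mean([cosine(word_vec, a) for a in group_a])
            - np.mean([cosine(word_vec, b) for b in group_b]))

# embeddings = {word: np.array(...)} loaded from a static model such as GloVe.
# A positive score suggests "nurse" sits closer to the female attribute set:
# association(embeddings["nurse"], [embeddings["she"]], [embeddings["he"]])
```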
arXiv Detail & Related papers (2023-06-12T21:08:55Z) - Stable Bias: Analyzing Societal Representations in Diffusion Models [72.27121528451528]
We propose a new method for exploring the social biases in Text-to-Image (TTI) systems.
Our approach relies on characterizing the variation in generated images triggered by enumerating gender and ethnicity markers in the prompts.
We leverage this method to analyze images generated by 3 popular TTI systems and find that while all of their outputs show correlations with US labor demographics, they also consistently under-represent marginalized identities to different extents.
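A minimal sketch of the prompt-enumeration step this summary describes (the marker lists and the template are illustrative assumptions drawn from the summary, not the authors' released code):

```python
# Sketch of counterfactual prompt enumeration for a TTI bias audit.
# Assumptions: the identity-marker lists, professions, and prompt template
# are illustrative stand-ins for the authors' actual choices.
from itertools import product

GENDER_MARKERS = ["woman", "man", "non-binary person"]
ETHNICITY_MARKERS = ["Black", "White", "Latinx", "Indigenous", "East Asian"]
PROFESSIONS = ["doctor", "janitor", "CEO"]  # hypothetical subset

def build_prompts():
    """Enumerate identity-marked prompts whose generated images are then compared."""
    return [
        f"A photo of a {ethnicity} {gender} working as a {job}"
        for ethnicity, gender, job in product(ETHNICITY_MARKERS, GENDER_MARKERS, PROFESSIONS)
    ]

prompts = build_prompts()  # each prompt is sent to the TTI system under audit
```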
arXiv Detail & Related papers (2023-03-20T19:32:49Z) - Towards Understanding and Mitigating Social Biases in Language Models [107.82654101403264]
Large-scale pretrained language models (LMs) can be potentially dangerous in manifesting undesirable representational biases.
We propose steps towards mitigating social biases during text generation.
Our empirical results and human evaluation demonstrate effectiveness in mitigating bias while retaining crucial contextual information.
arXiv Detail & Related papers (2021-06-24T17:52:43Z) - Two-Faced Humans on Twitter and Facebook: Harvesting Social Multimedia
for Human Personality Profiling [74.83957286553924]
We infer the Myers-Briggs Personality Type indicators by applying a novel multi-view fusion framework called "PERS".
Our experimental results demonstrate PERS's ability to learn from multi-view data for personality profiling by efficiently leveraging the significantly different data arriving from diverse social multimedia sources.
arXiv Detail & Related papers (2021-06-20T10:48:49Z) - Human Trajectory Forecasting in Crowds: A Deep Learning Perspective [89.4600982169]
We present an in-depth analysis of existing deep learning-based methods for modelling social interactions.
We propose two knowledge-based data-driven methods to effectively capture these social interactions.
We develop a large scale interaction-centric benchmark TrajNet++, a significant yet missing component in the field of human trajectory forecasting.
arXiv Detail & Related papers (2020-07-07T17:19:56Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.