Exploring Gender and Race Biases in the NFT Market
- URL: http://arxiv.org/abs/2304.06484v1
- Date: Wed, 29 Mar 2023 17:38:11 GMT
- Title: Exploring Gender and Race Biases in the NFT Market
- Authors: Howard Zhong, Mark Hamilton
- Abstract summary: Non-Fungible Tokens (NFTs) are non-interchangeable assets, usually digital art, which are stored on the blockchain.
Preliminary studies find that female and darker-skinned NFTs are valued less than their male and lighter-skinned counterparts.
We test the statistical significance of race and gender biases in the prices of CryptoPunks and present the first study of gender bias in the broader NFT market.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Non-Fungible Tokens (NFTs) are non-interchangeable assets, usually digital art, which are stored on the blockchain. Preliminary studies find that female and darker-skinned NFTs are valued less than their male and lighter-skinned counterparts. However, these studies analyze only the CryptoPunks collection. We test the statistical significance of race and gender biases in the prices of CryptoPunks and present the first study of gender bias in the broader NFT market. We find evidence of racial bias but not gender bias. Our work also introduces a dataset of gender-labeled NFT collections to advance the broader study of social equity in this emerging market.
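The abstract reports testing the statistical significance of race and gender price differences but does not specify which test was used. Below is a minimal sketch of one plausible approach, assuming a non-parametric Mann-Whitney U test on sale prices grouped by a labeled attribute; the `sales` DataFrame and its `price_eth` and `gender` columns are hypothetical stand-ins, not the paper's actual dataset or method.

```python
# Sketch: two-group significance test on NFT sale prices by a labeled attribute.
# Assumptions (not from the paper): a DataFrame with hypothetical columns
# "price_eth" and "gender"; the Mann-Whitney U test stands in for whatever
# test the authors actually used.
import pandas as pd
from scipy.stats import mannwhitneyu

def price_bias_test(sales: pd.DataFrame, attribute: str, group_a: str, group_b: str) -> dict:
    """Compare sale-price distributions between two groups of a labeled attribute."""
    a = sales.loc[sales[attribute] == group_a, "price_eth"].dropna()
    b = sales.loc[sales[attribute] == group_b, "price_eth"].dropna()
    # Rank-based test, so it is robust to the heavy right skew of sale prices.
    stat, p_value = mannwhitneyu(a, b, alternative="two-sided")
    return {
        f"median_{group_a}": float(a.median()),
        f"median_{group_b}": float(b.median()),
        "u_statistic": float(stat),
        "p_value": float(p_value),
    }

# Toy example (illustrative values only, not real CryptoPunks sales).
sales = pd.DataFrame({
    "price_eth": [12.0, 9.5, 15.2, 8.1, 7.9, 11.3, 6.4, 10.0],
    "gender": ["male", "female", "male", "female", "female", "male", "female", "male"],
})
print(price_bias_test(sales, "gender", "male", "female"))
```

A rank-based test is used here because NFT sale prices are typically heavy-tailed; the paper's gender-labeled collections could be substituted for the toy frame in the same way.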
Related papers
- Beyond Binary Gender: Evaluating Gender-Inclusive Machine Translation with Ambiguous Attitude Words [85.48043537327258]
Existing machine translation gender bias evaluations are primarily focused on male and female genders.
This study presents the AmbGIMT (Gender-Inclusive Machine Translation with Ambiguous attitude words) benchmark.
We propose a novel process to evaluate gender bias based on the Emotional Attitude Score (EAS), which is used to quantify ambiguous attitude words.
arXiv Detail & Related papers (2024-07-23T08:13:51Z) - GenderBias-VL: Benchmarking Gender Bias in Vision Language Models via Counterfactual Probing [72.0343083866144]
This paper introduces the GenderBias-VL benchmark to evaluate occupation-related gender bias in Large Vision-Language Models.
Using our benchmark, we extensively evaluate 15 commonly used open-source LVLMs and state-of-the-art commercial APIs.
Our findings reveal widespread gender biases in existing LVLMs.
arXiv Detail & Related papers (2024-06-30T05:55:15Z) - The Impact of Debiasing on the Performance of Language Models in Downstream Tasks is Underestimated [70.23064111640132]
We compare the impact of debiasing on performance across multiple downstream tasks using a wide range of benchmark datasets.
Experiments show that the effects of debiasing are consistently underestimated across all tasks.
arXiv Detail & Related papers (2023-09-16T20:25:34Z) - On the Mechanics of NFT Valuation: AI Ethics and Social Media [9.0270684890377]
We study how sentiments in social media, together with gender and skin tone, contribute to NFT valuations.
Although people's attitudes towards CryptoPunks are primarily positive, our findings reflect imbalances in transaction activities and pricing based on gender and skin tone.
arXiv Detail & Related papers (2023-07-13T03:12:00Z) - VisoGender: A dataset for benchmarking gender bias in image-text pronoun resolution [80.57383975987676]
VisoGender is a novel dataset for benchmarking gender bias in vision-language models.
We focus on occupation-related biases within a hegemonic system of binary gender, inspired by Winograd and Winogender schemas.
We benchmark several state-of-the-art vision-language models and find that they demonstrate bias in resolving binary gender in complex scenes.
arXiv Detail & Related papers (2023-06-21T17:59:51Z) - Towards Understanding Gender-Seniority Compound Bias in Natural Language Generation [64.65911758042914]
We investigate how seniority impacts the degree of gender bias exhibited in pretrained neural generation models.
Our results show that GPT-2 amplifies bias, treating women as junior and men as senior more often than the ground truth does in both domains.
These results suggest that NLP applications built using GPT-2 may harm women in professional capacities.
arXiv Detail & Related papers (2022-05-19T20:05:02Z) - A Survey on Gender Bias in Natural Language Processing [22.91475787277623]
We present a survey of 304 papers on gender bias in natural language processing.
We compare and contrast approaches to detecting and mitigating gender bias.
We find that research on gender bias suffers from four core limitations.
arXiv Detail & Related papers (2021-12-28T14:54:18Z) - Evaluating Gender Bias in Natural Language Inference [5.034017602990175]
We propose an evaluation methodology to measure gender bias in natural language understanding through inference.
We use our challenge task to probe state-of-the-art NLI models for gender stereotypes associated with occupations.
Our findings suggest that three models trained on MNLI and SNLI datasets are significantly prone to gender-induced prediction errors.
arXiv Detail & Related papers (2021-05-12T09:41:51Z) - Multi-Dimensional Gender Bias Classification [67.65551687580552]
Machine learning models can inadvertently learn socially undesirable patterns when training on gender biased text.
We propose a general framework that decomposes gender bias in text along several pragmatic and semantic dimensions.
Using this fine-grained framework, we automatically annotate eight large-scale datasets with gender information.
arXiv Detail & Related papers (2020-05-01T21:23:20Z) - Do Neural Ranking Models Intensify Gender Bias? [13.37092521347171]
We first provide a bias measurement framework which includes two metrics to quantify the degree of the unbalanced presence of gender-related concepts in a given IR model's ranking list.
We then apply a set of queries to the MS MARCO Passage retrieval collection and measure the gender bias of a BM25 model and several recent neural ranking models.
Results show that while all models are strongly biased toward males, the neural models, and in particular those based on contextualized embeddings, significantly intensify gender bias.
arXiv Detail & Related papers (2020-05-01T13:31:11Z) - Unsupervised Discovery of Implicit Gender Bias [38.59057512390926]
We take an unsupervised approach to identifying gender bias against women at a comment level.
Our main challenge is forcing the model to focus on signs of implicit bias, rather than other artifacts in the data.
arXiv Detail & Related papers (2020-04-17T17:36:20Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.