Fairness for Unobserved Characteristics: Insights from Technological
Impacts on Queer Communities
- URL: http://arxiv.org/abs/2102.04257v2
- Date: Tue, 9 Feb 2021 21:04:58 GMT
- Title: Fairness for Unobserved Characteristics: Insights from Technological
Impacts on Queer Communities
- Authors: Nenad Tomasev, Kevin R. McKee, Jackie Kay, Shakir Mohamed
- Abstract summary: Sexual orientation and gender identity are prototypical instances of unobserved characteristics.
The paper argues for new approaches to algorithmic fairness that break away from the prevailing assumption of observed characteristics.
- Score: 7.485814345656486
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Advances in algorithmic fairness have largely omitted sexual orientation and
gender identity. We explore queer concerns in privacy, censorship, language,
online safety, health, and employment to study the positive and negative
effects of artificial intelligence on queer communities. These issues
underscore the need for new directions in fairness research that take into
account a multiplicity of considerations, from privacy preservation, context
sensitivity and process fairness, to an awareness of sociotechnical impact and
the increasingly important role of inclusive and participatory research
processes. Most current approaches for algorithmic fairness assume that the
target characteristics for fairness--frequently, race and legal gender--can be
observed or recorded. Sexual orientation and gender identity are prototypical
instances of unobserved characteristics, which are frequently missing, unknown
or fundamentally unmeasurable. This paper highlights the importance of
developing new approaches for algorithmic fairness that break away from the
prevailing assumption of observed characteristics.
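To make the observed-characteristics assumption concrete, consider how a standard group-fairness audit is computed. The following is a minimal sketch, not taken from the paper: it assumes hypothetical inputs `y_pred` (binary model decisions) and `group` (a recorded protected attribute) and shows that a metric such as the demographic parity difference is simply undefined when the attribute is missing or unmeasurable, as is often the case for sexual orientation and gender identity.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-decision rates between two recorded groups.

    y_pred : array of 0/1 model decisions
    group  : array of 0/1 protected-attribute labels, or None when the
             characteristic is unobserved.
    """
    if group is None:
        # Without per-individual labels the metric cannot be evaluated at all;
        # this is the observed-characteristics assumption the paper questions.
        raise ValueError("protected attribute is unobserved; metric is undefined")
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

# With a recorded attribute the audit is routine:
print(demographic_parity_difference([1, 0, 1, 1], [0, 0, 1, 1]))  # 0.5
# With an unobserved one it fails outright:
# demographic_parity_difference([1, 0, 1, 1], None)  # raises ValueError
```

The sketch is only meant to show why group-fairness metrics of this form presuppose a recorded attribute for every individual.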
Related papers
- A Tutorial On Intersectionality in Fair Rankings [1.4883782513177093]
Biases can lead to discriminatory outcomes in a data-driven world.
Efforts towards responsible data science and responsible artificial intelligence aim to mitigate these biases.
arXiv Detail & Related papers (2025-02-07T21:14:21Z) - On the "Illusion" of Gender Bias in Face Recognition: Explaining the Fairness Issue Through Non-demographic Attributes [7.602456562464879]
Face recognition systems exhibit significant accuracy differences based on the user's gender.
We propose a toolchain to effectively decorrelate and aggregate facial attributes to enable a less-biased gender analysis.
Experiments show that the gender gap vanishes when images of male and female subjects share specific attributes.
arXiv Detail & Related papers (2025-01-21T10:21:19Z) - Toward Fairer Face Recognition Datasets [69.04239222633795]
Face recognition and verification are computer vision tasks whose performance has progressed with the introduction of deep representations.
Ethical, legal, and technical challenges due to the sensitive character of face data and biases in real training datasets hinder their development.
We promote fairness by introducing a demographic attributes balancing mechanism in generated training datasets.
arXiv Detail & Related papers (2024-06-24T12:33:21Z) - Fair Models in Credit: Intersectional Discrimination and the
Amplification of Inequity [5.333582981327497]
The authors demonstrate the impact of algorithmic bias in the microfinance context.
We find that in addition to legally protected characteristics, sensitive attributes such as single parent status and number of children can result in imbalanced harm.
arXiv Detail & Related papers (2023-08-01T10:34:26Z) - Demographic-Reliant Algorithmic Fairness: Characterizing the Risks of
Demographic Data Collection in the Pursuit of Fairness [0.0]
We consider calls to collect more data on demographics to enable algorithmic fairness.
We show how these techniques largely ignore broader questions of data governance and systemic oppression.
arXiv Detail & Related papers (2022-04-18T04:50:09Z) - SF-PATE: Scalable, Fair, and Private Aggregation of Teacher Ensembles [50.90773979394264]
This paper studies a model that protects the privacy of individuals' sensitive information while also allowing it to learn non-discriminatory predictors.
A key characteristic of the proposed model is that it enables the adoption of off-the-shelf, non-private fair models to create a privacy-preserving and fair model.
arXiv Detail & Related papers (2022-04-11T14:42:54Z) - Anatomizing Bias in Facial Analysis [86.79402670904338]
Existing facial analysis systems have been shown to yield biased results against certain demographic subgroups.
It has become imperative to ensure that these systems do not discriminate based on an individual's gender, identity, or skin tone.
This has led to research in the identification and mitigation of bias in AI systems.
arXiv Detail & Related papers (2021-12-13T09:51:13Z) - Measuring Fairness Under Unawareness of Sensitive Attributes: A
Quantification-Based Approach [131.20444904674494]
We tackle the problem of measuring group fairness under unawareness of sensitive attributes.
We show that quantification approaches are particularly well suited to the fairness-under-unawareness problem (see the sketch after this list).
arXiv Detail & Related papers (2021-09-17T13:45:46Z) - Differentially Private and Fair Deep Learning: A Lagrangian Dual
Approach [54.32266555843765]
This paper studies a model that protects the privacy of individuals' sensitive information while also allowing it to learn non-discriminatory predictors.
The method relies on the notion of differential privacy and the use of Lagrangian duality to design neural networks that can accommodate fairness constraints.
arXiv Detail & Related papers (2020-09-26T10:50:33Z) - Learning Emotional-Blinded Face Representations [77.7653702071127]
We propose two face representations that are blind to facial expressions associated with emotional responses.
This work is motivated by new international regulations for personal data protection.
arXiv Detail & Related papers (2020-09-18T09:24:10Z)
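As a rough illustration of the quantification idea referenced in the "Measuring Fairness Under Unawareness of Sensitive Attributes" entry above, the sketch below estimates group prevalence at the aggregate level instead of imputing the sensitive attribute per person. It is an assumption-laden illustration, not that paper's implementation: the proxy classifier, its `tpr`/`fpr` estimates, and the adjusted classify-and-count correction are stand-ins chosen here for concreteness.

```python
import numpy as np

def adjusted_classify_and_count(proxy_labels, tpr, fpr):
    """Estimate group prevalence from a proxy classifier's hard labels.

    proxy_labels : 0/1 predictions of a sensitive-attribute proxy classifier
    tpr, fpr     : that classifier's true/false positive rates, estimated on
                   a small validation set where the attribute IS known.
    Assumes tpr > fpr (the proxy is better than chance).
    """
    cc = np.mean(proxy_labels)       # naive classify-and-count estimate
    acc = (cc - fpr) / (tpr - fpr)   # correct for the proxy's error rates
    return float(np.clip(acc, 0.0, 1.0))

def demographic_parity_gap_under_unawareness(y_pred, proxy_labels, tpr, fpr):
    """Compare estimated group prevalence among positive vs. negative decisions.

    If the group is over-represented among rejected instances relative to
    accepted ones, the gap signals a potential demographic-parity violation,
    even though no individual's attribute is ever recorded.
    """
    y_pred = np.asarray(y_pred)
    proxy_labels = np.asarray(proxy_labels)
    prev_pos = adjusted_classify_and_count(proxy_labels[y_pred == 1], tpr, fpr)
    prev_neg = adjusted_classify_and_count(proxy_labels[y_pred == 0], tpr, fpr)
    return prev_pos - prev_neg
```

Working with group-level prevalence estimates rather than per-individual attribute imputations is what makes quantification compatible with the privacy concerns raised for unobserved characteristics such as sexual orientation and gender identity.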
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.