How Could Equality and Data Protection Law Shape AI Fairness for People with Disabilities?
- URL: http://arxiv.org/abs/2107.05704v1
- Date: Mon, 12 Jul 2021 19:41:01 GMT
- Title: How Could Equality and Data Protection Law Shape AI Fairness for People with Disabilities?
- Authors: Reuben Binns, Reuben Kirkham
- Abstract summary: This article examines the concept of 'AI fairness' for people with disabilities from the perspective of data protection and equality law.
We argue that there is a need for a distinctive approach to AI fairness, due to the different ways in which discrimination and data protection law applies in respect of Disability.
- Score: 14.694420183754332
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This article examines the concept of 'AI fairness' for people with
disabilities from the perspective of data protection and equality law. This
examination demonstrates that there is a need for a distinctive approach to AI
fairness that is fundamentally different to that used for other protected
characteristics, due to the different ways in which discrimination and data
protection law applies in respect of Disability. We articulate this new agenda
for AI fairness for people with disabilities, explaining how combining data
protection and equality law creates new opportunities for disabled people's
organisations and assistive technology researchers alike to shape the use of
AI, as well as to challenge potential harmful uses.
Related papers
- Accessibility Considerations in the Development of an AI Action Plan [10.467658828071057]
We argue that there is a need for Accessibility to be represented in several important domains.
Data security and privacy risks, including risks arising from data collected by AI-based accessibility technologies.
Disability-specific AI risks and biases, including both direct bias (during AI use by the disabled person) and indirect bias (when AI is used by someone else on data relating to a disabled person).
arXiv Detail & Related papers (2025-03-14T21:57:23Z)
- Disability data futures: Achievable imaginaries for AI and disability data justice [2.0549239024359762]
Data are the medium through which individuals' identities are filtered in contemporary states and systems.
The history of data and AI is often one of disability exclusion, oppression, and the reduction of disabled experience.
This chapter brings together four academics and disability advocates to describe achievable imaginaries for artificial intelligence and disability data justice.
arXiv Detail & Related papers (2024-11-06T13:04:29Z)
- Implications of the AI Act for Non-Discrimination Law and Algorithmic Fairness [1.5029560229270191]
The topic of fairness in AI has sparked meaningful discussions in recent years.
From a legal perspective, many open questions remain.
The AI Act might present a tremendous step towards bridging the legal and the technical approaches to fairness.
arXiv Detail & Related papers (2024-03-29T09:54:09Z)
- Compatibility of Fairness Metrics with EU Non-Discrimination Laws: Demographic Parity & Conditional Demographic Disparity [3.5607241839298878]
Empirical evidence suggests that algorithmic decisions driven by Machine Learning (ML) techniques threaten to discriminate against legally protected groups or create new sources of unfairness.
This work assesses the extent to which legal fairness can be assured through fairness metrics and under fairness constraints.
Our experiments and analysis suggest that AI-assisted decision-making can be fair from a legal perspective, depending on the case at hand and the legal justification (a toy computation of the two metrics follows this entry).
arXiv Detail & Related papers (2023-06-14T09:38:05Z)
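As a rough illustration of the two metrics named in the entry above, here is a minimal sketch computed from scratch. The toy data, column names, and the particular definition of conditional demographic disparity used (the stratum-weighted disparity in the style of Wachter et al.) are assumptions for illustration, not taken from the paper.

```python
# Hypothetical toy example: demographic parity (DP) difference and conditional
# demographic disparity (CDD) computed from scratch with pandas.
import pandas as pd

df = pd.DataFrame({
    "group":    ["a", "a", "a", "b", "b", "b", "b", "a"],          # protected attribute
    "stratum":  ["s1", "s1", "s2", "s1", "s2", "s2", "s1", "s2"],  # legitimate factor
    "accepted": [1, 0, 1, 0, 0, 1, 0, 1],                          # model decision
})

# Demographic parity difference: P(accepted | group a) - P(accepted | group b).
rates = df.groupby("group")["accepted"].mean()
dp_diff = rates["a"] - rates["b"]

def dd(frame, group="a"):
    """Demographic disparity of one group within a stratum:
    P(group | rejected) - P(group | accepted)."""
    rejected = frame[frame["accepted"] == 0]
    accepted = frame[frame["accepted"] == 1]
    return rejected["group"].eq(group).mean() - accepted["group"].eq(group).mean()

# CDD: per-stratum disparities averaged with weights proportional to stratum size.
weights = df["stratum"].value_counts(normalize=True)
cdd = sum(weights[s] * dd(df[df["stratum"] == s]) for s in weights.index)

print(f"DP difference: {dp_diff:+.3f}, CDD: {cdd:+.3f}")
```

Conditioning on a legitimate stratifying factor is what distinguishes CDD from plain demographic parity: a gap that disappears within every stratum may be legally justifiable, which is the kind of case-by-case reading the summary points to.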
- Causal Fairness for Outcome Control [68.12191782657437]
We study a specific decision-making task called outcome control, in which an automated system aims to optimize an outcome variable $Y$ while being fair and equitable.
In this paper, we first analyze, through a causal lens, the notion of benefit, which captures how much a specific individual would benefit from a positive decision.
We then note that the benefit itself may be influenced by the protected attribute, and propose causal tools which can be used to analyze this (one standard formalization is sketched after this entry).
arXiv Detail & Related papers (2023-06-08T09:31:18Z)
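As a hedged sketch of one standard causal formalization of this "benefit" (the paper's exact definition may differ), the benefit of a positive decision $D$ for an individual with covariates $X = x$ can be written as a contrast of interventional outcomes:

$$\Delta(x) = \mathbb{E}\left[\,Y \mid do(D=1),\, X=x\,\right] - \mathbb{E}\left[\,Y \mid do(D=0),\, X=x\,\right],$$

and the summary's caveat is then the observation that $\Delta(x)$ can itself vary with the protected attribute along the causal pathways that the proposed tools decompose.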
- Measuring Equality in Machine Learning Security Defenses: A Case Study in Speech Recognition [56.69875958980474]
This work considers approaches to defending learned systems and how security defenses result in performance inequities across different sub-populations.
We find that many proposed methods can cause direct harm, such as false rejections and unequal benefits from robustness training.
We compare the equality of two rejection-based defenses, randomized smoothing and neural rejection, and find randomized smoothing more equitable due to its sampling mechanism for minority groups (a toy version of such a defense is sketched after this entry).
arXiv Detail & Related papers (2023-02-17T16:19:26Z)
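As a self-contained sketch of a randomized-smoothing-style rejection defense like the one compared above (not the paper's implementation; the stand-in classifier, noise level, and agreement threshold are hypothetical):

```python
# Toy randomized-smoothing rejection defense: classify many noisy copies of the
# input and abstain (return None) when the majority vote is not strong enough.
import numpy as np

def smoothed_predict(model, x, sigma=0.25, n_samples=100, min_agreement=0.7):
    """Majority-vote class over Gaussian-perturbed copies of x, or None."""
    rng = np.random.default_rng(0)
    noisy = x + sigma * rng.standard_normal((n_samples, *x.shape))
    votes = np.array([model(z) for z in noisy])      # one predicted label each
    labels, counts = np.unique(votes, return_counts=True)
    top = counts.argmax()
    if counts[top] / n_samples < min_agreement:
        return None                                  # reject: vote too weak
    return int(labels[top])

# Hypothetical stand-in classifier over 2-D inputs, for demonstration only.
model = lambda z: int(z.sum() > 0)
print(smoothed_predict(model, np.array([0.4, 0.3])))     # confident -> 1
print(smoothed_predict(model, np.array([0.01, -0.01])))  # borderline -> None
```

The abstention (`None`) is the "rejection" whose rate the paper compares across sub-populations.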
- Human-Centric Multimodal Machine Learning: Recent Advances and Testbed on AI-based Recruitment [66.91538273487379]
There is a certain consensus about the need to develop AI applications with a Human-Centric approach.
Human-Centric Machine Learning needs to be developed based on four main requirements: (i) utility and social good; (ii) privacy and data ownership; (iii) transparency and accountability; and (iv) fairness in AI-driven decision-making processes.
We study how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data.
arXiv Detail & Related papers (2023-02-13T16:44:44Z)
- Identifying, measuring, and mitigating individual unfairness for supervised learning models and application to credit risk models [3.818578543491318]
We focus on identifying and mitigating individual unfairness in AI solutions.
We also investigate the extent to which techniques for achieving individual fairness are effective at achieving group fairness.
We present experimental results for the individual unfairness mitigation techniques (a toy consistency-style measurement is sketched after this entry).
arXiv Detail & Related papers (2022-11-11T10:20:46Z)
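As a rough, hypothetical illustration of measuring individual unfairness (a consistency-style check: similar individuals should receive similar scores; the paper's own metrics and data may differ):

```python
# Toy consistency check for individual fairness: individuals who are close on
# the non-protected features should receive similar model scores.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def consistency(features, scores, k=3):
    """1 - mean |score(x) - mean score of x's k nearest neighbours|;
    values near 1 indicate locally smooth (individually fair) scoring."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(features)
    _, idx = nn.kneighbors(features)        # idx[:, 0] is the point itself
    neighbour_mean = scores[idx[:, 1:]].mean(axis=1)
    return 1.0 - np.abs(scores - neighbour_mean).mean()

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))                       # non-protected features
smooth = 1 / (1 + np.exp(-(X @ np.ones(4))))        # varies smoothly with X
noisy = rng.uniform(size=200)                       # ignores similarity
print(f"smooth scores: {consistency(X, smooth):.3f}")   # close to 1
print(f"random scores: {consistency(X, noisy):.3f}")    # noticeably lower
```

Group fairness could then be checked on the same scores by comparing group-wise means, which is the cross-over between individual and group fairness that the summary investigates.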
- Fair Machine Learning in Healthcare: A Review [90.22219142430146]
We analyze the intersection of fairness in machine learning and healthcare disparities.
We provide a critical review of the associated fairness metrics from a machine learning standpoint.
We propose several new research directions that hold promise for developing ethical and equitable ML applications in healthcare.
arXiv Detail & Related papers (2022-06-29T04:32:10Z)
- Tackling Algorithmic Disability Discrimination in the Hiring Process: An Ethical, Legal and Technical Analysis [2.294014185517203]
We discuss concerns and opportunities raised by AI-driven hiring in relation to disability discrimination.
We establish some starting points and design a roadmap for ethicists, lawmakers, advocates, and AI practitioners alike.
arXiv Detail & Related papers (2022-06-13T13:32:37Z)
- Trustworthy AI [75.99046162669997]
Brittleness to minor adversarial changes in the input data, limited ability to explain decisions, and bias in training data are some of the most prominent limitations of current AI systems.
We propose a tutorial on Trustworthy AI that addresses six critical issues in enhancing user and public trust in AI systems.
arXiv Detail & Related papers (2020-11-02T20:04:18Z)
- Differentially Private and Fair Deep Learning: A Lagrangian Dual Approach [54.32266555843765]
This paper studies a model that protects the privacy of individuals' sensitive information while also learning non-discriminatory predictors.
The method relies on the notion of differential privacy and the use of Lagrangian duality to design neural networks that can accommodate fairness constraints (a toy version of the dual update is sketched after this entry).
arXiv Detail & Related papers (2020-09-26T10:50:33Z)
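As a hedged sketch of the Lagrangian-duality idea in the entry above, with the differential privacy machinery omitted (a faithful version would clip per-sample gradients and add calibrated noise, as in DP-SGD); the model, data, tolerance, and step sizes are all hypothetical:

```python
# Toy Lagrangian-dual fairness training: a logistic model whose loss adds
# lam * |demographic parity gap|, with lam raised by dual ascent while the
# fairness constraint is violated.
import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 5
X = rng.normal(size=(n, d))
a = (rng.uniform(size=n) < 0.5).astype(float)                    # protected attribute
y = (X[:, 0] + 0.8 * a + rng.normal(size=n) > 0).astype(float)   # biased labels

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
w, lam, primal_lr, dual_lr, tol = np.zeros(d), 0.0, 0.1, 0.05, 0.02

for step in range(500):
    p = sigmoid(X @ w)
    gap = p[a == 1].mean() - p[a == 0].mean()        # soft demographic parity gap

    # Primal step on cross-entropy + lam * |gap|.
    grad_ce = X.T @ (p - y) / n
    s = p * (1 - p)                                  # d p / d logit
    grad_gap = (X[a == 1].T @ s[a == 1] / (a == 1).sum()
                - X[a == 0].T @ s[a == 0] / (a == 0).sum())
    w -= primal_lr * (grad_ce + lam * np.sign(gap) * grad_gap)

    # Dual ascent: grow the multiplier while |gap| exceeds the tolerance.
    lam = max(0.0, lam + dual_lr * (abs(gap) - tol))

print(f"final DP gap: {gap:+.3f}, lambda: {lam:.2f}")
```

The dual ascent step is the key design choice: rather than hand-tuning a fixed fairness weight, the multiplier grows automatically while the constraint is violated and relaxes once it is satisfied.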
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and accepts no responsibility for any consequences of its use.