Designing for Human Rights in AI
- URL: http://arxiv.org/abs/2005.04949v2
- Date: Mon, 6 Jul 2020 17:00:46 GMT
- Title: Designing for Human Rights in AI
- Authors: Evgeni Aizenberg and Jeroen van den Hoven
- Abstract summary: AI systems can help us make evidence-driven, efficient decisions, but can also confront us with unjustified, discriminatory decisions.
It is becoming evident that these technological developments are consequential to people's fundamental human rights.
Technical solutions to these complex socio-ethical problems are often developed without empirical study of societal context.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the age of big data, companies and governments are increasingly using
algorithms to inform hiring decisions, employee management, policing, credit
scoring, insurance pricing, and many more aspects of our lives. AI systems can
help us make evidence-driven, efficient decisions, but can also confront us
with unjustified, discriminatory decisions wrongly assumed to be accurate
because they are made automatically and quantitatively. It is becoming evident
that these technological developments are consequential to people's fundamental
human rights. Despite increasing attention to these urgent challenges in recent
years, technical solutions to these complex socio-ethical problems are often
developed without empirical study of societal context and the critical input of
societal stakeholders who are impacted by the technology. On the other hand,
calls for more ethically- and socially-aware AI often fail to provide answers
for how to proceed beyond stressing the importance of transparency,
explainability, and fairness. Bridging these socio-technical gaps and the deep
divide between abstract value language and design requirements is essential to
facilitate nuanced, context-dependent design choices that will support moral
and social values. In this paper, we bridge this divide through the framework
of Design for Values, drawing on methodologies of Value Sensitive Design and
Participatory Design to present a roadmap for proactively engaging societal
stakeholders to translate fundamental human rights into context-dependent
design requirements through a structured, inclusive, and transparent process.
Related papers
- Assistive AI for Augmenting Human Decision-making [3.379906135388703]
The paper shows how AI can assist in the complex process of decision-making while maintaining human oversight.
Central to our framework are the principles of privacy, accountability, and credibility.
arXiv Detail & Related papers (2024-10-18T10:16:07Z)
- Combining AI Control Systems and Human Decision Support via Robustness and Criticality [53.10194953873209]
We extend a methodology for adversarial explanations (AE) to state-of-the-art reinforcement learning frameworks.
We show that the learned AI control system demonstrates robustness against adversarial tampering.
In a training / learning framework, this technology can improve both the AI's decisions and explanations through human interaction.
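As a rough illustration of what "robustness against adversarial tampering" can mean in practice, the sketch below perturbs a policy network's input observation against the gradient of its chosen action and checks whether the decision flips. This is a generic FGSM-style probe, not the paper's adversarial-explanation methodology; the PyTorch policy, dimensions, and epsilon are assumptions made only for the example.

```python
# Illustrative sketch only: a generic FGSM-style probe of a policy network's
# robustness to adversarial tampering of its input observation.
# The architecture, dimensions, and epsilon are assumptions, not the paper's setup.
import torch
import torch.nn as nn

obs_dim, n_actions, epsilon = 8, 4, 0.05

policy = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))

obs = torch.randn(1, obs_dim, requires_grad=True)   # current observation
logits = policy(obs)
action = logits.argmax(dim=-1)                       # action the policy would take

# Gradient of the chosen action's logit w.r.t. the observation; stepping against
# its sign is a cheap perturbation that undermines that choice.
logits[0, action].backward()
tampered_obs = obs - epsilon * obs.grad.sign()

with torch.no_grad():
    tampered_action = policy(tampered_obs).argmax(dim=-1)

print("decision flipped under tampering:", bool(action != tampered_action))
```

A decision that flips under such a small perturbation is the kind of instability that robustness checks are meant to surface for human review.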
arXiv Detail & Related papers (2024-07-03T15:38:57Z)
- Exploiting the Margin: How Capitalism Fuels AI at the Expense of Minoritized Groups [0.0]
This paper explores the relationship between capitalism, racial injustice, and artificial intelligence (AI).
It argues that AI acts as a contemporary vehicle for age-old forms of exploitation.
The paper promotes an approach that integrates social justice and equity into the core of technological design and policy.
arXiv Detail & Related papers (2024-03-10T22:40:07Z)
- Towards Responsible AI in Banking: Addressing Bias for Fair Decision-Making [69.44075077934914]
"Responsible AI" emphasizes the critical nature of addressing biases within the development of a corporate culture.
This thesis is structured around three fundamental pillars: understanding bias, mitigating bias, and accounting for bias.
In line with open-source principles, we have released Bias On Demand and FairView as accessible Python packages.
arXiv Detail & Related papers (2024-01-13T14:07:09Z)
- Human-Centric Multimodal Machine Learning: Recent Advances and Testbed on AI-based Recruitment [66.91538273487379]
There is a certain consensus about the need to develop AI applications with a Human-Centric approach.
Human-Centric Machine Learning needs to be developed based on four main requirements: (i) utility and social good; (ii) privacy and data ownership; (iii) transparency and accountability; and (iv) fairness in AI-driven decision-making processes.
We study how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data.
arXiv Detail & Related papers (2023-02-13T16:44:44Z)
- Responsible AI Implementation: A Human-centered Framework for Accelerating the Innovation Process [0.8481798330936974]
This paper proposes a theoretical framework for responsible artificial intelligence (AI) implementation.
The proposed framework emphasizes a synergistic business technology approach for the agile co-creation process.
The framework emphasizes establishing and maintaining trust throughout the human-centered design and agile development of AI.
arXiv Detail & Related papers (2022-09-15T06:24:01Z)
- Causal Fairness Analysis [68.12191782657437]
We introduce a framework for understanding, modeling, and possibly solving issues of fairness in decision-making settings.
The main insight of our approach is to link the quantification of the disparities present in the observed data with the underlying, and often unobserved, collection of causal mechanisms.
Our effort culminates in the Fairness Map, which is the first systematic attempt to organize and explain the relationship between different criteria found in the literature.
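To make the "quantification of the disparities present in the observed data" concrete, the sketch below computes the total variation, the gap in positive-decision rates across a protected attribute, which is the observed quantity a causal analysis then tries to attribute to underlying mechanisms. The synthetic data and rates are assumptions for illustration only, not results or code from the paper.

```python
# Illustrative sketch only: compute the observed disparity (total variation)
# in a binary decision Y across a protected attribute X on synthetic data.
# The data-generating rates are assumptions, not figures from the paper.
import numpy as np

rng = np.random.default_rng(0)
x = rng.integers(0, 2, size=10_000)                  # protected attribute X
y = rng.binomial(1, np.where(x == 1, 0.35, 0.50))    # observed decisions Y

tv = y[x == 1].mean() - y[x == 0].mean()             # P(Y=1|X=1) - P(Y=1|X=0)
print(f"observed total variation (disparity): {tv:+.3f}")
```

The observed gap alone does not say whether it arises through direct, indirect, or spurious pathways; attributing it to those mechanisms is the step that requires the causal modeling the paper develops.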
arXiv Detail & Related papers (2022-07-23T01:06:34Z)
- Fairness in Agreement With European Values: An Interdisciplinary Perspective on AI Regulation [61.77881142275982]
This interdisciplinary position paper considers various concerns surrounding fairness and discrimination in AI, and discusses how AI regulations address them.
We first look at AI and fairness through the lenses of law, (AI) industry, sociotechnology, and (moral) philosophy, and present various perspectives.
We identify and propose the roles AI Regulation should take to make the endeavor of the AI Act a success in terms of AI fairness concerns.
arXiv Detail & Related papers (2022-06-08T12:32:08Z)
- Relational Artificial Intelligence [5.5586788751870175]
Even though AI is traditionally associated with rational decision making, understanding and shaping the societal impact of AI in all its facets requires a relational perspective.
A rational approach to AI, where computational algorithms drive decision making independent of human intervention, has been shown to result in bias and exclusion.
A relational approach, one that focuses on the relational nature of things, is needed to deal with the ethical, legal, societal, cultural, and environmental implications of AI.
arXiv Detail & Related papers (2022-02-04T15:29:57Z)
- Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can help address this challenge by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z)
- Confronting Abusive Language Online: A Survey from the Ethical and Human Rights Perspective [4.916009028580767]
We review a large body of NLP research on automatic abuse detection with a new focus on ethical challenges.
We highlight the need to examine the broad social impacts of this technology.
We identify several opportunities for rights-respecting, socio-technical solutions to detect and confront online abuse.
arXiv Detail & Related papers (2020-12-22T19:27:11Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences of its use.