A Human Rights-Based Approach to Responsible AI
- URL: http://arxiv.org/abs/2210.02667v1
- Date: Thu, 6 Oct 2022 04:07:53 GMT
- Title: A Human Rights-Based Approach to Responsible AI
- Authors: Vinodkumar Prabhakaran, Margaret Mitchell, Timnit Gebru, Iason Gabriel
- Abstract summary: We argue that a human rights framework orients the research in this space away from the machines and the risks of their biases, and towards humans and the risks to their rights.
- Score: 11.823731447853252
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Research on fairness, accountability, transparency and ethics of AI-based
interventions in society has gained much-needed momentum in recent years.
However, it lacks an explicit alignment with a set of normative values and
principles that guide this research and these interventions. Rather, an implicit
consensus is often assumed to hold for the values we impart into our models -
something that is at odds with the pluralistic world we live in. In this paper,
we put forth the doctrine of universal human rights as a set of globally
salient and cross-culturally recognized values that can serve as a
grounding framework for explicit value alignment in responsible AI - and
discuss its efficacy as a framework for civil society partnership and
participation. We argue that a human rights framework orients the research in
this space away from the machines and the risks of their biases, and towards
humans and the risks to their rights, essentially helping to center the
conversation around who is harmed, what harms they face, and how those harms
may be mitigated.
Related papers
- Technology as uncharted territory: Contextual integrity and the notion of AI as new ethical ground [55.2480439325792]
I argue that efforts to promote responsible and ethical AI can inadvertently contribute to and seemingly legitimize this disregard for established contextual norms.
I question the current narrow prioritization in AI ethics of moral innovation over moral preservation.
arXiv Detail & Related papers (2024-12-06T15:36:13Z)
- ValueCompass: A Framework of Fundamental Values for Human-AI Alignment [15.35489011078817]
We introduce ValueCompass, a framework of fundamental values grounded in psychological theory and a systematic review.
We apply Value to measure the value alignment of humans and language models (LMs) across four real-world vignettes.
Our findings uncover risky misalignment between humans and LMs, such as LMs agreeing with values like "Choose Own Goals" that humans largely reject.
arXiv Detail & Related papers (2024-09-15T02:13:03Z)
- Towards Responsible AI in Banking: Addressing Bias for Fair Decision-Making [69.44075077934914]
"Responsible AI" emphasizes the critical nature of addressing biases within the development of a corporate culture.
This thesis is structured around three fundamental pillars: understanding bias, mitigating bias, and accounting for bias.
In line with open-source principles, we have released Bias On Demand and FairView as accessible Python packages.
arXiv Detail & Related papers (2024-01-13T14:07:09Z)
- The Fairness Fair: Bringing Human Perception into Collective Decision-Making [16.300744216179545]
We argue that fair solutions should not only be deemed desirable by social planners (designers), but should also be governed by human and societal cognition.
We discuss how achieving this goal requires a broad transdisciplinary approach ranging from computing and AI to behavioral economics and human-AI interaction.
arXiv Detail & Related papers (2023-12-22T03:06:24Z)
- Unpacking the Ethical Value Alignment in Big Models [46.560886177083084]
This paper provides an overview of the risks and challenges associated with big models, surveys existing AI ethics guidelines, and examines the ethical implications arising from the limitations of these models.
We introduce a novel conceptual paradigm for aligning the ethical values of big models and discuss promising research directions for alignment criteria, evaluation, and methods.
arXiv Detail & Related papers (2023-10-26T16:45:40Z)
- Value Kaleidoscope: Engaging AI with Pluralistic Human Values, Rights, and Duties [68.66719970507273]
Value pluralism is the view that multiple correct values may be held in tension with one another.
As statistical learners, AI systems fit to averages by default, washing out potentially irreducible value conflicts.
We introduce ValuePrism, a large-scale dataset of 218k values, rights, and duties connected to 31k human-written situations.
arXiv Detail & Related papers (2023-09-02T01:24:59Z)
- Is the U.S. Legal System Ready for AI's Challenges to Human Values? [16.510834081597377]
This study investigates how effectively U.S. laws confront the challenges posed by Generative AI to human values.
We identify notable gaps and uncertainties within the existing legal framework regarding the protection of fundamental values.
We advocate for legal frameworks that evolve to recognize new threats and provide proactive, auditable guidelines to industry stakeholders.
arXiv Detail & Related papers (2023-08-30T09:19:06Z)
- Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how a lack of AI fairness can deepen biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If these issues persist, they could be reinforced by interactions with other risks and have severe implications for society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z)
- Factoring the Matrix of Domination: A Critical Review and Reimagination of Intersectionality in AI Fairness [55.037030060643126]
Intersectionality is a critical framework that allows us to examine how social inequalities persist.
We argue that adopting intersectionality as an analytical framework is pivotal to effectively operationalizing fairness.
arXiv Detail & Related papers (2023-03-16T21:02:09Z)
- Tensions Between the Proxies of Human Values in AI [20.303537771118048]
We argue that the AI community needs to consider all the consequences of choosing certain formulations of these pillars.
We point towards sociotechnical research for frameworks to address the latter, but push for broader efforts to implement these in practice.
arXiv Detail & Related papers (2022-12-14T21:13:48Z)
- Fairness in Agreement With European Values: An Interdisciplinary Perspective on AI Regulation [61.77881142275982]
This interdisciplinary position paper considers various concerns surrounding fairness and discrimination in AI, and discusses how AI regulations address them.
We first look at AI and fairness through the lenses of law, (AI) industry, sociotechnology, and (moral) philosophy, and present various perspectives.
We identify and propose the roles that AI regulation should play in making the AI Act a success with respect to AI fairness concerns.
arXiv Detail & Related papers (2022-06-08T12:32:08Z)
- Toward a Theory of Justice for Artificial Intelligence [2.28438857884398]
It holds that the basic structure of society should be understood as a composite of socio-technical systems.
As a consequence, egalitarian norms of justice apply to the technology when it is deployed in these contexts.
arXiv Detail & Related papers (2021-10-27T13:23:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.