Respect for Human Autonomy in Recommender Systems
- URL: http://arxiv.org/abs/2009.02603v1
- Date: Sat, 5 Sep 2020 21:39:34 GMT
- Title: Respect for Human Autonomy in Recommender Systems
- Authors: Lav R. Varshney
- Abstract summary: Many ethical systems point to respect for human autonomy as a key principle arising from human rights considerations, yet no specific formalization has been defined. We argue that there is a need to specifically operationalize respect for human autonomy in the context of recommender systems.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recommender systems can influence human behavior in significant ways, in some
cases making people more machine-like. In this sense, recommender systems may
be deleterious to notions of human autonomy. Many ethical systems point to
respect for human autonomy as a key principle arising from human rights
considerations, and several emerging frameworks for AI include this principle.
Yet, no specific formalization has been defined. Separately, self-determination
theory shows that autonomy is an innate psychological need for people and is
supported by a significant body of experimental work that formalizes and
measures levels of human autonomy. In this position paper, we argue that there
is a need to specifically operationalize respect for human autonomy in the
context of recommender systems. Moreover, we argue that such an operational
definition can be developed based on well-established approaches from
experimental psychology, which can then be used to design future recommender
systems that respect human autonomy.
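The paper argues for such a formalization but does not itself provide one. Purely as an illustrative sketch (an assumption of this summary, not the authors' method), the fragment below shows one naive way a recommender could trade predicted engagement against a crude autonomy proxy, here diversity of recommended categories, on the intuition that homogenized feeds can erode autonomous choice. All names (`Item`, `rerank`, `autonomy_proxy`, `lam`) are hypothetical.

```python
# Illustrative sketch only: the paper does not define this objective.
# A hypothetical greedy re-ranker that trades predicted engagement
# against a crude "autonomy" proxy (category diversity of the feed).

from dataclasses import dataclass


@dataclass
class Item:
    item_id: str
    category: str
    predicted_engagement: float  # e.g., output of a click model, in [0, 1]


def autonomy_proxy(selected: list, candidate: Item) -> float:
    """Reward candidates whose category is underrepresented so far.

    This stands in for a real measure grounded in self-determination
    theory; it is NOT from the paper.
    """
    if not selected:
        return 1.0
    same = sum(1 for it in selected if it.category == candidate.category)
    return 1.0 - same / len(selected)


def rerank(candidates: list, k: int, lam: float = 0.5) -> list:
    """Greedy selection: score = (1 - lam) * engagement + lam * autonomy proxy."""
    pool, selected = list(candidates), []
    while pool and len(selected) < k:
        best = max(
            pool,
            key=lambda it: (1 - lam) * it.predicted_engagement
            + lam * autonomy_proxy(selected, it),
        )
        selected.append(best)
        pool.remove(best)
    return selected


if __name__ == "__main__":
    feed = rerank(
        [
            Item("a1", "politics", 0.90),
            Item("a2", "politics", 0.85),
            Item("b1", "science", 0.60),
            Item("c1", "arts", 0.50),
        ],
        k=3,
    )
    print([it.item_id for it in feed])  # mixes categories instead of maximizing clicks
```

This greedy trade-off resembles a standard MMR-style diversification heuristic; the paper's point is that the autonomy term should ultimately come from experimentally validated measures in self-determination theory rather than ad hoc diversity proxies.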
Related papers
- Aligning Generalisation Between Humans and Machines
Recent advances in AI have resulted in technology that can support humans in scientific discovery and decision support but may also disrupt democracies and target individuals.
The responsible use of AI increasingly calls for human-AI teaming.
A crucial yet often overlooked aspect of these interactions is the different ways in which humans and machines generalise.
arXiv Detail & Related papers (2024-11-23T18:36:07Z)
- A Measure for Level of Autonomy Based on Observable System Behavior
We present a potential measure for predicting the level of autonomy from observable actions.
We also present an algorithm incorporating the proposed measure.
The measure and algorithm are significant for researchers and practitioners who need a method for blindly comparing autonomous systems at runtime.
arXiv Detail & Related papers (2024-07-20T20:34:20Z)
- ConSiDERS-The-Human Evaluation Framework: Rethinking Human Evaluation for Generative Large Language Models
We argue that human evaluation of generative large language models (LLMs) should be a multidisciplinary undertaking.
We highlight how cognitive biases can lead evaluators to conflate fluency with truthfulness, and how cognitive uncertainty affects the reliability of rating scores such as those on Likert scales.
We propose the ConSiDERS-The-Human evaluation framework consisting of 6 pillars -- Consistency, Scoring Criteria, Differentiating, User Experience, Responsible, and Scalability.
arXiv Detail & Related papers (2024-05-28T22:45:28Z)
- Measuring Value Alignment
This paper introduces a novel formalism to quantify the alignment between AI systems and human values.
By utilizing this formalism, AI developers and ethicists can better design and evaluate AI systems to ensure they operate in harmony with human values.
arXiv Detail & Related papers (2023-12-23T12:30:06Z)
- Reflective Hybrid Intelligence for Meaningful Human Control in Decision-Support Systems
We introduce the notion of self-reflective AI systems for meaningful human control over AI systems.
We propose a framework that integrates knowledge from psychology and philosophy with formal reasoning methods and machine learning approaches.
We argue that self-reflective AI systems can lead to self-reflective hybrid systems (human + AI).
arXiv Detail & Related papers (2023-07-12T13:32:24Z)
- Doubting AI Predictions: Influence-Driven Second Opinion Recommendation
We propose a way to augment human-AI collaboration by building on a common organizational practice: identifying experts who are likely to provide complementary opinions.
The proposed approach aims to leverage productive disagreement by identifying whether some experts are likely to disagree with an algorithmic assessment.
arXiv Detail & Related papers (2022-04-29T20:35:07Z)
- Meaningful human control over AI systems: beyond talking the talk
We identify four properties which AI-based systems must have to be under meaningful human control.
First, a system in which humans and AI algorithms interact should have an explicitly defined domain of morally loaded situations.
Second, humans and AI agents within the system should have appropriate and mutually compatible representations.
Third, responsibility attributed to a human should be commensurate with that human's ability and authority to control the system.
arXiv Detail & Related papers (2021-11-25T11:05:37Z)
- Indecision Modeling
It is important that AI systems act in ways which align with human values.
People are often indecisive, and especially so when their decision has moral implications.
arXiv Detail & Related papers (2020-12-15T18:32:37Z)
- Regulating human control over autonomous systems
It is argued that the use of increasingly autonomous systems should be guided by the policy of human control.
This article explores the notion of human control in the United States in the two domains of defense and transportation.
arXiv Detail & Related papers (2020-07-22T06:05:41Z)
- Machine Common Sense
Machine common sense remains a broad, potentially unbounded problem in artificial intelligence (AI).
This article deals with modeling commonsense reasoning, focusing on the domain of interpersonal interactions.
arXiv Detail & Related papers (2020-06-15T13:59:47Z)
- Hacia los Comités de Ética en Inteligencia Artificial (Towards Ethics Committees in Artificial Intelligence)
It is a priority to create rules and specialized organizations that can oversee compliance with such rules.
This work proposes the creation, at universities, of ethics committees or commissions specialized in Artificial Intelligence.
arXiv Detail & Related papers (2020-02-11T23:48:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality or accuracy of the information presented and is not responsible for any consequences arising from its use.