FAIRO: Fairness-aware Adaptation in Sequential-Decision Making for
Human-in-the-Loop Systems
- URL: http://arxiv.org/abs/2307.05857v2
- Date: Mon, 6 Nov 2023 19:20:14 GMT
- Title: FAIRO: Fairness-aware Adaptation in Sequential-Decision Making for
Human-in-the-Loop Systems
- Authors: Tianyu Zhao, Mojtaba Taherisadr, Salma Elmalaki
- Abstract summary: We propose FAIRO, a novel algorithm for fairness-aware sequential decision-making in Human-in-the-Loop (HITL) adaptation.
FAIRO decomposes this complex fairness task into adaptive sub-tasks based on individual human preferences.
On average, FAIRO improves fairness by 35.36% compared with other methods across all three applications.
- Score: 8.713442325649801
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Achieving fairness in sequential decision-making systems within
Human-in-the-Loop (HITL) environments is a critical concern, especially when
multiple humans with different behavior and expectations are affected by the
same adaptation decisions in the system. This human variability factor adds
more complexity since policies deemed fair at one point in time may become
discriminatory over time due to variations in human preferences resulting from
inter- and intra-human variability. This paper addresses the fairness problem
from an equity lens, considering human behavior variability, and the changes in
human preferences over time. We propose FAIRO, a novel algorithm for
fairness-aware sequential decision-making in HITL adaptation, which
incorporates these notions into the decision-making process. In particular,
FAIRO decomposes this complex fairness task into adaptive sub-tasks based on
individual human preferences by leveraging the Options reinforcement
learning framework. We design FAIRO to generalize to three types of HITL
application setups that have the shared adaptation decision problem.
Furthermore, we recognize that fairness-aware policies can sometimes conflict
with the application's utility. To address this challenge, we provide a
fairness-utility tradeoff in FAIRO, allowing system designers to balance the
objectives of fairness and utility based on specific application requirements.
Extensive evaluations of FAIRO on the three HITL applications demonstrate its
generalizability and effectiveness in promoting fairness while accounting for
human variability. On average, FAIRO improves fairness by 35.36% compared with
other methods across all three applications.
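The abstract gives no pseudocode, but the decomposition idea maps naturally onto the Options framework: one sub-task (option) per human, plus an equity-oriented policy over options. The sketch below is a minimal illustration under stated assumptions; all names (`FairnessOption`, `pick_option`, the shared-thermostat toy problem) are hypothetical, not FAIRO's actual implementation, and the real option-selection and termination rules will differ.

```python
import random
from dataclasses import dataclass, field

@dataclass
class FairnessOption:
    """One sub-task: adapt the shared system toward one human's preference."""
    human_id: int
    preference: float                             # e.g., preferred setpoint
    q_values: dict = field(default_factory=dict)  # action -> value estimate

    def act(self, actions, epsilon=0.1):
        # Epsilon-greedy intra-option policy over the shared adaptation actions.
        if random.random() < epsilon or not self.q_values:
            return random.choice(actions)
        return max(self.q_values, key=self.q_values.get)

    def update(self, action, reward, alpha=0.2):
        # Simple tabular value update for the chosen adaptation action.
        old = self.q_values.get(action, 0.0)
        self.q_values[action] = old + alpha * (reward - old)

def pick_option(options, satisfaction):
    # Equity-oriented policy over options: serve the human who has been least
    # satisfied so far (a simple stand-in for FAIRO's option selection).
    return min(options, key=lambda o: satisfaction[o.human_id])

# Toy shared-adaptation problem: one thermostat, several humans with
# different preferred setpoints (inter-human variability).
actions = [19.0, 20.0, 21.0, 22.0, 23.0]
options = [FairnessOption(0, 19.5), FairnessOption(1, 22.5), FairnessOption(2, 21.0)]
satisfaction = {o.human_id: 0.0 for o in options}

for step in range(500):
    option = pick_option(options, satisfaction)
    setpoint = option.act(actions)
    # Every human experiences the same shared decision.
    for o in options:
        satisfaction[o.human_id] += -abs(setpoint - o.preference)
    option.update(setpoint, -abs(setpoint - option.preference))

print({h: round(s, 1) for h, s in satisfaction.items()})
```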
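For the fairness-utility tradeoff mentioned in the abstract, a common way to expose such a designer-facing knob is a convex combination of the two objectives. The abstract does not specify FAIRO's exact formulation, so the function below (the name `blended_reward` and weight `lam`) is an assumed, illustrative form:

```python
def blended_reward(utility: float, fairness: float, lam: float = 0.5) -> float:
    # Hypothetical tradeoff knob: lam = 0.0 recovers a utility-only objective,
    # lam = 1.0 a fairness-only objective; intermediate values balance the two.
    return (1.0 - lam) * utility + lam * fairness
```

A system designer would sweep `lam` per application to find an acceptable operating point on the fairness-utility curve.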
Related papers
- The Impossibility of Fair LLMs [59.424918263776284]
The need for fair AI is increasingly clear in the era of large language models (LLMs).
We review the technical frameworks that machine learning researchers have used to evaluate fairness.
We develop guidelines for the more realistic goal of achieving fairness in particular use cases.
arXiv Detail & Related papers (2024-05-28T04:36:15Z)
- Fairness in Algorithmic Recourse Through the Lens of Substantive Equality of Opportunity [15.78130132380848]
Algorithmic recourse has gained attention as a means of giving persons agency in their interactions with AI systems.
Recent work has shown that recourse itself may be unfair due to differences in the initial circumstances of individuals.
Time is a critical element in recourse because the longer it takes an individual to act, the more the setting may change.
arXiv Detail & Related papers (2024-01-29T11:55:45Z)
- "One-Size-Fits-All"? Examining Expectations around What Constitute "Fair" or "Good" NLG System Behaviors [57.63649797577999]
We conduct case studies in which we perturb different types of identity-related language features (names, roles, locations, dialect, and style) in NLG system inputs.
We find that motivations for adaptation include social norms, cultural differences, feature-specific information, and accommodation.
In contrast, motivations for invariance include perspectives that favor prescriptivism, view adaptation as unnecessary or too difficult for NLG systems to do appropriately, and are wary of false assumptions.
arXiv Detail & Related papers (2023-10-23T23:00:34Z)
- Causal Fairness for Outcome Control [68.12191782657437]
We study a specific decision-making task called outcome control in which an automated system aims to optimize an outcome variable $Y$ while being fair and equitable.
In this paper, we first analyze through causal lenses the notion of benefit, which captures how much a specific individual would benefit from a positive decision.
We then note that the benefit itself may be influenced by the protected attribute, and propose causal tools which can be used to analyze this.
arXiv Detail & Related papers (2023-06-08T09:31:18Z)
- Social Diversity Reduces the Complexity and Cost of Fostering Fairness [63.70639083665108]
We investigate the effects of interference mechanisms which assume incomplete information and flexible standards of fairness.
We quantify the role of diversity and show how it reduces the need for information gathering.
Our results indicate that diversity changes and opens up novel mechanisms available to institutions wishing to promote fairness.
arXiv Detail & Related papers (2022-11-18T21:58:35Z)
- Causal Fairness Analysis [68.12191782657437]
We introduce a framework for understanding, modeling, and possibly solving issues of fairness in decision-making settings.
The main insight of our approach will be to link the quantification of the disparities present on the observed data with the underlying, and often unobserved, collection of causal mechanisms.
Our effort culminates in the Fairness Map, which is the first systematic attempt to organize and explain the relationship between different criteria found in the literature.
arXiv Detail & Related papers (2022-07-23T01:06:34Z)
- Addressing Fairness, Bias and Class Imbalance in Machine Learning: the FBI-loss [11.291571222801027]
We propose a unified loss correction to address issues related to Fairness, Biases and Imbalances (FBI-loss).
The correction capabilities of the proposed approach are assessed on three real-world benchmarks.
arXiv Detail & Related papers (2021-05-13T15:01:14Z)
- FaiR-IoT: Fairness-aware Human-in-the-Loop Reinforcement Learning for Harnessing Human Variability in Personalized IoT [0.0]
FaiR-IoT is a reinforcement learning-based framework for adaptive and fairness-aware human-in-the-loop IoT applications.
We validate the proposed framework on two applications, namely (i) Human-in-the-Loop Automotive Advanced Driver Assistance Systems and (ii) Human-in-the-Loop Smart House.
Results obtained on these two applications validate the generality of FaiR-IoT and its ability to provide a personalized experience.
arXiv Detail & Related papers (2021-03-30T02:30:25Z)
- End-to-End Learning and Intervention in Games [60.41921763076017]
We provide a unified framework for learning and intervention in games.
We propose two approaches, respectively based on explicit and implicit differentiation.
The analytical results are validated using several real-world problems.
arXiv Detail & Related papers (2020-10-26T18:39:32Z)