Towards Involving End-users in Interactive Human-in-the-loop AI Fairness
- URL: http://arxiv.org/abs/2204.10464v1
- Date: Fri, 22 Apr 2022 02:24:11 GMT
- Title: Towards Involving End-users in Interactive Human-in-the-loop AI Fairness
- Authors: Yuri Nakao, Simone Stumpf, Subeida Ahmed, Aisha Naseer and Lorenzo
Strappelli
- Abstract summary: Ensuring fairness in artificial intelligence (AI) is important to counteract bias and discrimination in far-reaching applications.
Recent work has started to investigate how humans judge fairness and how to support machine learning (ML) experts in making their AI models fairer.
Our work explores designing interpretable and interactive human-in-the-loop interfaces that allow ordinary end-users to identify potential fairness issues.
- Score: 1.889930012459365
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Ensuring fairness in artificial intelligence (AI) is important to counteract
bias and discrimination in far-reaching applications. Recent work has started
to investigate how humans judge fairness and how to support machine learning
(ML) experts in making their AI models fairer. Drawing inspiration from an
Explainable AI (XAI) approach called \emph{explanatory debugging} used in
interactive machine learning, our work explores designing interpretable and
interactive human-in-the-loop interfaces that allow ordinary end-users without
any technical or domain background to identify potential fairness issues and
possibly fix them in the context of loan decisions. Through workshops with
end-users, we co-designed and implemented a prototype system that allowed
end-users to see why predictions were made, and then to change weights on
features to "debug" fairness issues. We evaluated the use of this prototype
system through an online study. To investigate the implications of diverse
human values about fairness around the globe, we also explored how cultural
dimensions might play a role in using this prototype. Our results contribute to
the design of interfaces to allow end-users to be involved in judging and
addressing AI fairness through a human-in-the-loop approach.
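The prototype's core interaction, showing why a prediction was made and letting end-users adjust feature weights to "debug" fairness, can be pictured with a minimal sketch. The sketch below assumes a plain linear scoring model over hand-picked, hypothetical features (including a proxy feature, zip_code_risk) and a simple approval-rate comparison between two illustrative groups; it is not the paper's actual system, data, or fairness criterion.

```python
# Hypothetical sketch only: a linear loan scorer whose feature weights an
# end-user can inspect and adjust, plus a simple approval-rate comparison.
# Feature names, weights, and the threshold are illustrative, not taken
# from the paper's prototype.

from typing import Dict, List

FEATURES = ["income", "credit_history", "loan_amount", "zip_code_risk"]

def score(applicant: Dict[str, float], weights: Dict[str, float]) -> float:
    """Weighted sum of (already normalised) feature values."""
    return sum(weights[f] * applicant[f] for f in FEATURES)

def explain(applicant: Dict[str, float], weights: Dict[str, float]) -> Dict[str, float]:
    """Per-feature contribution, i.e. 'why' the prediction was made."""
    return {f: weights[f] * applicant[f] for f in FEATURES}

def approval_rate(applicants: List[Dict[str, float]],
                  weights: Dict[str, float], threshold: float = 0.5) -> float:
    approved = [a for a in applicants if score(a, weights) >= threshold]
    return len(approved) / len(applicants) if applicants else 0.0

if __name__ == "__main__":
    weights = {"income": 0.5, "credit_history": 0.4,
               "loan_amount": -0.2, "zip_code_risk": -0.3}

    # Two illustrative applicant groups that differ only in the proxy feature.
    group_a = [{"income": 0.7, "credit_history": 0.8, "loan_amount": 0.4, "zip_code_risk": 0.1}]
    group_b = [{"income": 0.7, "credit_history": 0.8, "loan_amount": 0.4, "zip_code_risk": 0.9}]

    print("Why:", explain(group_b[0], weights))
    print("Rates before:", approval_rate(group_a, weights), approval_rate(group_b, weights))

    # "Debugging" step: the end-user judges zip_code_risk to act as a proxy
    # for a protected attribute and lowers its weight to zero.
    weights["zip_code_risk"] = 0.0
    print("Rates after: ", approval_rate(group_a, weights), approval_rate(group_b, weights))
```

In this toy example, lowering the weight on the proxy feature equalises the approval rates of the two groups; this is the kind of weight-adjustment interaction that the co-designed interface exposes to end-users.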
Related papers
- Human-Modeling in Sequential Decision-Making: An Analysis through the Lens of Human-Aware AI [20.21053807133341]
We try to provide an account of what constitutes a human-aware AI system.
We see that human-aware AI is a design-oriented paradigm, one that focuses on the need to model the humans it may interact with.
arXiv Detail & Related papers (2024-05-13T14:17:52Z)
- Fiper: a Visual-based Explanation Combining Rules and Feature Importance [3.2982707161882967]
Explainable Artificial Intelligence aims to design tools and techniques to illustrate the predictions of the so-called black-box algorithms.
This paper proposes a visual-based method to illustrate rules paired with feature importance.
arXiv Detail & Related papers (2024-04-25T09:15:54Z)
- FairCompass: Operationalising Fairness in Machine Learning [34.964477625987136]
There is a growing imperative to develop responsible AI solutions.
Although a diverse assortment of machine learning fairness solutions has been proposed in the literature,
there is reportedly a lack of practical implementation of these tools in real-world applications.
arXiv Detail & Related papers (2023-12-27T21:29:53Z)
- Combatting Human Trafficking in the Cyberspace: A Natural Language Processing-Based Methodology to Analyze the Language in Online Advertisements [55.2480439325792]
This project tackles the pressing issue of human trafficking in online C2C marketplaces through advanced Natural Language Processing (NLP) techniques.
We introduce a novel methodology for generating pseudo-labeled datasets with minimal supervision, serving as a rich resource for training state-of-the-art NLP models.
A key contribution is the implementation of an interpretability framework using Integrated Gradients, providing explainable insights crucial for law enforcement.
arXiv Detail & Related papers (2023-11-22T02:45:01Z)
- Exploration with Principles for Diverse AI Supervision [88.61687950039662]
Training large transformers using next-token prediction has given rise to groundbreaking advancements in AI.
While this generative AI approach has produced impressive results, it heavily leans on human supervision.
This strong reliance on human oversight poses a significant hurdle to the advancement of AI innovation.
We propose a novel paradigm termed Exploratory AI (EAI) aimed at autonomously generating high-quality training data.
arXiv Detail & Related papers (2023-10-13T07:03:39Z)
- Towards Fair and Explainable AI using a Human-Centered AI Approach [5.888646114353372]
We present 5 research projects that aim to enhance explainability and fairness in classification systems and word embeddings.
The first project explores the utility/downsides of introducing local model explanations as interfaces for machine teachers.
The second project presents D-BIAS, a causality-based human-in-the-loop visual tool for identifying and mitigating social biases in datasets.
The third project presents WordBias, a visual interactive tool that helps audit pre-trained static word embeddings for biases against groups.
The fourth project presents DramatVis Personae, a visual analytics tool that helps identify social biases.
arXiv Detail & Related papers (2023-06-12T21:08:55Z)
- Visual Affordance Prediction for Guiding Robot Exploration [56.17795036091848]
We develop an approach for learning visual affordances for guiding robot exploration.
We use a Transformer-based model to learn a conditional distribution in the latent embedding space of a VQ-VAE.
We show how the trained affordance model can be used for guiding exploration by acting as a goal-sampling distribution, during visual goal-conditioned policy learning in robotic manipulation.
arXiv Detail & Related papers (2023-05-28T17:53:09Z)
- Human-Centric Multimodal Machine Learning: Recent Advances and Testbed on AI-based Recruitment [66.91538273487379]
There is a certain consensus about the need to develop AI applications with a Human-Centric approach.
Human-Centric Machine Learning needs to be developed based on four main requirements: (i) utility and social good; (ii) privacy and data ownership; (iii) transparency and accountability; and (iv) fairness in AI-driven decision-making processes.
We study how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data.
arXiv Detail & Related papers (2023-02-13T16:44:44Z)
- The MineRL BASALT Competition on Learning from Human Feedback [58.17897225617566]
The MineRL BASALT competition aims to spur forward research on this important class of techniques.
We design a suite of four tasks in Minecraft for which we expect it will be hard to write down hardcoded reward functions.
We provide a dataset of human demonstrations on each of the four tasks, as well as an imitation learning baseline.
arXiv Detail & Related papers (2021-07-05T12:18:17Z)
- Understanding the Effect of Out-of-distribution Examples and Interactive Explanations on Human-AI Decision Making [19.157591744997355]
We argue that the typical experimental setup limits the potential of human-AI teams.
We develop novel interfaces to support interactive explanations so that humans can actively engage with AI assistance.
arXiv Detail & Related papers (2021-01-13T19:01:32Z)
- Joint Inference of States, Robot Knowledge, and Human (False-)Beliefs [90.20235972293801]
Aiming to understand how human (false-)belief, a core socio-cognitive ability, would affect human interactions with robots, this paper proposes to adopt a graphical model to unify the representation of object states, robot knowledge, and human (false-)beliefs.
An inference algorithm is derived to fuse the individual parse graphs (pg) from all robots across multiple views into a joint parse graph, which affords more effective reasoning and inference capability to overcome the errors that originate from a single view.
arXiv Detail & Related papers (2020-04-25T23:02:04Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.