Human-AI Collaboration via Conditional Delegation: A Case Study of
Content Moderation
- URL: http://arxiv.org/abs/2204.11788v1
- Date: Mon, 25 Apr 2022 17:00:02 GMT
- Title: Human-AI Collaboration via Conditional Delegation: A Case Study of
Content Moderation
- Authors: Vivian Lai, Samuel Carton, Rajat Bhatnagar, Q. Vera Liao, Yunfeng
Zhang, Chenhao Tan
- Abstract summary: We propose conditional delegation as an alternative paradigm for human-AI collaboration.
We develop novel interfaces to assist humans in creating conditional delegation rules.
Our study demonstrates the promise of conditional delegation in improving model performance.
- Score: 47.102566259034326
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite impressive performance in many benchmark datasets, AI models can
still make mistakes, especially among out-of-distribution examples. It remains
an open question how such imperfect models can be used effectively in
collaboration with humans. Prior work has focused on AI assistance that helps
people make individual high-stakes decisions, which does not scale to a large
number of relatively low-stakes decisions, e.g., moderating social media
comments. Instead, we propose conditional delegation as an alternative paradigm
for human-AI collaboration where humans create rules to indicate trustworthy
regions of a model. Using content moderation as a testbed, we develop novel
interfaces to assist humans in creating conditional delegation rules and
conduct a randomized experiment with two datasets to simulate in-distribution
and out-of-distribution scenarios. Our study demonstrates the promise of
conditional delegation in improving model performance and provides insights
into design for this novel paradigm, including the effect of AI explanations.
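The core idea is concrete enough to sketch: a conditional delegation rule can be read as a human-authored predicate that marks a trustworthy region of the input space, where the model's decision is accepted and everything outside is routed to a human moderator. The sketch below is a hypothetical illustration of that routing logic under those assumptions, not the authors' interface or rule language; the DelegationRule class, the keyword predicate, and the confidence threshold are all illustrative placeholders.

```python
# Minimal sketch (not the authors' implementation): a conditional delegation
# rule is a human-authored predicate over inputs; items matching any rule are
# delegated to the model, everything else is routed to a human moderator.
# The keywords, threshold, and model_score() callable are hypothetical.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class DelegationRule:
    """A trustworthy region: delegate to the model when the predicate holds."""
    name: str
    predicate: Callable[[str], bool]


def route(comment: str,
          rules: List[DelegationRule],
          model_score: Callable[[str], float],
          threshold: float = 0.5) -> str:
    """Return 'remove', 'keep', or 'human' for a single comment."""
    if any(rule.predicate(comment) for rule in rules):
        # Inside a trusted region: accept the model's decision.
        return "remove" if model_score(comment) >= threshold else "keep"
    # Outside every trusted region: defer to a human moderator.
    return "human"


# Hypothetical usage: trust the model on comments containing listed keywords.
keyword_rule = DelegationRule(
    name="contains-listed-keyword",
    predicate=lambda text: any(w in text.lower() for w in {"keyword1", "keyword2"}),
)
decision = route("an example comment with keyword1", [keyword_rule],
                 model_score=lambda text: 0.9)
print(decision)  # -> "remove"
```

In this framing, rule quality determines how much work can be safely delegated: broad rules offload more decisions to the model but risk covering regions where it errs, while narrow rules keep humans in the loop at the cost of scalability.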
Related papers
- Exploring the Lands Between: A Method for Finding Differences between AI-Decisions and Human Ratings through Generated Samples [45.209635328908746]
We propose a method to find samples in the latent space of a generative model.
By presenting those samples to both the decision-making model and human raters, we can identify areas where the model's decisions align with human intuition.
We apply this method to a face recognition model and collect a dataset of 11,200 human ratings from 100 participants.
arXiv Detail & Related papers (2024-09-19T14:14:08Z) - SegXAL: Explainable Active Learning for Semantic Segmentation in Driving Scene Scenarios [1.2172320168050466]
We propose "SegXAL", a novel Explainable Active Learning (XAL)-based semantic segmentation model.
SegXAL can (i) effectively utilize the unlabeled data, (ii) facilitate the "Human-in-the-loop" paradigm, and (iii) augment the model decisions in an interpretable way.
In particular, we investigate the application of the SegXAL model for semantic segmentation in driving scene scenarios.
arXiv Detail & Related papers (2024-08-08T14:19:11Z) - Secrets of RLHF in Large Language Models Part II: Reward Modeling [134.97964938009588]
We introduce a series of novel methods to mitigate the influence of incorrect and ambiguous preferences in the dataset.
We also introduce contrastive learning to enhance the ability of reward models to distinguish between chosen and rejected responses.
arXiv Detail & Related papers (2024-01-11T17:56:59Z) - Evaluating the Utility of Model Explanations for Model Development [54.23538543168767]
We evaluate whether explanations can improve human decision-making in practical scenarios of machine learning model development.
To our surprise, we did not find evidence of significant improvement on tasks when users were provided with any of the saliency maps.
These findings suggest caution about the usefulness of saliency-based explanations and their potential to be misunderstood.
arXiv Detail & Related papers (2023-12-10T23:13:23Z) - Modeling Boundedly Rational Agents with Latent Inference Budgets [56.24971011281947]
We introduce a latent inference budget model (L-IBM) that models agents' computational constraints explicitly.
L-IBMs make it possible to learn agent models using data from diverse populations of suboptimal actors.
We show that L-IBMs match or outperform Boltzmann models of decision-making under uncertainty.
arXiv Detail & Related papers (2023-12-07T03:55:51Z) - DIME: Fine-grained Interpretations of Multimodal Models via Disentangled
Local Explanations [119.1953397679783]
We focus on advancing the state-of-the-art in interpreting multimodal models.
Our proposed approach, DIME, enables accurate and fine-grained analysis of multimodal models.
arXiv Detail & Related papers (2022-03-03T20:52:47Z) - Investigations of Performance and Bias in Human-AI Teamwork in Hiring [30.046502708053097]
In AI-assisted decision-making, effective hybrid (human-AI) teamwork does not depend on AI performance alone.
We investigate how both a model's predictive performance and bias may transfer to humans in a recommendation-aided decision task.
arXiv Detail & Related papers (2022-02-21T17:58:07Z) - Paired Examples as Indirect Supervision in Latent Decision Models [109.76417071249945]
We introduce a way to leverage paired examples that provide stronger cues for learning latent decisions.
We apply our method to improve compositional question answering using neural module networks on the DROP dataset.
arXiv Detail & Related papers (2021-04-05T03:58:30Z) - Understanding the Effect of Out-of-distribution Examples and Interactive
Explanations on Human-AI Decision Making [19.157591744997355]
We argue that the typical experimental setup limits the potential of human-AI teams.
We develop novel interfaces to support interactive explanations so that humans can actively engage with AI assistance.
arXiv Detail & Related papers (2021-01-13T19:01:32Z)