Humans learn too: Better Human-AI Interaction using Optimized Human Inputs
- URL: http://arxiv.org/abs/2009.09266v1
- Date: Sat, 19 Sep 2020 16:30:37 GMT
- Title: Humans learn too: Better Human-AI Interaction using Optimized Human Inputs
- Authors: Johannes Schneider
- Abstract summary: Humans rely more and more on systems with AI components.
The AI community typically treats human inputs as a given and optimizes AI models only.
In this work, human inputs are optimized for better interaction with an AI model while keeping the model fixed.
- Score: 2.5991265608180396
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Humans rely more and more on systems with AI components. The AI community
typically treats human inputs as a given and optimizes AI models only. This
thinking is one-sided and neglects the fact that humans can learn, too. In
this work, human inputs are optimized for better interaction with an AI model
while keeping the model fixed. The optimized inputs are accompanied by
instructions on how to create them. They allow humans to save time and reduce
errors, while keeping the required changes to the original inputs limited. We
propose continuous and discrete optimization methods that modify samples in an
iterative fashion. Our quantitative and qualitative evaluation, including a
human study on different hand-generated inputs, shows that the generated
proposals lead to lower error rates, require less effort to create, and differ
only modestly from the original samples.
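The abstract describes iteratively modifying a human-provided input against a fixed model while keeping it close to the original. The sketch below is a minimal, hypothetical illustration of the continuous variant, not the paper's code: it assumes a fixed logistic-regression model, a quadratic proximity penalty, and plain gradient descent, all of which are illustrative choices.

```python
# Minimal sketch (assumptions, not the paper's method): optimize an input
# against a *fixed* model, penalizing deviation from the human's original.
import numpy as np

rng = np.random.default_rng(0)

# A fixed, already-trained linear classifier: p(y=1 | x) = sigmoid(w.x + b).
w = rng.normal(size=5)
b = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(x, y, x_orig, lam=0.5):
    """Cross-entropy of the fixed model plus a proximity penalty that keeps
    the optimized input close to the original human input."""
    p = sigmoid(w @ x + b)
    ce = -(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
    return ce + lam * np.sum((x - x_orig) ** 2)

def grad(x, y, x_orig, lam=0.5):
    """Analytic gradient of the loss above with respect to the input x."""
    p = sigmoid(w @ x + b)
    return (p - y) * w + 2.0 * lam * (x - x_orig)

def optimize_input(x_orig, y, steps=200, lr=0.1):
    """Iteratively modify the sample; the model weights stay fixed."""
    x = x_orig.copy()
    for _ in range(steps):
        x -= lr * grad(x, y, x_orig)
    return x

x_human = rng.normal(size=5)          # the original human-provided input
x_opt = optimize_input(x_human, y=1)  # proposal shown back to the human
print("loss before:", loss(x_human, 1, x_human))
print("loss after: ", loss(x_opt, 1, x_human))
```

The discrete variant mentioned in the abstract would replace the gradient step with a search over a set of allowed edits; that part is not sketched here.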
Related papers
- Beyond Human Data: Scaling Self-Training for Problem-Solving with Language Models [115.501751261878]
Fine-tuning language models (LMs) on human-generated data remains a prevalent practice.
We investigate whether we can go beyond human data on tasks where we have access to scalar feedback.
We find that ReST$^{EM}$ scales favorably with model size and significantly surpasses fine-tuning only on human data.
arXiv Detail & Related papers (2023-12-11T18:17:43Z)
- BO-Muse: A human expert and AI teaming framework for accelerated experimental design [58.61002520273518]
Our algorithm lets the human expert take the lead in the experimental process.
We show that our algorithm converges sub-linearly, at a rate faster than the AI or human alone.
arXiv Detail & Related papers (2023-03-03T02:56:05Z)
- Constitutional AI: Harmlessness from AI Feedback [19.964791766072132]
We experiment with methods for training a harmless AI assistant through self-improvement.
The only human oversight is provided through a list of rules or principles.
We are able to train a harmless but non-evasive AI assistant that engages with harmful queries by explaining its objections to them.
arXiv Detail & Related papers (2022-12-15T06:19:23Z)
- Optimal Behavior Prior: Data-Efficient Human Models for Improved Human-AI Collaboration [0.5524804393257919]
We show that using optimal behavior as a prior for human models makes these models vastly more data-efficient.
We also show that using these improved human models often leads to better human-AI collaboration performance.
arXiv Detail & Related papers (2022-11-03T06:10:22Z)
- Humans are not Boltzmann Distributions: Challenges and Opportunities for Modelling Human Feedback and Interaction in Reinforcement Learning [13.64577704565643]
We argue that these models are too simplistic and that RL researchers need to develop more realistic human models to design and evaluate their algorithms.
This paper calls for research from different disciplines to address key questions about how humans provide feedback to AIs and how we can build more robust human-in-the-loop RL systems.
arXiv Detail & Related papers (2022-06-27T13:58:51Z)
- Best-Response Bayesian Reinforcement Learning with Bayes-adaptive POMDPs for Centaurs [22.52332536886295]
We present a novel formulation of the interaction between the human and the AI as a sequential game.
We show that in this case the AI's problem of helping bounded-rational humans make better decisions reduces to a Bayes-adaptive POMDP.
We discuss ways in which the machine can learn to improve upon its own limitations as well with the help of the human.
arXiv Detail & Related papers (2022-04-03T21:00:51Z)
- Model Predictive Control for Fluid Human-to-Robot Handovers [50.72520769938633]
Planning motions that take human comfort into account is typically not part of the human-robot handover process.
We propose to generate smooth motions via an efficient model-predictive control framework.
We conduct human-to-robot handover experiments on a diverse set of objects with several users.
arXiv Detail & Related papers (2022-03-31T23:08:20Z)
- Uncalibrated Models Can Improve Human-AI Collaboration [10.106324182884068]
We show that presenting AI models as more confident than they actually are can improve human-AI performance.
We first learn a model for how humans incorporate AI advice using data from thousands of human interactions.
arXiv Detail & Related papers (2022-02-12T04:51:00Z)
- Skill Preferences: Learning to Extract and Execute Robotic Skills from Human Feedback [82.96694147237113]
We present Skill Preferences, an algorithm that learns a model over human preferences and uses it to extract human-aligned skills from offline data.
We show that SkiP enables a simulated kitchen robot to solve complex multi-step manipulation tasks.
arXiv Detail & Related papers (2021-08-11T18:04:08Z)
- Weak Human Preference Supervision For Deep Reinforcement Learning [48.03929962249475]
Current reward learning from human preferences can be used to solve complex reinforcement learning (RL) tasks without access to a reward function.
We propose a weak human preference supervision framework, for which we developed a human preference scaling model.
Our human-demonstration estimator requires human feedback for less than 0.01% of the agent's interactions with the environment.
arXiv Detail & Related papers (2020-07-25T10:37:15Z)
- Is the Most Accurate AI the Best Teammate? Optimizing AI for Teamwork [54.309495231017344]
We argue that AI systems should be trained in a human-centered manner, directly optimized for team performance.
We study this proposal for a specific type of human-AI teaming, where the human overseer chooses to either accept the AI recommendation or solve the task themselves.
Our experiments with linear and non-linear models on real-world, high-stakes datasets show that the most accurate AI may not lead to the highest team performance.
arXiv Detail & Related papers (2020-04-27T19:06:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.