Learning to Complement Humans
- URL: http://arxiv.org/abs/2005.00582v1
- Date: Fri, 1 May 2020 20:00:23 GMT
- Title: Learning to Complement Humans
- Authors: Bryan Wilder, Eric Horvitz, Ece Kamar
- Abstract summary: A rising vision for AI in the open world centers on the development of systems that can complement humans for perceptual, diagnostic, and reasoning tasks.
We demonstrate how an end-to-end learning strategy can be harnessed to optimize the combined performance of human-machine teams.
- Score: 67.38348247794949
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A rising vision for AI in the open world centers on the development of
systems that can complement humans for perceptual, diagnostic, and reasoning
tasks. To date, systems aimed at complementing the skills of people have
employed models trained to be as accurate as possible in isolation. We
demonstrate how an end-to-end learning strategy can be harnessed to optimize
the combined performance of human-machine teams by considering the distinct
abilities of people and machines. The goal is to focus machine learning on
problem instances that are difficult for humans, while recognizing instances
that are difficult for the machine and seeking human input on them. We
demonstrate in two real-world domains (scientific discovery and medical
diagnosis) that human-machine teams built via these methods outperform the
individual performance of machines and people. We then analyze conditions under
which this complementarity is strongest, and which training methods amplify it.
Taken together, our work provides the first systematic investigation of how
machine learning systems can be trained to complement human reasoning.
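The core idea of the abstract, training the model against the expected error of the combined human-machine team rather than the model's standalone error, can be illustrated with a short sketch. The code below is not the authors' implementation; it is a minimal stand-in assuming a binary-classification setting, and the names `expected_team_error`, `defer_probs`, and `human_correct` are illustrative assumptions.
```python
import numpy as np

def expected_team_error(model_probs, defer_probs, human_correct, labels):
    """Expected team error for a batch under a simple defer-to-human model.

    model_probs   : model's predicted probability of the positive class, shape (n,)
    defer_probs   : probability the system routes each instance to the human, shape (n,)
    human_correct : estimated probability the human labels each instance correctly, shape (n,)
    labels        : ground-truth labels in {0, 1}, shape (n,)
    """
    # Probability the model alone is correct on each instance.
    model_correct = labels * model_probs + (1 - labels) * (1 - model_probs)
    # The team is correct if it defers and the human is right,
    # or keeps the instance and the model is right.
    team_correct = defer_probs * human_correct + (1 - defer_probs) * model_correct
    # Minimizing this objective pushes model capacity toward instances that are
    # hard for the human, while routing instances hard for the model to the human.
    return float(np.mean(1.0 - team_correct))

# Toy batch: the model is confident on the first instance, unsure on the second;
# the human is weak on the first instance but strong on the second.
labels        = np.array([1.0, 0.0])
model_probs   = np.array([0.9, 0.6])
human_correct = np.array([0.5, 0.95])
defer_probs   = np.array([0.1, 0.8])   # route the second instance to the human
print(expected_team_error(model_probs, defer_probs, human_correct, labels))
```
In an actual end-to-end setup, a differentiable version of such an objective would drive training of both the predictor and the deferral decision jointly; the sketch only shows how a team-level error can differ from the model's isolated accuracy.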
Related papers
- CREW: Facilitating Human-AI Teaming Research [3.7324091969140776]
We introduce CREW, a platform to facilitate Human-AI teaming research and engage collaborations from multiple scientific disciplines.
It includes pre-built tasks for cognitive studies and Human-AI teaming, with room for expansion thanks to its modular design.
CREW benchmarks real-time human-guided reinforcement learning agents using state-of-the-art algorithms and well-tuned baselines.
arXiv Detail & Related papers (2024-07-31T21:43:55Z)
- On the Emergence of Symmetrical Reality [51.21203247240322]
We introduce the symmetrical reality framework, which offers a unified representation encompassing various forms of physical-virtual amalgamations.
We propose an instance of an AI-driven active assistance service that illustrates the potential applications of symmetrical reality.
arXiv Detail & Related papers (2024-01-26T16:09:39Z)
- Human in the AI loop via xAI and Active Learning for Visual Inspection [2.261815118231329]
Industrial revolutions have disrupted manufacturing by introducing automation into production.
Advances in robotics and artificial intelligence open new frontiers of human-machine collaboration.
The present work first describes Industry 5.0, human-machine collaboration, and the state of the art in quality inspection.
arXiv Detail & Related papers (2023-07-03T17:23:23Z)
- Stabilizing Contrastive RL: Techniques for Robotic Goal Reaching from Offline Data [101.43350024175157]
Self-supervised learning has the potential to decrease the amount of human annotation and engineering effort required to learn control strategies.
Our work builds on prior work showing that reinforcement learning (RL) itself can be cast as a self-supervised problem.
We demonstrate that a self-supervised RL algorithm based on contrastive learning can solve real-world, image-based robotic manipulation tasks.
arXiv Detail & Related papers (2023-06-06T01:36:56Z)
- Human Decision Makings on Curriculum Reinforcement Learning with Difficulty Adjustment [52.07473934146584]
We guide curriculum reinforcement learning toward a preferred performance level that is neither too hard nor too easy by learning from the human decision process.
Our system is highly parallelizable, making it possible for a human to train large-scale reinforcement learning applications.
It shows that reinforcement learning performance can successfully adjust in sync with the human's desired difficulty level.
arXiv Detail & Related papers (2022-08-04T23:53:51Z)
- A data-driven approach for learning to control computers [8.131261634438912]
We investigate the setting of computer control using keyboard and mouse, with goals specified via natural language.
We achieve state-of-the-art and human-level mean performance across all tasks within the MiniWob++ benchmark.
These results demonstrate the usefulness of a unified human-agent interface when training machines to use computers.
arXiv Detail & Related papers (2022-02-16T15:23:46Z)
- Deep Interpretable Models of Theory of Mind For Human-Agent Teaming [0.7734726150561086]
We develop an interpretable modular neural framework for modeling the intentions of other observed entities.
We demonstrate the efficacy of our approach with experiments on data from human participants on a search and rescue task in Minecraft.
arXiv Detail & Related papers (2021-04-07T06:18:58Z)
- Cognitive architecture aided by working-memory for self-supervised multi-modal humans recognition [54.749127627191655]
The ability to recognize human partners is an important social skill to build personalized and long-term human-robot interactions.
Deep learning networks have achieved state-of-the-art results and have been shown to be suitable tools for addressing such a task.
One solution is to make robots learn from their first-hand sensory data with self-supervision.
arXiv Detail & Related papers (2021-03-16T13:50:24Z)
- The Ingredients of Real-World Robotic Reinforcement Learning [71.92831985295163]
We discuss the elements that are needed for a robotic learning system that can continually and autonomously improve with data collected in the real world.
We propose a particular instantiation of such a system, using dexterous manipulation as our case study.
We demonstrate that our complete system can learn without any human intervention, acquiring a variety of vision-based skills with a real-world three-fingered hand.
arXiv Detail & Related papers (2020-04-27T03:36:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.