Regulating human control over autonomous systems
- URL: http://arxiv.org/abs/2007.11218v1
- Date: Wed, 22 Jul 2020 06:05:41 GMT
- Title: Regulating human control over autonomous systems
- Authors: Mikolaj Firlej, Araz Taeihagh
- Abstract summary: It is argued that the use of increasingly autonomous systems should be guided by the policy of human control.
This article explores the notion of human control in the United States in the two domains of defense and transportation.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, many sectors have experienced significant progress in
automation, associated with the growing advances in artificial intelligence and
machine learning. There are already automated robotic weapons, which are able
to evaluate and engage with targets on their own, and there are already
autonomous vehicles that do not need a human driver. It is argued that the use
of increasingly autonomous systems (AS) should be guided by the policy of human
control, according to which humans should execute a certain significant level
of judgment over AS. While in the military sector there is a fear that AS could
mean that humans lose control over life and death decisions, in the
transportation domain, on the contrary, there is a strongly held view that
autonomy could bring significant operational benefits by removing the need for
a human driver. This article explores the notion of human control in the United
States in the two domains of defense and transportation. The operationalization
of emerging policies of human control results in the typology of direct and
indirect human controls exercised over the use of AS. The typology helps to
steer the debate away from the linguistic complexities of the term autonomy. It
identifies instead where human factors are undergoing important changes and
ultimately informs about more detailed rules and standards formulation, which
differ across domains, applications, and sectors.
Related papers
- Robotic Control via Embodied Chain-of-Thought Reasoning [86.6680905262442]
Key limitation of learned robot control policies is their inability to generalize outside their training data.
Recent works on vision-language-action models (VLAs) have shown that the use of large, internet pre-trained vision-language models can substantially improve their robustness and generalization ability.
We introduce Embodied Chain-of-Thought Reasoning (ECoT) for VLAs, in which we train VLAs to perform multiple steps of reasoning about plans, sub-tasks, motions, and visually grounded features before predicting the robot action.
arXiv Detail & Related papers (2024-07-11T17:31:01Z)
- Commonsense Reasoning for Legged Robot Adaptation with Vision-Language Models [81.55156507635286]
Legged robots are physically capable of navigating a wide variety of environments and overcoming a broad range of obstructions.
Current learning methods often struggle with generalization to the long tail of unexpected situations without heavy human supervision.
We propose a system, VLM-Predictive Control (VLM-PC), combining two key components that we find to be crucial for eliciting on-the-fly, adaptive behavior selection.
arXiv Detail & Related papers (2024-07-02T21:00:30Z) - Work-in-Progress: Crash Course: Can (Under Attack) Autonomous Driving Beat Human Drivers? [60.51287814584477]
This paper evaluates the inherent risks in autonomous driving by examining the current landscape of AVs.
We develop specific claims highlighting the delicate balance between the advantages of AVs and potential security challenges in real-world scenarios.
arXiv Detail & Related papers (2024-05-14T09:42:21Z) - A Language Agent for Autonomous Driving [31.359413767191608]
We propose a paradigm shift to integrate human-like intelligence into autonomous driving systems.
Our approach, termed Agent-Driver, transforms the traditional autonomous driving pipeline by introducing a versatile tool library.
Powered by Large Language Models (LLMs), our Agent-Driver is endowed with intuitive common sense and robust reasoning capabilities.
arXiv Detail & Related papers (2023-11-17T18:59:56Z)
- Managing extreme AI risks amid rapid progress [171.05448842016125]
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z)
- Exploring AI-enhanced Shared Control for an Assistive Robotic Arm [4.999814847776098]
We explore how Artificial Intelligence (AI) can be integrated into a shared control paradigm.
In particular, we focus on the consequential requirements for the interface between human and robot.
arXiv Detail & Related papers (2023-06-23T14:19:56Z)
- "No, to the Right" -- Online Language Corrections for Robotic Manipulation via Shared Autonomy [70.45420918526926]
We present LILAC, a framework for incorporating and adapting to natural language corrections online during execution.
Instead of discrete turn-taking between a human and robot, LILAC splits agency between the human and robot.
We show that our corrections-aware approach obtains higher task completion rates, and is subjectively preferred by users.
arXiv Detail & Related papers (2023-01-06T15:03:27Z)
- HERD: Continuous Human-to-Robot Evolution for Learning from Human Demonstration [57.045140028275036]
We show that manipulation skills can be transferred from a human to a robot through the use of micro-evolutionary reinforcement learning.
We propose an algorithm for multi-dimensional evolution path searching that allows joint optimization of both the robot evolution path and the policy.
arXiv Detail & Related papers (2022-12-08T15:56:13Z)
- Meaningful human control over AI systems: beyond talking the talk [8.351027101823705]
We identify four properties which AI-based systems must have to be under meaningful human control.
First, a system in which humans and AI algorithms interact should have an explicitly defined domain of morally loaded situations.
Second, humans and AI agents within the system should have appropriate and mutually compatible representations.
Third, responsibility attributed to a human should be commensurate with that human's ability and authority to control the system.
arXiv Detail & Related papers (2021-11-25T11:05:37Z)
- Drivers' Manoeuvre Modelling and Prediction for Safe HRI [0.0]
Theory of Mind has been broadly explored for robotics and recently for autonomous and semi-autonomous vehicles.
We explored how to predict human intentions before an action is performed by combining data from human-motion, vehicle-state and human inputs.
arXiv Detail & Related papers (2021-06-03T10:07:55Z)
- Respect for Human Autonomy in Recommender Systems [24.633323508534254]
Many ethical systems point to respect for human autonomy as a key principle arising from human rights considerations.
However, no specific formalization of this principle has been defined.
We argue that there is a need to specifically operationalize respect for human autonomy in the context of recommender systems.
arXiv Detail & Related papers (2020-09-05T21:39:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed papers (including all information) and is not responsible for any consequences.