Human-Robot Red Teaming for Safety-Aware Reasoning
- URL: http://arxiv.org/abs/2508.01129v1
- Date: Sat, 02 Aug 2025 00:55:09 GMT
- Title: Human-Robot Red Teaming for Safety-Aware Reasoning
- Authors: Emily Sheetz, Emma Zemler, Misha Savchenko, Connor Rainen, Erik Holum, Jodi Graf, Andrew Albright, Shaun Azimi, Benjamin Kuipers,
- Abstract summary: We propose the human-robot red teaming paradigm for safety-aware reasoning. We expect humans and robots to work together to challenge assumptions about an environment and explore the space of hazards that may arise. This exploration will enable robots to perform safety-aware reasoning, specifically hazard identification, risk assessment, risk mitigation, and safety reporting.
- Score: 1.3060095849496556
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: While much research explores improving robot capabilities, there is a deficit in researching how robots are expected to perform tasks safely, especially in high-risk problem domains. Robots must earn the trust of human operators in order to be effective collaborators in safety-critical tasks, specifically those where robots operate in human environments. We propose the human-robot red teaming paradigm for safety-aware reasoning. We expect humans and robots to work together to challenge assumptions about an environment and explore the space of hazards that may arise. This exploration will enable robots to perform safety-aware reasoning, specifically hazard identification, risk assessment, risk mitigation, and safety reporting. We demonstrate that: (a) human-robot red teaming allows human-robot teams to plan to perform tasks safely in a variety of domains, and (b) robots with different embodiments can learn to operate safely in two different environments -- a lunar habitat and a household -- with varying definitions of safety. Taken together, our work on human-robot red teaming for safety-aware reasoning demonstrates the feasibility of this approach for safely operating and promoting trust on human-robot teams in safety-critical problem domains.
Related papers
- A roadmap for AI in robotics [55.87087746398059]
We are witnessing growing excitement in robotics at the prospect of leveraging the potential of AI to tackle some of the outstanding barriers to the full deployment of robots in our daily lives. This article offers an assessment of what AI for robotics has achieved since the 1990s and proposes a short- and medium-term research roadmap listing challenges and promises.
arXiv Detail & Related papers (2025-07-26T15:18:28Z)
- Don't Let Your Robot be Harmful: Responsible Robotic Manipulation via Safety-as-Policy [53.048430683355804]
Unthinking execution of human instructions in robotic manipulation can lead to severe safety risks. We present Safety-as-policy, which includes (i) a world model to automatically generate scenarios containing safety risks and conduct virtual interactions, and (ii) a mental model to infer consequences with reflections. We show that Safety-as-policy can avoid risks and efficiently complete tasks in both synthetic datasets and real-world experiments.
arXiv Detail & Related papers (2024-11-27T12:27:50Z)
- Common (good) practices measuring trust in HRI [55.2480439325792]
Trust in robots is widely believed to be imperative for the adoption of robots into people's daily lives.
Researchers have been exploring how people trust robots in different ways.
Most roboticists agree that insufficient levels of trust lead to a risk of disengagement.
arXiv Detail & Related papers (2023-11-20T20:52:10Z)
- Habitat 3.0: A Co-Habitat for Humans, Avatars and Robots [119.55240471433302]
Habitat 3.0 is a simulation platform for studying collaborative human-robot tasks in home environments.
It addresses challenges in modeling complex deformable bodies and diversity in appearance and motion.
Human-in-the-loop infrastructure enables real human interaction with simulated robots via mouse/keyboard or a VR interface.
arXiv Detail & Related papers (2023-10-19T17:29:17Z)
- The dynamic nature of trust: Trust in Human-Robot Interaction revisited [0.38233569758620045]
Socially assistive robots (SARs) assist humans in the real world.
Risk introduces an element of trust, so understanding human trust in the robot is imperative.
arXiv Detail & Related papers (2023-03-08T19:20:11Z)
- HERD: Continuous Human-to-Robot Evolution for Learning from Human Demonstration [57.045140028275036]
We show that manipulation skills can be transferred from a human to a robot through the use of micro-evolutionary reinforcement learning.
We propose an algorithm for multi-dimensional evolution path searching that allows joint optimization of both the robot evolution path and the policy.
arXiv Detail & Related papers (2022-12-08T15:56:13Z)
- Vision-Based Safety System for Barrierless Human-Robot Collaboration [0.0]
This paper proposes a safety system that implements Speed and Separation Monitoring (SSM) type of operation.
A deep learning-based computer vision system detects, tracks, and estimates the 3D position of operators close to the robot.
Three different operation modes in which the human and robot interact are presented.
arXiv Detail & Related papers (2022-08-03T12:31:03Z)
- A Review on Trust in Human-Robot Interaction [0.0]
A new field of research in human-robot interaction, namely human-robot trust, is emerging.
This paper reviews past works on human-robot trust based on their research topics and discusses selected trends in this field.
arXiv Detail & Related papers (2021-05-20T21:50:03Z)
- Show Me What You Can Do: Capability Calibration on Reachable Workspace for Human-Robot Collaboration [83.4081612443128]
We show that a short calibration using REMP can effectively bridge the gap between what a non-expert user thinks a robot can reach and the ground-truth.
We show that this calibration procedure not only results in better user perception, but also promotes more efficient human-robot collaborations.
arXiv Detail & Related papers (2021-03-06T09:14:30Z)
- Motion Planning Combines Psychological Safety and Motion Prediction for a Sense Motive Robot [2.14239637027446]
This paper addresses the human safety issue by covering both the physical safety and psychological safety aspects.
First, we introduce a method for adaptive robot velocity control and step size adjustment based on human facial expressions, so that the robot can adjust its movement to maintain safety when the human's emotional state is unusual. Second, we predict human motion by detecting sudden changes in head pose and gaze direction, so that the robot can infer whether the human's attention is distracted, predict the human's next move, and build a repulsive force to avoid potential collisions.
arXiv Detail & Related papers (2020-09-29T04:19:53Z)
- Supportive Actions for Manipulation in Human-Robot Coworker Teams [15.978389978586414]
We term actions that support interaction by reducing future interference with others as supportive robot actions.
We compare two robot modes in a shared-table pick-and-place task: (1) task-oriented, where the robot only takes actions that further its own task objective, and (2) supportive, where the robot sometimes prefers supportive actions to task-oriented ones.
Our experiments in simulation, using a simplified human model, reveal that supportive actions reduce the interference between agents, especially in more difficult tasks, but also cause the robot to take longer to complete the task.
arXiv Detail & Related papers (2020-05-02T09:37:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.