Plug in the Safety Chip: Enforcing Constraints for LLM-driven Robot Agents
- URL: http://arxiv.org/abs/2309.09919v3
- Date: Tue, 28 Nov 2023 07:08:29 GMT
- Title: Plug in the Safety Chip: Enforcing Constraints for LLM-driven Robot Agents
- Authors: Ziyi Yang and Shreyas S. Raman and Ankit Shah and Stefanie Tellex
- Abstract summary: We propose a queryable safety constraint module based on linear temporal logic (LTL).
Our system strictly adheres to the safety constraints and scales well with complex safety constraints, highlighting its potential for practical utility.
- Score: 25.62431723307089
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent advancements in large language models (LLMs) have enabled a new
research domain, LLM agents, for solving robotics and planning tasks by
leveraging the world knowledge and general reasoning abilities of LLMs obtained
during pretraining. However, while considerable effort has been made to teach
the robot the "dos," the "don'ts" received relatively less attention. We argue
that, for any practical usage, it is as crucial to teach the robot the
"don'ts": conveying explicit instructions about prohibited actions, assessing
the robot's comprehension of these restrictions, and, most importantly,
ensuring compliance. Moreover, verifiable safe operation is essential for
deployments that must satisfy international standards such as IEC 61508, which
defines requirements for safely deploying robots in industrial factory
environments. Aiming to deploy LLM agents in a collaborative environment, we
propose a queryable safety constraint module based on linear temporal logic
(LTL) that simultaneously enables encoding natural language (NL) into temporal
constraints, reasoning about and explaining safety violations, and pruning
unsafe actions.
To demonstrate the effectiveness of our system, we conducted experiments in
the VirtualHome environment and on a real robot. The experimental results show that
our system strictly adheres to the safety constraints and scales well with
complex safety constraints, highlighting its potential for practical utility.
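To make the module concrete, here is a minimal, hypothetical sketch of LTL-style unsafe-action pruning over a propositional state. The class names, the toy "globally avoid" constraint, and the effects model are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of LTL-style unsafe-action pruning (illustrative only;
# names and the toy constraint are assumptions, not the paper's API).
from dataclasses import dataclass
from typing import Callable, FrozenSet

State = FrozenSet[str]  # set of currently true propositions

@dataclass
class SafetyConstraint:
    """A 'globally avoid' constraint G !(p1 & p2 & ...), a common LTL safety pattern."""
    forbidden: FrozenSet[str]
    explanation: str

    def violated_by(self, state: State) -> bool:
        return self.forbidden <= state  # all forbidden propositions hold at once

def prune_unsafe(state: State,
                 actions: list[str],
                 effects: Callable[[State, str], State],
                 constraints: list[SafetyConstraint]) -> list[str]:
    """Keep only actions whose predicted next state violates no constraint,
    and explain each rejection."""
    safe = []
    for a in actions:
        nxt = effects(state, a)
        bad = [c for c in constraints if c.violated_by(nxt)]
        if bad:
            print(f"pruned {a!r}: {bad[0].explanation}")
        else:
            safe.append(a)
    return safe

# Toy usage: never carry a knife near a human.
c = SafetyConstraint(frozenset({"holding_knife", "near_human"}),
                     "do not approach a human while holding a knife")
effects = lambda s, a: s | {"near_human"} if a == "walk_to_human" else s
print(prune_unsafe(frozenset({"holding_knife"}),
                   ["walk_to_human", "put_down_knife"], effects, [c]))
```

A real system would obtain the next-state prediction from the planner's action model and the LTL formulas from the NL encoder; the pruning step itself stays this simple.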
Related papers
- Defining and Evaluating Physical Safety for Large Language Models [62.4971588282174]
Large Language Models (LLMs) are increasingly used to control robotic systems such as drones.
However, their risks of causing physical threats and harm in real-world applications remain unexplored.
We classify the physical safety risks of drones into four categories: (1) human-targeted threats, (2) object-targeted threats, (3) infrastructure attacks, and (4) regulatory violations.
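As an illustration, the four categories could be represented as a simple enum for labeling commands; the keyword heuristics below are placeholders, not the paper's method.

```python
# Hypothetical sketch: tagging drone commands with the four risk categories
# from the paper's taxonomy. Keywords are illustrative placeholders.
from enum import Enum, auto

class PhysicalRisk(Enum):
    HUMAN_TARGETED = auto()
    OBJECT_TARGETED = auto()
    INFRASTRUCTURE_ATTACK = auto()
    REGULATORY_VIOLATION = auto()

RISK_KEYWORDS = {
    PhysicalRisk.HUMAN_TARGETED: ("crowd", "person", "follow someone"),
    PhysicalRisk.OBJECT_TARGETED: ("ram", "knock over", "collide"),
    PhysicalRisk.INFRASTRUCTURE_ATTACK: ("power line", "antenna", "runway"),
    PhysicalRisk.REGULATORY_VIOLATION: ("no-fly zone", "airport"),
}

def classify(command: str) -> list[PhysicalRisk]:
    cmd = command.lower()
    return [risk for risk, kws in RISK_KEYWORDS.items()
            if any(k in cmd for k in kws)]

print(classify("Fly over the crowd near the airport"))
# -> [PhysicalRisk.HUMAN_TARGETED, PhysicalRisk.REGULATORY_VIOLATION]
```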
arXiv Detail & Related papers (2024-11-04T17:41:25Z) - SafeEmbodAI: a Safety Framework for Mobile Robots in Embodied AI Systems [5.055705635181593]
Embodied AI systems, including AI-powered robots that autonomously interact with the physical world, stand to be significantly advanced.
Improper safety management can lead to failures in complex environments and make the system vulnerable to malicious command injections.
We propose SafeEmbodAI, a safety framework for integrating mobile robots into embodied AI systems.
arXiv Detail & Related papers (2024-09-03T05:56:50Z) - Robotic Control via Embodied Chain-of-Thought Reasoning [86.6680905262442]
A key limitation of learned robot control policies is their inability to generalize outside their training data.
Recent works on vision-language-action models (VLAs) have shown that the use of large, internet pre-trained vision-language models can substantially improve their robustness and generalization ability.
We introduce Embodied Chain-of-Thought Reasoning (ECoT) for VLAs, in which we train VLAs to perform multiple steps of reasoning about plans, sub-tasks, motions, and visually grounded features before predicting the robot action.
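The summary describes training VLAs to emit intermediate reasoning before the action. Below is a hypothetical sketch of what one such training record might look like; the field names and serialization are assumptions, not the ECoT paper's exact schema.

```python
# Illustrative sketch of an embodied chain-of-thought record: the model is
# trained to emit these reasoning fields before the action.
from dataclasses import dataclass

@dataclass
class EmbodiedChainOfThought:
    task: str          # high-level instruction
    plan: str          # step-by-step plan
    subtask: str       # current sub-task
    motion: str        # low-level motion description
    grounding: str     # visually grounded features (e.g., target bounding box)
    action: str        # final robot action token(s)

def to_training_text(cot: EmbodiedChainOfThought) -> str:
    """Serialize the chain so action prediction is conditioned on reasoning."""
    return (f"TASK: {cot.task}\nPLAN: {cot.plan}\nSUBTASK: {cot.subtask}\n"
            f"MOTION: {cot.motion}\nGROUNDING: {cot.grounding}\n"
            f"ACTION: {cot.action}")

example = EmbodiedChainOfThought(
    task="put the cup in the sink",
    plan="1) locate cup 2) grasp cup 3) move to sink 4) release",
    subtask="grasp cup",
    motion="move gripper left and down, then close",
    grounding="cup at bbox (212, 180, 260, 240)",
    action="dx=-0.04 dy=-0.03 gripper=close",
)
print(to_training_text(example))
```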
arXiv Detail & Related papers (2024-07-11T17:31:01Z) - ABNet: Attention BarrierNet for Safe and Scalable Robot Learning [58.4951884593569]
Barrier-based methods are among the dominant approaches to safe robot learning.
We propose Attention BarrierNet (ABNet), which scales incrementally to build larger foundational safe models.
We demonstrate the strength of ABNet in 2D robot obstacle avoidance, safe robot manipulation, and vision-based end-to-end autonomous driving.
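For context on the barrier idea, here is a minimal discrete-time control-barrier-function safety filter. This is a generic sketch of the barrier principle, not the ABNet architecture, and all numbers are toy values.

```python
# A minimal discrete-time control-barrier-function (CBF) safety filter,
# illustrating the general barrier principle behind methods like BarrierNet.
import numpy as np

def h(x: np.ndarray, obstacle: np.ndarray, radius: float) -> float:
    """Barrier: h(x) >= 0 iff the robot is outside the obstacle's radius."""
    return float(np.linalg.norm(x - obstacle) ** 2 - radius ** 2)

def safe_action(x, u_nominal, obstacle, radius, gamma=0.5, dt=0.1):
    """Pick the candidate action closest to u_nominal that satisfies the
    discrete-time CBF condition h(x') >= (1 - gamma) * h(x)."""
    grid = np.linspace(-1.0, 1.0, 9)          # coarse grid of velocity commands
    candidates = [np.array([a, b]) for a in grid for b in grid]
    feasible = [u for u in candidates
                if h(x + dt * u, obstacle, radius)
                >= (1 - gamma) * h(x, obstacle, radius)]
    if not feasible:
        raise RuntimeError("no safe action in candidate set")
    return min(feasible, key=lambda u: np.linalg.norm(u - u_nominal))

x = np.array([0.0, 0.0])
u_nom = np.array([1.0, 0.0])                  # nominal policy: drive toward +x
print(safe_action(x, u_nom, obstacle=np.array([0.3, 0.0]), radius=0.2))
```

BarrierNet-style methods differentiate through such a filter so the safety layer is trained jointly with the policy, rather than enumerating candidates as this toy version does.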
arXiv Detail & Related papers (2024-06-18T19:37:44Z) - Safety Control of Service Robots with LLMs and Embodied Knowledge Graphs [12.787160626087744]
We propose a novel integration of Large Language Models with Embodied Robotic Control Prompts (ERCPs) and Embodied Knowledge Graphs (EKGs).
ERCPs are designed as predefined instructions that ensure LLMs generate safe and precise responses.
EKGs provide a comprehensive knowledge base ensuring that the actions of the robot are continuously aligned with safety protocols.
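A hedged sketch of how such a knowledge-graph check might gate LLM-proposed actions; the triple format and graph contents are illustrative assumptions, not the paper's EKG schema.

```python
# Illustrative sketch: validate each LLM-proposed action against a small
# safety knowledge graph before execution.
SAFETY_KG = {  # (subject, relation) -> set of objects
    ("kettle", "unsafe_when"): {"empty", "unattended"},
    ("knife", "forbidden_action"): {"hand_to_human_blade_first"},
    ("cleaning_agent", "must_not_mix_with"): {"bleach"},
}

def check_action(obj: str, action: str, context: set[str]) -> tuple[bool, str]:
    if action in SAFETY_KG.get((obj, "forbidden_action"), set()):
        return False, f"{action} on {obj} is forbidden by the knowledge graph"
    hazards = SAFETY_KG.get((obj, "unsafe_when"), set()) & context
    if hazards:
        return False, f"{obj} is unsafe when {', '.join(sorted(hazards))}"
    return True, "ok"

print(check_action("kettle", "turn_on", context={"empty"}))
# -> (False, 'kettle is unsafe when empty')
```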
arXiv Detail & Related papers (2024-05-28T05:50:25Z) - Safe Reinforcement Learning on the Constraint Manifold: Theory and Applications [21.98309272057848]
We show how we can impose complex safety constraints on learning-based robotics systems in a principled manner.
Our approach is based on the concept of the Constraint Manifold, representing the set of safe robot configurations.
We demonstrate the method's effectiveness in a real-world Robot Air Hockey task.
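The core idea can be sketched numerically: restrict exploration steps to the tangent space of the safe set {q : c(q) = 0} by projecting through the null space of the constraint Jacobian. This is a generic illustration of the concept, not the authors' algorithm.

```python
# Minimal sketch of staying on a constraint manifold: project sampled
# exploration steps through the null space of the constraint Jacobian.
import numpy as np

def tangent_projection(J: np.ndarray) -> np.ndarray:
    """Projector N = I - J^+ J onto the null space of the constraint Jacobian."""
    return np.eye(J.shape[1]) - np.linalg.pinv(J) @ J

def c(q):  # toy constraint: configuration stays on the unit circle
    return np.array([q[0] ** 2 + q[1] ** 2 - 1.0])

def jacobian(q, eps=1e-6):  # numeric Jacobian of c at q
    return np.array([[(c(q + eps * e) - c(q))[0] / eps
                      for e in np.eye(len(q))]])

q = np.array([1.0, 0.0])
dq_raw = np.array([0.3, 0.7])              # unconstrained exploration step
dq_safe = tangent_projection(jacobian(q)) @ dq_raw
print(dq_safe, c(q + 1e-3 * dq_safe))      # motion stays (locally) on c(q)=0
```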
arXiv Detail & Related papers (2024-04-13T20:55:15Z) - Highlighting the Safety Concerns of Deploying LLMs/VLMs in Robotics [54.57914943017522]
We highlight the critical issues of robustness and safety associated with integrating large language models (LLMs) and vision-language models (VLMs) into robotics applications.
arXiv Detail & Related papers (2024-02-15T22:01:45Z) - InCoRo: In-Context Learning for Robotics Control with Feedback Loops [4.702566749969133]
InCoRo is a system that uses a classical robotic feedback loop composed of an LLM controller, a scene understanding unit, and a robot.
We highlight the generalization capabilities of our system and show that InCoRo surpasses the prior art in terms of the success rate.
This research paves the way towards building reliable, efficient, intelligent autonomous systems that adapt to dynamic environments.
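A minimal sketch of such a feedback loop, with hypothetical stand-ins for the three components (the interfaces are assumptions, not InCoRo's API).

```python
# Sketch of the classical feedback loop the summary describes:
# LLM controller + scene-understanding unit + robot, iterated until success.
def incoro_loop(task: str, llm, perceive, robot, max_steps: int = 20) -> bool:
    for _ in range(max_steps):
        scene = perceive()                      # scene-understanding unit
        command = llm(task=task, scene=scene)   # LLM controller closes the loop
        feedback = robot(command)               # execute and observe outcome
        if feedback == "success":
            return True
        task = f"{task}\nprevious attempt failed: {feedback}"  # feed error back
    return False

# Toy stubs so the loop runs end to end.
attempts = iter(["gripper missed the handle", "success"])
ok = incoro_loop("open the drawer",
                 llm=lambda task, scene: f"plan for: {task[:30]}",
                 perceive=lambda: "drawer closed",
                 robot=lambda cmd: next(attempts))
print(ok)  # True after one corrective iteration
```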
arXiv Detail & Related papers (2024-02-07T19:01:11Z) - Empowering Autonomous Driving with Large Language Models: A Safety Perspective [82.90376711290808]
This paper explores the integration of Large Language Models (LLMs) into Autonomous Driving systems.
LLMs serve as intelligent decision-makers in behavioral planning, augmented with a safety verifier shield for contextual safety learning.
We present two key studies in a simulated environment: an adaptive LLM-conditioned Model Predictive Control (MPC) and an LLM-enabled interactive behavior planning scheme with a state machine.
arXiv Detail & Related papers (2023-11-28T03:13:09Z) - A Multiplicative Value Function for Safe and Efficient Reinforcement
Learning [131.96501469927733]
We propose a safe model-free RL algorithm with a novel multiplicative value function consisting of a safety critic and a reward critic.
The safety critic predicts the probability of constraint violation and discounts the reward critic that only estimates constraint-free returns.
We evaluate our method in four safety-focused environments, including classical RL benchmarks augmented with safety constraints and robot navigation tasks with images and raw Lidar scans as observations.
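Conceptually, the multiplicative value function can be sketched in a few lines; the action values and violation probabilities below are toy numbers.

```python
# Sketch of the multiplicative value function: a safety critic estimating the
# probability of staying constraint-free scales a reward critic that only
# models constraint-free returns.
def multiplicative_q(q_reward: float, p_violation: float) -> float:
    """Q(s, a) = Pr[safe](s, a) * Q_reward(s, a)."""
    return (1.0 - p_violation) * q_reward

# Action selection: a risky high-reward action loses to a safer one.
actions = {"fast_path": (10.0, 0.4), "slow_path": (7.0, 0.02)}
best = max(actions, key=lambda a: multiplicative_q(*actions[a]))
print(best)  # slow_path: 0.98 * 7 = 6.86 > 0.6 * 10 = 6.0
```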
arXiv Detail & Related papers (2023-03-07T18:29:15Z) - Safe reinforcement learning of dynamic high-dimensional robotic tasks:
navigation, manipulation, interaction [31.553783147007177]
In reinforcement learning, safety is even more fundamental, as the agent must explore its environment without causing any damage.
This paper introduces a new formulation of safe exploration for reinforcement learning of various robotic tasks.
Our approach applies to a wide class of robotic platforms and enforces safety even under complex collision constraints learned from data.
arXiv Detail & Related papers (2022-09-27T11:23:49Z)