Chemist Eye: A Visual Language Model-Powered System for Safety Monitoring and Robot Decision-Making in Self-Driving Laboratories
- URL: http://arxiv.org/abs/2508.05148v1
- Date: Thu, 07 Aug 2025 08:31:42 GMT
- Title: Chemist Eye: A Visual Language Model-Powered System for Safety Monitoring and Robot Decision-Making in Self-Driving Laboratories
- Authors: Francisco Munguia-Galeano, Zhengxue Zhou, Satheeshkumar Veeramani, Hatem Fakhruldeen, Louis Longley, Rob Clowes, Andrew I. Cooper
- Abstract summary: The integration of robotics and automation into self-driving laboratories (SDLs) can introduce additional safety complexities. Here, we present Chemist Eye, a distributed safety monitoring system designed to enhance situational awareness in SDLs. The system integrates multiple stations equipped with RGB, depth, and infrared cameras, designed to monitor incidents in SDLs.
- Score: 3.1567913519981423
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The integration of robotics and automation into self-driving laboratories (SDLs) can introduce additional safety complexities, in addition to those that already apply to conventional research laboratories. Personal protective equipment (PPE) is an essential requirement for ensuring the safety and well-being of workers in laboratories, self-driving or otherwise. Fires are another important risk factor in chemical laboratories. In SDLs, fires that occur close to mobile robots, which use flammable lithium batteries, could have increased severity. Here, we present Chemist Eye, a distributed safety monitoring system designed to enhance situational awareness in SDLs. The system integrates multiple stations equipped with RGB, depth, and infrared cameras, designed to monitor incidents in SDLs. Chemist Eye is also designed to spot workers who have suffered a potential accident or medical emergency, PPE compliance and fire hazards. To do this, Chemist Eye uses decision-making driven by a vision-language model (VLM). Chemist Eye is designed for seamless integration, enabling real-time communication with robots. Based on the VLM recommendations, the system attempts to drive mobile robots away from potential fire locations, exits, or individuals not wearing PPE, and issues audible warnings where necessary. It also integrates with third-party messaging platforms to provide instant notifications to lab personnel. We tested Chemist Eye with real-world data from an SDL equipped with three mobile robots and found that the spotting of possible safety hazards and decision-making performances reached 97 % and 95 %, respectively.
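The abstract describes a pipeline in which a VLM assesses camera feeds and the system then reroutes robots, issues audible warnings, or notifies personnel. A minimal sketch of that decision layer is below; the JSON verdict format, incident categories, and action names are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical decision layer for a VLM-driven lab safety monitor.
# The VLM is assumed to return a JSON verdict per camera frame; this code
# only maps that verdict to actions (the VLM call itself is out of scope).
import json
from dataclasses import dataclass


@dataclass
class Incident:
    station_id: str
    category: str       # e.g. "fire", "missing_ppe", "person_down", "none"
    confidence: float


def parse_vlm_verdict(station_id: str, raw: str) -> Incident:
    """Parse an assumed JSON verdict such as
    '{"category": "fire", "confidence": 0.92}'."""
    data = json.loads(raw)
    return Incident(station_id, data["category"], float(data["confidence"]))


def decide_actions(incident: Incident, threshold: float = 0.8) -> list[str]:
    """Map an incident to actions: drive robots away from fires, warn on
    missing PPE, and alert personnel for possible medical emergencies."""
    if incident.confidence < threshold or incident.category == "none":
        return []
    actions = {
        "fire": ["reroute_robots_away", "notify_personnel"],
        "missing_ppe": ["audible_warning", "notify_personnel"],
        "person_down": ["notify_personnel"],
    }
    return actions.get(incident.category, ["notify_personnel"])


incident = parse_vlm_verdict("station-2", '{"category": "fire", "confidence": 0.92}')
print(decide_actions(incident))  # ['reroute_robots_away', 'notify_personnel']
```

The confidence threshold reflects the paper's emphasis on reliable spotting (97% hazard detection): low-confidence verdicts trigger no action rather than a false alarm.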
Related papers
- AGENTSAFE: Benchmarking the Safety of Embodied Agents on Hazardous Instructions [76.74726258534142]
We propose AGENTSAFE, the first benchmark for evaluating the safety of embodied VLM agents under hazardous instructions. AGENTSAFE simulates realistic agent-environment interactions within a simulation sandbox. The benchmark includes 45 adversarial scenarios, 1,350 hazardous tasks, and 8,100 hazardous instructions.
arXiv Detail & Related papers (2025-06-17T16:37:35Z) - Safety Guardrails for LLM-Enabled Robots [82.0459036717193]
Traditional robot safety approaches do not address the novel vulnerabilities of large language models (LLMs). We propose RoboGuard, a two-stage guardrail architecture to ensure the safety of LLM-enabled robots. We show that RoboGuard reduces the execution of unsafe plans from 92% to below 2.5% without compromising performance on safe plans.
arXiv Detail & Related papers (2025-03-10T22:01:56Z) - Don't Let Your Robot be Harmful: Responsible Robotic Manipulation via Safety-as-Policy [53.048430683355804]
Unthinking execution of human instructions in robotic manipulation can lead to severe safety risks. We present Safety-as-policy, which includes (i) a world model to automatically generate scenarios containing safety risks and conduct virtual interactions, and (ii) a mental model to infer consequences with reflections. We show that Safety-as-policy can avoid risks and efficiently complete tasks in both synthetic-dataset and real-world experiments.
arXiv Detail & Related papers (2024-11-27T12:27:50Z) - Defining and Evaluating Physical Safety for Large Language Models [62.4971588282174]
Large Language Models (LLMs) are increasingly used to control robotic systems such as drones.
Their risks of causing physical threats and harm in real-world applications remain unexplored.
We classify the physical safety risks of drones into four categories: (1) human-targeted threats, (2) object-targeted threats, (3) infrastructure attacks, and (4) regulatory violations.
arXiv Detail & Related papers (2024-11-04T17:41:25Z) - On the Vulnerability of LLM/VLM-Controlled Robotics [54.57914943017522]
We highlight vulnerabilities in robotic systems integrating large language models (LLMs) and vision-language models (VLMs) due to input modality sensitivities. Our results show that simple input perturbations reduce task execution success rates by 22.2% and 14.6% in two representative LLM/VLM-controlled robotic systems.
arXiv Detail & Related papers (2024-02-15T22:01:45Z) - Chemist-X: Large Language Model-empowered Agent for Reaction Condition Recommendation in Chemical Synthesis [55.30328162764292]
Chemist-X is a comprehensive AI agent that automates the reaction condition optimization (RCO) task in chemical synthesis. The agent uses retrieval-augmented generation (RAG) technology and AI-controlled wet-lab experiment executions. Results of our automatic wet-lab experiments, achieved by fully LLM-supervised end-to-end operation with no human in the loop, prove Chemist-X's ability in self-driving laboratories.
arXiv Detail & Related papers (2023-11-16T01:21:33Z) - Plug in the Safety Chip: Enforcing Constraints for LLM-driven Robot Agents [25.62431723307089]
We propose a queryable safety constraint module based on linear temporal logic (LTL).
Our system strictly adheres to the safety constraints and scales well with complex safety constraints, highlighting its potential for practical utility.
arXiv Detail & Related papers (2023-09-18T16:33:30Z) - A Smart Robotic System for Industrial Plant Supervision [16.68349850187503]
We present a system consisting of an autonomously navigating robot integrated with various sensors and intelligent data processing.
It is able to detect methane leaks and estimate their flow rate, detect more general gas anomalies, localize sound sources and detect failure cases, map the environment in 3D, and navigate autonomously.
arXiv Detail & Related papers (2023-08-10T14:54:21Z) - Improving safety in physical human-robot collaboration via deep metric learning [36.28667896565093]
Direct physical interaction with robots is becoming increasingly important in flexible production scenarios.
In order to keep the risk potential low, relatively simple measures are prescribed for operation, such as stopping the robot if there is physical contact or if a safety distance is violated.
This work uses the Deep Metric Learning (DML) approach to distinguish between non-contact robot movement, intentional contact aimed at physical human-robot interaction, and collision situations.
arXiv Detail & Related papers (2023-02-23T11:26:51Z) - Visual Detection of Personal Protective Equipment and Safety Gear on Industry Workers [49.36909714011171]
We develop a system that will improve workers' safety using a camera that will detect the usage of Personal Protective Equipment (PPE).
Our focus is to implement our system into an entry control point where workers must present themselves to obtain access to a restricted area.
A novelty of this work is that we increase the number of classes to five objects (hardhat, safety vest, safety gloves, safety glasses, and hearing protection).
arXiv Detail & Related papers (2022-12-09T11:50:03Z) - Vision-Based Safety System for Barrierless Human-Robot Collaboration [0.0]
This paper proposes a safety system that implements Speed and Separation Monitoring (SSM) type of operation.
A deep learning-based computer vision system detects, tracks, and estimates the 3D position of operators close to the robot.
Three different operation modes in which the human and robot interact are presented.
arXiv Detail & Related papers (2022-08-03T12:31:03Z) - Wearable camera-based human absolute localization in large warehouses [0.0]
This paper introduces a wearable human localization system for large warehouses.
A monocular down-looking camera is detecting ground nodes, identifying them and computing the absolute position of the human.
A virtual safety area around the human operator is set up and any AGV in this area is immediately stopped.
arXiv Detail & Related papers (2020-07-20T12:57:37Z) - Lio -- A Personal Robot Assistant for Human-Robot Interaction and Care Applications [0.35390706902408026]
Lio is a mobile robot platform with a multi-functional arm explicitly designed for human-robot interaction and personal care assistant tasks.
Lio is intrinsically safe by having full coverage in soft artificial-leather material as well as having collision detection, limited speed and forces.
During the COVID-19 pandemic, Lio was rapidly adjusted to perform additional functionality like disinfection and remote elevated body temperature detection.
arXiv Detail & Related papers (2020-06-16T09:37:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented (including all generated summaries) and is not responsible for any consequences.