Give me scissors: Collision-Free Dual-Arm Surgical Assistive Robot for Instrument Delivery
- URL: http://arxiv.org/abs/2603.02553v1
- Date: Tue, 03 Mar 2026 03:14:59 GMT
- Title: Give me scissors: Collision-Free Dual-Arm Surgical Assistive Robot for Instrument Delivery
- Authors: Xuejin Luo, Shiquan Sun, Runshi Zhang, Ruizhi Zhang, Junchen Wang,
- Abstract summary: During surgery, scrub nurses are required to frequently deliver surgical instruments to surgeons. Existing research on robotic scrub nurses relies on predefined paths for instrument delivery. We present a collision-free dual-arm surgical assistive robot capable of performing instrument delivery.
- Score: 4.187693745535412
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: During surgery, scrub nurses are required to frequently deliver surgical instruments to surgeons, which can lead to physical fatigue and decreased focus. Robotic scrub nurses provide a promising solution that can replace repetitive tasks and enhance efficiency. Existing research on robotic scrub nurses relies on predefined paths for instrument delivery, which limits their generalizability and poses safety risks in dynamic environments. To address these challenges, we present a collision-free dual-arm surgical assistive robot capable of performing instrument delivery. A vision-language model is utilized to automatically generate the robot's grasping and delivery trajectories in a zero-shot manner based on surgeons' instructions. A real-time obstacle minimum distance perception method is proposed and integrated into a unified quadratic programming framework. This framework ensures reactive obstacle avoidance and self-collision prevention during the dual-arm robot's autonomous movement in dynamic environments. Extensive experimental validations demonstrate that the proposed robotic system achieves an 83.33% success rate in surgical instrument delivery while maintaining smooth, collision-free movement throughout all trials. The project page and source code are available at https://give-me-scissors.github.io/.
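The abstract describes a unified quadratic programming framework that turns a real-time minimum-distance signal into reactive obstacle avoidance. A minimal pure-Python sketch of one common way such a constraint is realized (a velocity-damper inequality plus a single half-space QP projection); all function and parameter names here are illustrative assumptions, not taken from the paper:

```python
def damper_bound(d, d_safe, d_influence, xi):
    """Upper bound on the approach speed toward an obstacle.

    d: current minimum distance to the obstacle
    d_safe: hard minimum distance that must never be crossed
    d_influence: distance at which braking starts
    xi: positive damping gain
    """
    if d >= d_influence:
        return float("inf")  # obstacle too far away to constrain motion
    # Bound shrinks linearly to zero as d approaches d_safe.
    return xi * (d - d_safe) / (d_influence - d_safe)

def project_velocity(v_des, n, bound):
    """Solve min ||v - v_des||^2  s.t.  n . v <= bound.

    This single half-space QP has a closed-form solution: if the desired
    velocity violates the constraint, project it onto the boundary.
    n is the unit direction toward the obstacle.
    """
    approach = sum(ni * vi for ni, vi in zip(n, v_des))
    if approach <= bound:
        return list(v_des)  # constraint inactive; keep desired velocity
    scale = approach - bound
    return [vi - scale * ni for vi, ni in zip(v_des, n)]
```

In a full dual-arm controller, one such inequality per obstacle pair (including arm-to-arm pairs for self-collision prevention) would be stacked into a single QP over the joint velocities of both arms; the closed-form projection above only illustrates the one-constraint case.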
Related papers
- CHIP: Adaptive Compliance for Humanoid Control through Hindsight Perturbation [70.5382178207975]
Adaptive Compliance for Humanoid Control through Hindsight Perturbation (CHIP) is a plug-and-play module that enables controllable end-effector stiffness. CHIP is easy to implement and requires neither data augmentation nor additional reward tuning. We show that a generalist motion-tracking controller trained with CHIP can perform a diverse set of forceful manipulation tasks.
arXiv Detail & Related papers (2025-12-16T18:56:04Z) - Localising under the drape: proprioception in the era of distributed surgical robotic system [12.001086860486906]
We present a marker-free proprioception method that enables precise localisation of surgical robots under their sterile draping. Our method relies on lightweight stereo-RGB cameras and novel transformer-based deep learning models. It builds on the largest multi-centre spatial robotic surgery dataset to date.
arXiv Detail & Related papers (2025-10-27T16:50:12Z) - Hysteresis-Aware Neural Network Modeling and Whole-Body Reinforcement Learning Control of Soft Robots [14.02771001060961]
We present a soft robotic system designed for surgical applications. We propose a whole-body neural network model that accurately captures and predicts the soft robot's whole-body motion. The proposed method showed strong performance in phantom-based surgical experiments.
arXiv Detail & Related papers (2025-04-18T09:34:56Z) - VidBot: Learning Generalizable 3D Actions from In-the-Wild 2D Human Videos for Zero-Shot Robotic Manipulation [53.63540587160549]
VidBot is a framework enabling zero-shot robotic manipulation using learned 3D affordance from in-the-wild monocular RGB-only human videos. VidBot paves the way for leveraging everyday human videos to make robot learning more scalable.
arXiv Detail & Related papers (2025-03-10T10:04:58Z) - General-purpose foundation models for increased autonomy in robot-assisted surgery [4.155479231940454]
This perspective article aims to provide a path toward increasing robot autonomy in robot-assisted surgery.
We argue that surgical robots are uniquely positioned to benefit from general-purpose models and provide three guiding actions toward increased autonomy in robot-assisted surgery.
arXiv Detail & Related papers (2024-01-01T06:15:16Z) - Robot Learning with Sensorimotor Pre-training [98.7755895548928]
We present a self-supervised sensorimotor pre-training approach for robotics.
Our model, called RPT, is a Transformer that operates on sequences of sensorimotor tokens.
We find that sensorimotor pre-training consistently outperforms training from scratch, has favorable scaling properties, and enables transfer across different tasks, environments, and robots.
arXiv Detail & Related papers (2023-06-16T17:58:10Z) - Robotic Navigation Autonomy for Subretinal Injection via Intelligent Real-Time Virtual iOCT Volume Slicing [88.99939660183881]
We propose a framework for autonomous robotic navigation for subretinal injection.
Our method consists of an instrument pose estimation method, an online registration between the robotic and the iOCT system, and trajectory planning tailored for navigation to an injection target.
Our experiments on ex-vivo porcine eyes demonstrate the precision and repeatability of the method.
arXiv Detail & Related papers (2023-01-17T21:41:21Z) - Active Predicting Coding: Brain-Inspired Reinforcement Learning for Sparse Reward Robotic Control Problems [79.07468367923619]
We propose a backpropagation-free approach to robotic control through the neuro-cognitive computational framework of neural generative coding (NGC).
We design an agent built completely from powerful predictive coding/processing circuits that facilitate dynamic, online learning from sparse rewards.
We show that our proposed ActPC agent performs well in the face of sparse (extrinsic) reward signals and is competitive with or outperforms several powerful backprop-based RL approaches.
arXiv Detail & Related papers (2022-09-19T16:49:32Z) - Learning needle insertion from sample task executions [0.0]
Data from robotic surgery can be easily logged, and the collected data can be used to learn task models.
We present a needle insertion dataset including 60 successful trials recorded by 3 pairs of stereo cameras.
We also present Deep-robot Learning from Demonstrations, which predicts the desired state of the robot at the next time step after t.
arXiv Detail & Related papers (2021-03-14T14:23:17Z) - Online Body Schema Adaptation through Cost-Sensitive Active Learning [63.84207660737483]
The work was implemented in a simulation environment, using the 7DoF arm of the iCub robot simulator.
A cost-sensitive active learning approach is used to select optimal joint configurations.
The results show that cost-sensitive active learning achieves accuracy similar to the standard active learning approach while roughly halving the executed movement.
arXiv Detail & Related papers (2021-01-26T16:01:02Z) - Using Conditional Generative Adversarial Networks to Reduce the Effects of Latency in Robotic Telesurgery [0.0]
In surgery, any micro-delay can injure a patient severely and in some cases, result in fatality.
Current surgical robots use calibrated sensors to measure the position of the arms and tools.
In this work we present a purely optical approach that provides a measurement of the tool position in relation to the patient's tissues.
arXiv Detail & Related papers (2020-10-07T13:40:44Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.