Autonomous Intraluminal Navigation of a Soft Robot using
Deep-Learning-based Visual Servoing
- URL: http://arxiv.org/abs/2207.00401v1
- Date: Fri, 1 Jul 2022 13:17:45 GMT
- Title: Autonomous Intraluminal Navigation of a Soft Robot using
Deep-Learning-based Visual Servoing
- Authors: Jorge F. Lazo and Chun-Feng Lai and Sara Moccia and Benoit Rosa and
Michele Catellani and Michel de Mathelin and Giancarlo Ferrigno and Paul
Breedveld and Jenny Dankelman and Elena De Momi
- Abstract summary: We present a synergic solution for intraluminal navigation consisting of a 3D printed endoscopic soft robot.
Visual servoing, based on Convolutional Neural Networks (CNNs), is used to achieve the autonomous navigation task.
The proposed robot is validated in anatomical phantoms in different path configurations.
- Score: 13.268863900187025
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Navigation inside luminal organs is an arduous task that requires
non-intuitive coordination between the movement of the operator's hand and the
information obtained from the endoscopic video. The development of tools to
automate certain tasks could alleviate the physical and mental load of doctors
during interventions, allowing them to focus on diagnosis and decision-making
tasks. In this paper, we present a synergic solution for intraluminal
navigation consisting of a 3D printed endoscopic soft robot that can move
safely inside luminal structures. Visual servoing, based on Convolutional
Neural Networks (CNNs), is used to achieve the autonomous navigation task. The
CNN is trained on phantom and in-vivo data to segment the lumen, and a
model-less approach is presented to control the movement in constrained
environments. The proposed robot is validated in anatomical phantoms in
different path configurations. We analyze the movement of the robot using
different metrics such as task completion time, smoothness, error in the
steady-state, and mean and maximum error. We show that our method can navigate
safely in hollow environments and under conditions different from those on
which the network was originally trained.
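The abstract describes a two-part pipeline: a CNN segments the lumen in the endoscopic image, and a model-less control law steers the soft robot so that the lumen stays centred in the view. The paper's code is not included here, so the sketch below is only illustrative; the segmentation model interface, the proportional gain, and the two-axis bending command are assumptions, not the authors' implementation.

```python
# Illustrative sketch: segmentation-driven, model-less visual servoing that
# keeps the detected lumen centred in the endoscopic image. `model` stands in
# for a lumen-segmentation CNN; the gain and command convention are assumed.
import numpy as np
import torch


def lumen_center(mask: np.ndarray) -> np.ndarray | None:
    """Centroid (u, v) of the binary lumen mask, or None if no lumen is visible."""
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    return np.array([xs.mean(), ys.mean()])


def servo_step(model: torch.nn.Module, frame: np.ndarray, k_p: float = 0.4) -> np.ndarray:
    """One image-based servoing step: segment the lumen, then centre it."""
    with torch.no_grad():
        x = torch.from_numpy(frame).float().permute(2, 0, 1).unsqueeze(0) / 255.0
        mask = model(x).squeeze().numpy() > 0.5          # binary lumen mask
    h, w = frame.shape[:2]
    center = lumen_center(mask)
    if center is None:
        return np.zeros(2)                               # hold pose if the lumen is lost
    # Normalised image-space error, mapped to two bending rates (pitch, yaw).
    error = (center - np.array([w / 2.0, h / 2.0])) / np.array([w / 2.0, h / 2.0])
    return -k_p * error
```

Similarly, the evaluation metrics listed in the abstract (task completion time, smoothness, steady-state error, mean and maximum error) can be computed from a logged tracking-error trace. The dimensionless squared-jerk smoothness proxy below is one common choice and not necessarily the measure used in the paper.

```python
def trajectory_metrics(t: np.ndarray, err: np.ndarray, settle_frac: float = 0.1) -> dict:
    """Generic metrics over a logged tracking-error signal err(t)."""
    n_tail = max(1, int(settle_frac * len(err)))         # samples treated as steady state
    jerk = np.gradient(np.gradient(np.gradient(err, t), t), t)
    amplitude = max(np.ptp(err), 1e-9)
    return {
        "completion_time": float(t[-1] - t[0]),
        "mean_error": float(np.mean(np.abs(err))),
        "max_error": float(np.max(np.abs(err))),
        "steady_state_error": float(np.mean(np.abs(err[-n_tail:]))),
        # Dimensionless squared-jerk integral: lower values indicate smoother motion.
        "smoothness": float(np.trapz(jerk**2, t) * (t[-1] - t[0]) ** 5 / amplitude**2),
    }
```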
Related papers
- LPAC: Learnable Perception-Action-Communication Loops with Applications
to Coverage Control [80.86089324742024]
We propose a learnable Perception-Action-Communication (LPAC) architecture for the coverage control problem.
A CNN processes localized perception; a graph neural network (GNN) facilitates communication between robots.
Evaluations show that the LPAC models outperform standard decentralized and centralized coverage control algorithms.
arXiv Detail & Related papers (2024-01-10T00:08:00Z)
- Robotic Navigation Autonomy for Subretinal Injection via Intelligent
Real-Time Virtual iOCT Volume Slicing [88.99939660183881]
We propose a framework for autonomous robotic navigation for subretinal injection.
Our method consists of instrument pose estimation, an online registration between the robotic and the iOCT system, and trajectory planning tailored for navigation to an injection target.
Our experiments on ex-vivo porcine eyes demonstrate the precision and repeatability of the method.
arXiv Detail & Related papers (2023-01-17T21:41:21Z)
- Collective Intelligence for 2D Push Manipulations with Mobile Robots [18.937030864563752]
We show that by distilling a planner from a differentiable soft-body physics simulator into an attention-based neural network, our multi-robot push manipulation system achieves better performance than baselines.
Our system also generalizes to configurations not seen during training and is able to adapt toward task completion when external disturbances and environmental changes are applied.
arXiv Detail & Related papers (2022-11-28T08:48:58Z)
- Learning Deep Sensorimotor Policies for Vision-based Autonomous Drone
Racing [52.50284630866713]
Existing systems often require hand-engineered components for state estimation, planning, and control.
This paper tackles the vision-based autonomous-drone-racing problem by learning deep sensorimotor policies.
arXiv Detail & Related papers (2022-10-26T19:03:17Z)
- Gesture2Path: Imitation Learning for Gesture-aware Navigation [54.570943577423094]
We present Gesture2Path, a novel social navigation approach that combines image-based imitation learning with model-predictive control.
We deploy our method on real robots and showcase the effectiveness of our approach in four gesture-navigation scenarios.
arXiv Detail & Related papers (2022-09-19T23:05:36Z)
- Augmented reality navigation system for visual prosthesis [67.09251544230744]
We propose an augmented reality navigation system for visual prosthesis that incorporates reactive navigation and path planning software.
It consists of four steps: locating the subject on a map, planning the subject's trajectory, showing it to the subject, and re-planning to avoid obstacles.
Results show how our augmented navigation system improves navigation performance by reducing the time and distance needed to reach the goals, and significantly reduces the number of obstacle collisions.
arXiv Detail & Related papers (2021-09-30T09:41:40Z)
- Where is my hand? Deep hand segmentation for visual self-recognition in
humanoid robots [129.46920552019247]
We propose the use of a Convolutional Neural Network (CNN) to segment the robot hand from an image in an egocentric view.
We fine-tuned the Mask R-CNN network for the specific task of segmenting the hand of the humanoid robot Vizzy.
arXiv Detail & Related papers (2021-02-09T10:34:32Z)
- Autonomously Navigating a Surgical Tool Inside the Eye by Learning from
Demonstration [28.720332497794292]
We propose to automate the tool-navigation task by learning to mimic expert demonstrations of the task.
A deep network is trained to imitate expert trajectories toward various locations on the retina based on recorded visual servoing to a given goal specified by the user.
We show that the network can reliably navigate a needle surgical tool to various desired locations with an average accuracy of 137 microns in physical experiments and 94 microns in simulation.
arXiv Detail & Related papers (2020-11-16T08:30:02Z)
- Robot Perception enables Complex Navigation Behavior via Self-Supervised
Learning [23.54696982881734]
We propose an approach to unify successful robot perception systems for active target-driven navigation tasks via reinforcement learning (RL).
Our method temporally incorporates compact motion and visual perception data, directly obtained using self-supervision from a single image sequence.
We demonstrate our approach on two real-world driving datasets, KITTI and Oxford RobotCar, using the new interactive CityLearn framework.
arXiv Detail & Related papers (2020-06-16T07:45:47Z)
- A Spiking Neural Network Emulating the Structure of the Oculomotor
System Requires No Learning to Control a Biomimetic Robotic Head [0.0]
A neuromorphic oculomotor controller is placed at the heart of our in-house biomimetic robotic head prototype.
The controller is unique in the sense that all data are encoded and processed by a spiking neural network (SNN).
We report the robot's target tracking ability, demonstrate that its eye kinematics are similar to those reported in human eye studies, and show that biologically-constrained learning can be used to further refine its performance.
arXiv Detail & Related papers (2020-02-18T13:03:06Z)
This list is automatically generated from the titles and abstracts of the papers on this site.