A Cranial-Feature-Based Registration Scheme for Robotic Micromanipulation Using a Microscopic Stereo Camera System
- URL: http://arxiv.org/abs/2410.18630v1
- Date: Thu, 24 Oct 2024 10:40:38 GMT
- Title: A Cranial-Feature-Based Registration Scheme for Robotic Micromanipulation Using a Microscopic Stereo Camera System
- Authors: Xiaofeng Lin, Saúl Alexis Heredia Pérez, Kanako Harada
- Abstract summary: The study introduces a microscopic stereo camera system (MSCS) enhanced by a linear model for depth perception.
A precise registration scheme is developed for the partially exposed mouse cranial surface, employing a CNN-based constrained and colorized registration strategy.
These methods are integrated with the MSCS for robotic micromanipulation tasks.
- Score: 3.931620400433609
- Abstract: Biological specimens exhibit significant variations in size and shape, challenging autonomous robotic manipulation. We focus on the mouse skull window creation task to illustrate these challenges. The study introduces a microscopic stereo camera system (MSCS) enhanced by a linear model for depth perception. Alongside this, a precise registration scheme is developed for the partially exposed mouse cranial surface, employing a CNN-based constrained and colorized registration strategy. These methods are integrated with the MSCS for robotic micromanipulation tasks. The MSCS demonstrated a high precision of 0.10 mm $\pm$ 0.02 mm measured in a step height experiment and real-time performance of 30 FPS in 3D reconstruction. The registration scheme proved its precision, with a translational error of 1.13 mm $\pm$ 0.31 mm and a rotational error of 3.38$^{\circ}$ $\pm$ 0.89$^{\circ}$ tested on 105 continuous frames at an average speed of 1.60 FPS. This study presents the application of an MSCS and a novel registration scheme to enhance the precision and accuracy of robotic micromanipulation in scientific and surgical settings. The innovations presented here offer an automation methodology for handling the challenges of microscopic manipulation, paving the way for more accurate, efficient, and less invasive procedures in various fields of microsurgery and scientific research.
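The abstract states only that a linear model enhances the MSCS's depth perception; its exact formulation is not given. Below is a minimal sketch, assuming a linear mapping from inverse disparity to metric depth calibrated against known step heights; all numbers and variable names are illustrative, not the paper's.

```python
import numpy as np

# Hypothetical calibration data: pixel disparities measured at known depths
# (e.g., from a step-height target). Values are illustrative only.
disparity = np.array([120.0, 95.0, 78.0, 66.0, 57.0])   # pixels
depth_mm  = np.array([10.0, 12.5, 15.0, 17.5, 20.0])    # millimetres

# Assume depth is approximately linear in inverse disparity: Z = a * (1/d) + b
X = np.column_stack([1.0 / disparity, np.ones_like(disparity)])
a, b = np.linalg.lstsq(X, depth_mm, rcond=None)[0]

def depth_from_disparity(d_px: np.ndarray) -> np.ndarray:
    """Map a disparity measurement (pixels) to depth (mm) with the fitted model."""
    return a / d_px + b

print(depth_from_disparity(np.array([100.0, 70.0])))
```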
Related papers
- Robotic Arm Platform for Multi-View Image Acquisition and 3D Reconstruction in Minimally Invasive Surgery [40.55055153469741]
This work introduces a robotic arm platform for efficient multi-view image acquisition and precise 3D reconstruction in minimally invasive surgery settings.
We adapted a laparoscope to a robotic arm and captured ex-vivo images of several ovine organs across varying lighting conditions.
We employed recently released learning-based feature matchers combined with COLMAP to produce our reconstructions.
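As a rough illustration of the reconstruction backend, the sketch below drives the stock COLMAP command-line pipeline (SIFT features, exhaustive matching, incremental mapping) from Python. The paper replaces the matching stage with learning-based feature matchers, which would write their own keypoints and matches into the database; the paths here are placeholders.

```python
import subprocess
from pathlib import Path

# Paths are illustrative; point them at the captured multi-view image set.
work = Path("reconstruction")
db, images, sparse = work / "database.db", work / "images", work / "sparse"
sparse.mkdir(parents=True, exist_ok=True)

def run(*args: str) -> None:
    """Invoke the COLMAP command-line interface and fail loudly on errors."""
    subprocess.run(["colmap", *args], check=True)

# Stock COLMAP pipeline; a learned matcher would replace the first two stages.
run("feature_extractor", "--database_path", str(db), "--image_path", str(images))
run("exhaustive_matcher", "--database_path", str(db))
run("mapper", "--database_path", str(db), "--image_path", str(images),
    "--output_path", str(sparse))
```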
arXiv Detail & Related papers (2024-10-15T15:42:30Z) - Enhancing Precision in Tactile Internet-Enabled Remote Robotic Surgery: Kalman Filter Approach [0.0]
This paper presents a Kalman Filter (KF) based computationally efficient position estimation method.
The study also assumes no prior knowledge of the robotic arm's dynamic system model.
We investigate the effectiveness of KF to determine the position of the Patient Side Manipulator (PSM) under simulated network conditions.
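A minimal sketch of the kind of Kalman filter this describes, assuming a constant-velocity state model per axis since the abstract states that no prior dynamic model is used; the noise parameters are illustrative guesses, not the paper's values.

```python
import numpy as np

dt = 0.01  # sample period (s); illustrative

# Constant-velocity model per axis: state = [position, velocity].
F = np.array([[1.0, dt], [0.0, 1.0]])     # state transition
H = np.array([[1.0, 0.0]])                # only position is measured
Q = 1e-4 * np.eye(2)                      # process noise (tuning guess)
R = np.array([[1e-2]])                    # measurement noise (tuning guess)

x = np.zeros(2)        # initial state estimate
P = np.eye(2)          # initial covariance

def kf_step(z: float) -> float:
    """One predict/update cycle given a (possibly noisy) position sample z."""
    global x, P
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    y = z - H @ x                          # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x[0]                            # filtered position estimate

for z in [0.00, 0.02, 0.05, 0.04, 0.08]:   # simulated noisy PSM position samples
    print(kf_step(z))
```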
arXiv Detail & Related papers (2024-06-06T20:56:53Z) - Next-generation Surgical Navigation: Marker-less Multi-view 6DoF Pose Estimation of Surgical Instruments [66.74633676595889]
We present a multi-camera capture setup consisting of static and head-mounted cameras.
Second, we publish a multi-view RGB-D video dataset of ex-vivo spine surgeries, captured in a surgical wet lab and a real operating theatre.
Third, we evaluate three state-of-the-art single-view and multi-view methods for the task of 6DoF pose estimation of surgical instruments.
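A small sketch of the standard error metrics such an evaluation typically reports: translation distance and geodesic rotation error between predicted and ground-truth 4x4 instrument poses. The metric choice is an assumption, not taken from the paper.

```python
import numpy as np

def pose_errors(T_pred: np.ndarray, T_gt: np.ndarray):
    """Translation error (same units as T) and geodesic rotation error (degrees)
    between two 4x4 homogeneous instrument poses."""
    t_err = np.linalg.norm(T_pred[:3, 3] - T_gt[:3, 3])
    R_rel = T_pred[:3, :3].T @ T_gt[:3, :3]
    cos_angle = np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)  # numerical safety
    return t_err, np.degrees(np.arccos(cos_angle))

# Illustrative poses: ground truth vs. a slightly offset prediction.
T_gt = np.eye(4)
T_pred = np.eye(4)
T_pred[:3, 3] = [0.5, 0.0, 0.2]   # small translation offset (units arbitrary)
print(pose_errors(T_pred, T_gt))
```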
arXiv Detail & Related papers (2023-05-05T13:42:19Z) - Robotic Navigation Autonomy for Subretinal Injection via Intelligent Real-Time Virtual iOCT Volume Slicing [88.99939660183881]
We propose a framework for autonomous robotic navigation for subretinal injection.
Our method consists of an instrument pose estimation method, an online registration between the robotic and the iOCT systems, and trajectory planning tailored for navigation to an injection target.
Our experiments on ex-vivo porcine eyes demonstrate the precision and repeatability of the method.
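As a hedged illustration of the online registration step, the sketch below aligns corresponding 3D points expressed in the robot and iOCT frames with the classical Kabsch/SVD method; the paper's actual registration procedure may differ, and correspondences are assumed known here.

```python
import numpy as np

def rigid_register(P: np.ndarray, Q: np.ndarray):
    """Least-squares rigid transform (R, t) mapping points P (Nx3, e.g. robot
    frame) onto corresponding points Q (Nx3, e.g. iOCT frame) via Kabsch/SVD."""
    Pc, Qc = P.mean(axis=0), Q.mean(axis=0)
    H = (P - Pc).T @ (Q - Qc)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
    R = Vt.T @ D @ U.T
    t = Qc - R @ Pc
    return R, t

# Illustrative check: recover a known rotation + translation from noisy points.
rng = np.random.default_rng(0)
P = rng.normal(size=(20, 3))
angle = np.radians(10.0)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
Q = P @ R_true.T + np.array([1.0, -0.5, 0.2]) + 0.001 * rng.normal(size=(20, 3))
R_est, t_est = rigid_register(P, Q)
```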
arXiv Detail & Related papers (2023-01-17T21:41:21Z) - Attentive Symmetric Autoencoder for Brain MRI Segmentation [56.02577247523737]
We propose a novel Attentive Symmetric Auto-encoder based on Vision Transformer (ViT) for 3D brain MRI segmentation tasks.
In the pre-training stage, the proposed auto-encoder pays more attention to reconstructing informative patches, as ranked by gradient metrics.
Experimental results show that our proposed attentive symmetric auto-encoder outperforms the state-of-the-art self-supervised learning methods and medical image segmentation models.
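A minimal numpy sketch of gradient-weighted patch reconstruction: patches with larger mean gradient magnitude (the "informative" ones) contribute more to the pre-training loss. The specific gradient metric and weighting are illustrative assumptions, not the paper's definition.

```python
import numpy as np

def gradient_weighted_recon_loss(patches: np.ndarray, recon: np.ndarray) -> float:
    """Weighted MSE over image patches (N, H, W): patches with higher mean
    gradient magnitude contribute more. The weighting is an illustrative choice."""
    gy, gx = np.gradient(patches, axis=(1, 2))
    grad_mag = np.sqrt(gx**2 + gy**2).mean(axis=(1, 2))   # one score per patch
    weights = grad_mag / (grad_mag.sum() + 1e-8)          # normalise to sum to 1
    per_patch_mse = ((patches - recon) ** 2).mean(axis=(1, 2))
    return float((weights * per_patch_mse).sum())

patches = np.random.rand(16, 32, 32).astype(np.float32)
recon = patches + 0.05 * np.random.randn(16, 32, 32).astype(np.float32)
print(gradient_weighted_recon_loss(patches, recon))
```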
arXiv Detail & Related papers (2022-09-19T09:43:19Z) - Fast Autofocusing using Tiny Networks for Digital Holographic Microscopy [0.5057148335041798]
A deep learning (DL) solution is proposed to cast autofocusing as a regression problem and is tested on both experimental and simulated holograms.
Experiments show that the predicted focusing distance $Z_R^{\mathrm{Pred}}$ is inferred with an accuracy of 1.2 $\mu$m.
Models reach state-of-the-art inference times on CPU: less than 25 ms per inference.
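A hedged sketch of casting autofocusing as regression: a tiny CNN (PyTorch) that maps a hologram patch to a single focus distance, trained with an MSE loss. The architecture and data are placeholders, not the paper's network.

```python
import torch
import torch.nn as nn

class TinyFocusNet(nn.Module):
    """Illustrative tiny CNN regressing one focus distance (µm) per hologram patch."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(16, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1)).squeeze(-1)

model = TinyFocusNet()
holograms = torch.randn(4, 1, 128, 128)          # dummy hologram patches
z_true = torch.tensor([10.0, 12.0, 8.5, 11.2])   # dummy focus distances (µm)
loss = nn.functional.mse_loss(model(holograms), z_true)
loss.backward()                                  # one regression training step
```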
arXiv Detail & Related papers (2022-03-15T10:52:58Z) - A parameter refinement method for Ptychography based on Deep Learning concepts [55.41644538483948]
Coarse parametrisation of the propagation distance, position errors, and partial coherence frequently threatens experiment viability.
A modern Deep Learning framework is used to autonomously correct these setup incoherences, thus improving the quality of the ptychography reconstruction.
We tested our system on both synthetic datasets and real data acquired at the TwinMic beamline of the Elettra synchrotron facility.
arXiv Detail & Related papers (2021-05-18T10:15:17Z) - Domain Adaptive Robotic Gesture Recognition with Unsupervised Kinematic-Visual Data Alignment [60.31418655784291]
We propose a novel unsupervised domain adaptation framework which can simultaneously transfer multi-modality knowledge, i.e., both kinematic and visual data, from simulator to real robot.
It remedies the domain gap with enhanced transferable features by exploiting temporal cues in videos and the inherent correlations across modalities for gesture recognition.
Results show that our approach recovers performance with large gains, up to 12.91% in accuracy and 20.16% in F1-score, without using any annotations from the real robot.
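The abstract does not detail the alignment objective, so the sketch below uses a generic RBF-kernel maximum mean discrepancy (MMD) between simulator and real-robot feature batches purely as a stand-in for a feature-alignment loss; it is not the paper's method.

```python
import numpy as np

def rbf_mmd2(Xs: np.ndarray, Xt: np.ndarray, sigma: float = 1.0) -> float:
    """Squared MMD with an RBF kernel between simulator features Xs and
    real-robot features Xt (both N x D); a generic domain-alignment objective."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * sigma**2))
    return float(k(Xs, Xs).mean() + k(Xt, Xt).mean() - 2.0 * k(Xs, Xt).mean())

sim_feat = np.random.randn(64, 32)          # features from simulated gestures
real_feat = np.random.randn(64, 32) + 0.5   # features from the real robot
print(rbf_mmd2(sim_feat, real_feat))        # larger value => larger domain gap
```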
arXiv Detail & Related papers (2021-03-06T09:10:03Z) - Online Body Schema Adaptation through Cost-Sensitive Active Learning [63.84207660737483]
The work was implemented in a simulation environment, using the 7DoF arm of the iCub robot simulator.
A cost-sensitive active learning approach is used to select optimal joint configurations.
The results show that cost-sensitive active learning achieves accuracy similar to the standard active learning approach while roughly halving the executed movement.
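A minimal sketch of a cost-sensitive acquisition rule: each candidate joint configuration is scored by an informativeness proxy divided by the movement cost of reaching it from the current configuration. Both the proxy (predictive variance) and the cost model (joint-space distance) are illustrative assumptions, not the paper's criteria.

```python
import numpy as np

def pick_next_configuration(candidates: np.ndarray,
                            current: np.ndarray,
                            predictive_var: np.ndarray) -> int:
    """Return the index of the candidate maximising informativeness per unit
    of movement cost (both stand-in quantities)."""
    movement_cost = np.linalg.norm(candidates - current, axis=1) + 1e-6
    return int(np.argmax(predictive_var / movement_cost))

rng = np.random.default_rng(1)
candidates = rng.uniform(-1.0, 1.0, size=(100, 7))   # 7-DoF joint configurations
current = np.zeros(7)
predictive_var = rng.uniform(0.0, 1.0, size=100)     # e.g. from a GP or ensemble
print(pick_next_configuration(candidates, current, predictive_var))
```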
arXiv Detail & Related papers (2021-01-26T16:01:02Z) - Efficiently Calibrating Cable-Driven Surgical Robots with RGBD Fiducial Sensing and Recurrent Neural Networks [26.250886014613762]
We propose a novel approach to efficiently calibrate such robots by placing 3D-printed fiducial coordinate frames on the arm and end-effector, which are tracked using RGBD sensing.
With the proposed method, data collection of 1800 samples takes 31 minutes and model training takes under 1 minute.
Results on a test set of reference trajectories suggest that the trained model can reduce the mean tracking error of the physical robot from 2.96 mm to 0.65 mm.
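A hedged sketch of a recurrent calibration model: an LSTM (PyTorch) mapping a sequence of commanded joint values to per-step end-effector corrections, fit against RGBD-measured tracking residuals. The architecture, inputs, and dimensions are placeholders, not the paper's model.

```python
import torch
import torch.nn as nn

class CalibrationLSTM(nn.Module):
    """Illustrative recurrent model: commanded joint trajectories in,
    per-step end-effector position corrections (x, y, z) out."""
    def __init__(self, n_joints: int = 6, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(n_joints, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 3)

    def forward(self, commands: torch.Tensor) -> torch.Tensor:
        out, _ = self.lstm(commands)   # (batch, time, hidden)
        return self.head(out)          # (batch, time, 3) corrections in mm

model = CalibrationLSTM()
commands = torch.randn(8, 50, 6)               # dummy commanded joint trajectories
target_residuals = torch.randn(8, 50, 3)       # RGBD-measured tracking errors (mm)
loss = nn.functional.mse_loss(model(commands), target_residuals)
loss.backward()
```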
arXiv Detail & Related papers (2020-03-19T00:24:56Z) - Limited Angle Tomography for Transmission X-Ray Microscopy Using Deep Learning [12.991428974915795]
Deep learning is applied to limited angle reconstruction in X-ray microscopy for the first time.
The U-Net, a state-of-the-art neural network for biomedical imaging, is trained on synthetic data.
The proposed method remarkably improves the 3-D visualization of the subcellular structures in the chlorella cell.
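A small sketch of how limited-angle training pairs could be synthesised for such a network, using scikit-image's Radon/FBP utilities: the filtered back-projection of a truncated angular range is the artifact-ridden network input, and the phantom is the target. The U-Net itself is omitted, and the phantom and angle choices are illustrative.

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

# Synthetic ground-truth object (stand-in for a cell slice).
image = rescale(shepp_logan_phantom(), 0.25)

full_angles = np.linspace(0.0, 180.0, 180, endpoint=False)
limited_angles = full_angles[full_angles < 120.0]   # missing-wedge acquisition

# Limited-angle sinogram and its filtered back-projection: the FBP result is the
# degraded network input; the phantom above is the training target.
sino_limited = radon(image, theta=limited_angles)
fbp_limited = iradon(sino_limited, theta=limited_angles, output_size=image.shape[0])

print(image.shape, fbp_limited.shape)
```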
arXiv Detail & Related papers (2020-01-08T12:11:19Z)
This list is automatically generated from the titles and abstracts of the papers on this site.