The secret role of undesired physical effects in accurate shape sensing
with eccentric FBGs
- URL: http://arxiv.org/abs/2210.16316v1
- Date: Fri, 28 Oct 2022 09:07:08 GMT
- Title: The secret role of undesired physical effects in accurate shape sensing
with eccentric FBGs
- Authors: Samaneh Manavi Roodsari, Sara Freund, Martin Angelmahr, Georg Rauter,
Azhar Zam, Wolfgang Schade, and Philippe C. Cattin
- Abstract summary: Eccentric fiber Bragg gratings (FBG) are cheap and easy-to-fabricate shape sensors that are often interrogated with simple setups.
Here, we present a novel technique to overcome the limitations of such low-cost, intensity-based interrogation and provide accurate and precise shape estimation.
- Score: 1.0805335573008565
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Fiber optic shape sensors have enabled unique advances in various navigation
tasks, from medical tool tracking to industrial applications. Eccentric fiber
Bragg gratings (FBG) are cheap and easy-to-fabricate shape sensors that are
often interrogated with simple setups. However, using low-cost interrogation
systems for such intensity-based quasi-distributed sensors introduces further
complications to the sensor's signal. Therefore, eccentric FBGs have not been
able to accurately estimate complex multi-bend shapes. Here, we present a novel
technique to overcome these limitations and provide accurate and precise shape
estimation in eccentric FBG sensors. We investigate the most important
bending-induced effects in curved optical fibers that are usually eliminated in
intensity-based fiber sensors. These effects contain shape deformation
information with a higher spatial resolution that we are now able to extract
using deep learning techniques. We design a deep learning model based on a
convolutional neural network that is trained to predict shapes given the
sensor's spectra. We also provide a visual explanation, highlighting wavelength
elements whose intensities are more relevant in making shape predictions. These
findings imply that deep learning techniques benefit from the bending-induced
effects that impact the desired signal in a complex manner. This is the first
step toward cheap yet accurate fiber shape sensing solutions.
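To make the described pipeline concrete, below is a minimal, hypothetical sketch (not the authors' released code) of a 1D convolutional network that regresses a discretized fiber shape from an eccentric-FBG spectrum, followed by a simple gradient-based saliency computation as one possible stand-in for the paper's wavelength-level visual explanation. The spectrum length, number of shape points, layer sizes, and training details are illustrative assumptions, not values from the paper.
```python
# Hypothetical sketch, assuming a PyTorch setup: a 1D CNN mapping an
# eccentric-FBG transmission spectrum to a discretized 2D shape, plus a
# gradient-based saliency map over wavelength bins. All sizes are illustrative.
import torch
import torch.nn as nn

N_WAVELENGTHS = 1024   # assumed number of spectral samples per measurement
N_SHAPE_POINTS = 50    # assumed number of (x, y) points describing the fiber shape

class SpectrumToShapeCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(32),
        )
        self.regressor = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 32, 256), nn.ReLU(),
            nn.Linear(256, 2 * N_SHAPE_POINTS),   # (x, y) per shape point
        )

    def forward(self, spectrum):
        # spectrum: (batch, 1, N_WAVELENGTHS) intensity values
        return self.regressor(self.features(spectrum)).view(-1, N_SHAPE_POINTS, 2)

model = SpectrumToShapeCNN()

# One training step against ground-truth shapes (placeholder random data here;
# in practice the labels would come from a reference shape measurement).
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
spectra = torch.rand(8, 1, N_WAVELENGTHS)          # placeholder batch of spectra
true_shapes = torch.rand(8, N_SHAPE_POINTS, 2)     # placeholder shape labels
optimizer.zero_grad()
loss = criterion(model(spectra), true_shapes)
loss.backward()
optimizer.step()

# Gradient-based saliency: which wavelength elements most influence the
# predicted shape (a simple proxy for the paper's visual explanation).
x = spectra[:1].clone().requires_grad_(True)
model(x).sum().backward()
saliency = x.grad.abs().squeeze()                  # importance per wavelength bin
top_bins = saliency.topk(10).indices               # most influential spectral samples
```
The saliency vector here simply ranks wavelength bins by gradient magnitude; the paper's actual explanation method may differ, but the idea of attributing shape predictions to individual spectral elements is the same.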
Related papers
- Robust Depth Enhancement via Polarization Prompt Fusion Tuning [112.88371907047396]
We present a framework that leverages polarization imaging to improve inaccurate depth measurements from various depth sensors.
Our method first adopts a learning-based strategy where a neural network is trained to estimate a dense and complete depth map from polarization data and a sensor depth map from different sensors.
To further improve the performance, we propose a Polarization Prompt Fusion Tuning (PPFT) strategy to effectively utilize RGB-based models pre-trained on large-scale datasets.
arXiv Detail & Related papers (2024-04-05T17:55:33Z) - GelFlow: Self-supervised Learning of Optical Flow for Vision-Based
Tactile Sensor Displacement Measurement [23.63445828014235]
This study proposes a self-supervised optical flow method based on deep learning to achieve high accuracy in displacement measurement for vision-based tactile sensors.
We trained the proposed self-supervised network using an open-source dataset and compared it with traditional and deep learning-based optical flow methods.
arXiv Detail & Related papers (2023-09-13T05:48:35Z) - Multi-Modal Neural Radiance Field for Monocular Dense SLAM with a
Light-Weight ToF Sensor [58.305341034419136]
We present the first dense SLAM system with a monocular camera and a light-weight ToF sensor.
We propose a multi-modal implicit scene representation that supports rendering both the signals from the RGB camera and light-weight ToF sensor.
Experiments demonstrate that our system well exploits the signals of light-weight ToF sensors and achieves competitive results.
arXiv Detail & Related papers (2023-08-28T07:56:13Z) - Multi-mode fiber reservoir computing overcomes shallow neural networks
classifiers [8.891157811906407]
We recast multi-mode optical fibers into random hardware projectors, transforming an input dataset into a speckled image set.
We find that the hardware operates in a flatter region of the loss landscape when trained on fiber data, which aligns with the current theory of deep neural networks.
arXiv Detail & Related papers (2022-10-10T14:55:02Z) - On Learning the Invisible in Photoacoustic Tomography with Flat
Directionally Sensitive Detector [0.27074235008521236]
In this paper, we focus on the second type caused by a varying sensitivity of the sensor to the incoming wavefront direction.
The visible ranges, in image and data domains, are related by the wavefront direction mapping.
We optimally combine fast approximate operators with tailored deep neural network architectures into efficient learned reconstruction methods.
arXiv Detail & Related papers (2022-04-21T09:57:01Z) - PhysFormer: Facial Video-based Physiological Measurement with Temporal
Difference Transformer [55.936527926778695]
Recent deep learning approaches focus on mining subtle rPPG clues using convolutional neural networks with limited spatio-temporal receptive fields.
In this paper, we propose the PhysFormer, an end-to-end video transformer based architecture.
arXiv Detail & Related papers (2021-11-23T18:57:11Z) - Sensor-Guided Optical Flow [53.295332513139925]
This paper proposes a framework to guide an optical flow network with external cues to achieve superior accuracy on known or unseen domains.
We show how these can be obtained by combining depth measurements from active sensors with geometry and hand-crafted optical flow algorithms.
arXiv Detail & Related papers (2021-09-30T17:59:57Z) - Adaptive Latent Space Tuning for Non-Stationary Distributions [62.997667081978825]
We present a method for adaptive tuning of the low-dimensional latent space of deep encoder-decoder style CNNs.
We demonstrate our approach for predicting the properties of a time-varying charged particle beam in a particle accelerator.
arXiv Detail & Related papers (2021-05-08T03:50:45Z) - GEM: Glare or Gloom, I Can Still See You -- End-to-End Multimodal Object
Detector [11.161639542268015]
We propose sensor-aware multi-modal fusion strategies for 2D object detection in harsh-lighting conditions.
Our network learns to estimate the measurement reliability of each sensor modality in the form of scalar weights and masks.
We show that the proposed strategies outperform the existing state-of-the-art methods on the FLIR-Thermal dataset.
arXiv Detail & Related papers (2021-02-24T14:56:37Z) - Monocular Depth Estimation for Soft Visuotactile Sensors [24.319343057803973]
We investigate the application of state-of-the-art monocular depth estimation to infer dense internal (tactile) depth maps directly from an internal single small IR imaging sensor.
We show that deep networks typically used for long-range depth estimation (1-100m) can be effectively trained for precise predictions at a much shorter range (1-100mm) inside a mostly textureless deformable fluid-filled sensor.
We propose a simple supervised learning process to train an object-agnostic network requiring less than 10 random poses in contact for less than 10 seconds for a small set of diverse objects.
arXiv Detail & Related papers (2021-01-05T17:51:11Z) - Deep Soft Procrustes for Markerless Volumetric Sensor Alignment [81.13055566952221]
In this work, we improve markerless data-driven correspondence estimation to achieve more robust multi-sensor spatial alignment.
We incorporate geometric constraints in an end-to-end manner into a typical segmentation based model and bridge the intermediate dense classification task with the targeted pose estimation one.
Our model is experimentally shown to achieve similar results with marker-based methods and outperform the markerless ones, while also being robust to the pose variations of the calibration structure.
arXiv Detail & Related papers (2020-03-23T10:51:32Z)