Self-Supervised Endoscopic Image Key-Points Matching
- URL: http://arxiv.org/abs/2208.11424v1
- Date: Wed, 24 Aug 2022 10:47:21 GMT
- Title: Self-Supervised Endoscopic Image Key-Points Matching
- Authors: Manel Farhat, Houda Chaabouni-Chouayakh, and Achraf Ben-Hamadou
- Abstract summary: This paper proposes a novel self-supervised approach for endoscopic image matching based on deep learning techniques.
Our method outperforms standard hand-crafted local feature descriptors in terms of precision and recall.
- Score: 1.3764085113103222
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Feature matching and finding correspondences between endoscopic
images is a key step in many clinical applications, such as patient follow-up
and the generation of panoramic images from clinical sequences for fast
anomaly localization. Nonetheless, the high texture variability present in
endoscopic images makes the development of robust and accurate feature
matching a challenging task. Recently, learned features extracted via
convolutional neural networks (CNNs) have gained traction in a wide range of
computer vision tasks. However, these methods follow a supervised learning
scheme in which a large amount of annotated data is required to reach good
performance, and such data are generally not available for medical databases.
To overcome this limitation related to labeled-data scarcity, the
self-supervised learning paradigm has recently shown great success in a
number of applications. This paper proposes a novel self-supervised approach
for endoscopic image matching based on deep learning techniques. Our method
outperforms standard hand-crafted local feature descriptors in terms of
precision and recall. Furthermore, our self-supervised descriptor achieves
competitive performance, in terms of precision and matching score, compared
to a selection of state-of-the-art supervised deep learning methods.
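To make the general recipe concrete, the following is a minimal PyTorch sketch of how a self-supervised patch descriptor for endoscopic key-point matching could be trained and used. The architecture, the augmentation-based positive sampling, the triplet objective, and the ratio-test matcher are illustrative assumptions, not the authors' exact pipeline.

```python
# Hypothetical sketch: a self-supervised patch descriptor trained with
# augmentation-generated positives, then nearest-neighbour matching.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchDescriptor(nn.Module):
    """Maps a 32x32 grayscale key-point patch to an L2-normalised descriptor."""
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, dim),
        )

    def forward(self, x):
        return F.normalize(self.net(x), dim=-1)

def self_supervised_step(model, patches, augment, optimizer, margin=0.5):
    """Positives are augmented views of the same patch (e.g. random homography
    and photometric jitter); negatives are the other patches in the batch."""
    anchors = model(patches)
    positives = model(augment(patches))
    dists = torch.cdist(anchors, positives)                   # (B, B) distances
    pos = dists.diag()                                        # matching pairs
    neg = (dists + 1e3 * torch.eye(len(patches))).min(dim=1).values  # hardest in-batch negative
    loss = F.relu(pos - neg + margin).mean()                  # triplet margin loss
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()

def match_keypoints(desc_a, desc_b, ratio=0.8):
    """Nearest-neighbour matching with Lowe's ratio test; returns index pairs."""
    d = torch.cdist(desc_a, desc_b)
    two_nn = d.topk(2, dim=1, largest=False).values
    best = d.argmin(dim=1)
    keep = two_nn[:, 0] < ratio * two_nn[:, 1]
    return [(i, int(best[i])) for i in keep.nonzero().flatten().tolist()]
```

Precision and recall would then be computed by comparing the returned pairs against ground-truth correspondences, for instance those induced by a known transformation between the two views.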
Related papers
- RIDE: Self-Supervised Learning of Rotation-Equivariant Keypoint Detection and Invariant Description for Endoscopy [83.4885991036141]
RIDE is a learning-based method for rotation-equivariant detection and invariant description.
It is trained in a self-supervised manner on a large curated collection of endoscopic images.
It sets a new state-of-the-art performance on matching and relative pose estimation tasks.
arXiv Detail & Related papers (2023-09-18T08:16:30Z)
- Graph Self-Supervised Learning for Endoscopic Image Matching [1.8275108630751844]
We propose a novel self-supervised approach that combines Convolutional Neural Networks for capturing local visual appearance and attention-based Graph Neural Networks for modeling spatial relationships between key-points.
Our approach is trained in a fully self-supervised scheme without the need for labeled data.
Our approach outperforms state-of-the-art handcrafted and deep learning-based methods, demonstrating exceptional performance in terms of precision rate (1.0) and matching score (99.3%).
arXiv Detail & Related papers (2023-06-19T19:53:41Z)
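A minimal sketch of the idea in the entry above (a CNN for local appearance combined with attention-based message passing between key-points). Using a generic multi-head attention layer in place of the authors' specific graph neural network is an assumption for illustration only.

```python
# Hypothetical sketch: CNN patch features refined by attention over key-points.
import torch
import torch.nn as nn
import torch.nn.functional as F

class KeypointGraphEncoder(nn.Module):
    def __init__(self, dim=128, heads=4):
        super().__init__()
        self.cnn = nn.Sequential(                       # local visual appearance
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.pos_mlp = nn.Linear(2, dim)                # encode (x, y) key-point location
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, patches, xy):
        # patches: (N, 1, 32, 32) key-point patches, xy: (N, 2) coordinates
        nodes = self.cnn(patches) + self.pos_mlp(xy)    # (N, dim) node features
        nodes = nodes.unsqueeze(0)                      # one image = one "graph"
        ctx, _ = self.attn(nodes, nodes, nodes)         # message passing between key-points
        return F.normalize((nodes + ctx).squeeze(0), dim=-1)
```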
- Weakly Supervised Intracranial Hemorrhage Segmentation using Head-Wise Gradient-Infused Self-Attention Maps from a Swin Transformer in Categorical Learning [0.6269243524465492]
Intracranial hemorrhage (ICH) is a life-threatening medical emergency that requires timely diagnosis and accurate treatment.
Deep learning techniques have emerged as the leading approach for medical image analysis and processing.
We introduce a novel weakly supervised method for ICH segmentation, utilizing a Swin transformer trained on an ICH classification task with categorical labels.
arXiv Detail & Related papers (2023-04-11T00:17:34Z)
- Intelligent Masking: Deep Q-Learning for Context Encoding in Medical Image Analysis [48.02011627390706]
We develop a novel self-supervised approach that occludes targeted regions to improve the pre-training procedure.
We show that training the agent against the prediction model can significantly improve the semantic features extracted for downstream classification tasks.
arXiv Detail & Related papers (2022-03-25T19:05:06Z)
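The entry above pits a masking agent against a prediction model. The sketch below is a heavily simplified, hypothetical rendering of that idea: a small Q-network chooses which grid cell to occlude and is rewarded when the reconstruction model fails to restore it. The grid size, networks, and reward definition are all assumptions, not the paper's design.

```python
# Hypothetical sketch: an agent masks the region that is hardest to predict.
import torch
import torch.nn as nn
import torch.nn.functional as F

GRID = 4  # the image is divided into a 4x4 grid of candidate regions

q_net = nn.Sequential(nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                      nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                      nn.Linear(16, GRID * GRID))             # one Q-value per cell
inpainter = nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(32, 1, 3, padding=1))     # context-encoder stand-in

def adversarial_masking_step(img, q_opt, rec_opt, eps=0.1):
    """img: (B, 1, H, W) batch of unlabeled images."""
    B, _, H, W = img.shape
    q = q_net(img)                                             # (B, GRID*GRID)
    explore = torch.rand(B) < eps                              # epsilon-greedy action choice
    action = torch.where(explore, torch.randint(0, GRID * GRID, (B,)), q.argmax(dim=1))
    masked, ch, cw = img.clone(), H // GRID, W // GRID
    for b, a in enumerate(action.tolist()):
        r, c = divmod(a, GRID)
        masked[b, :, r*ch:(r+1)*ch, c*cw:(c+1)*cw] = 0.0       # occlude the chosen region
    recon = inpainter(masked)
    rec_loss = F.mse_loss(recon, img)                          # train the prediction model
    rec_opt.zero_grad(); rec_loss.backward(); rec_opt.step()
    reward = (recon.detach() - img).pow(2).mean(dim=(1, 2, 3)) # hard-to-predict = high reward
    q_loss = F.mse_loss(q.gather(1, action.unsqueeze(1)).squeeze(1), reward)
    q_opt.zero_grad(); q_loss.backward(); q_opt.step()
    return rec_loss.item(), q_loss.item()
```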
- Voice-assisted Image Labelling for Endoscopic Ultrasound Classification using Neural Networks [48.732863591145964]
We propose a multi-modal convolutional neural network architecture that labels endoscopic ultrasound (EUS) images from raw verbal comments provided by a clinician during the procedure.
Our results show a prediction accuracy of 76% at image level on a dataset with 5 different labels.
arXiv Detail & Related papers (2021-10-12T21:22:24Z)
- Few-shot Medical Image Segmentation using a Global Correlation Network with Discriminative Embedding [60.89561661441736]
We propose a novel method for few-shot medical image segmentation.
We construct our few-shot image segmentor using a deep convolutional network trained episodically.
We enhance the discriminability of the deep embedding to encourage clustering of the feature domains of the same class.
arXiv Detail & Related papers (2020-12-10T04:01:07Z)
- Self supervised contrastive learning for digital histopathology [0.0]
We use a contrastive self-supervised learning method called SimCLR that achieved state-of-the-art results on natural-scene images.
We find that combining multiple multi-organ datasets with different types of staining and resolution properties improves the quality of the learned features.
Linear classifiers trained on top of the learned features show that networks pretrained on digital histopathology datasets perform better than ImageNet pretrained networks.
arXiv Detail & Related papers (2020-11-27T19:18:45Z)
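For reference, the core of SimCLR is a contrastive (NT-Xent) loss over two augmented views of each image in a batch; a minimal, self-contained version (not the paper's training code) is sketched below.

```python
# Minimal NT-Xent (SimCLR) loss over two augmented views of the same batch.
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.5):
    """z1, z2: (B, D) projections of two augmented views of the same images."""
    B = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)          # (2B, D)
    sim = z @ z.t() / temperature                                # pairwise similarities
    sim = sim.masked_fill(torch.eye(2 * B, dtype=torch.bool), float('-inf'))  # drop self-pairs
    targets = torch.cat([torch.arange(B, 2 * B), torch.arange(B)])
    return F.cross_entropy(sim, targets)                         # positive = the other view
```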
- Self-Loop Uncertainty: A Novel Pseudo-Label for Semi-Supervised Medical Image Segmentation [30.644905857223474]
We propose a semi-supervised approach to train neural networks with limited labeled data and a large quantity of unlabeled images for medical image segmentation.
A novel pseudo-label (namely self-loop uncertainty) is adopted as the ground-truth for the unlabeled images to augment the training set and boost the segmentation accuracy.
arXiv Detail & Related papers (2020-07-20T02:52:07Z)
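The sketch below shows one generic way such a pseudo-label can be realized: aggregating predictions over several self-supervised transformations and weighting by the resulting uncertainty. It is a stand-in for, not a reproduction of, the paper's self-loop procedure, and it assumes a fully convolutional segmentation model.

```python
# Generic sketch: soft pseudo-label and per-pixel confidence for an unlabeled image.
import torch

@torch.no_grad()
def pseudo_label(model, image, passes=8):
    """image: (1, C, H, W). Returns a soft label and a per-pixel confidence weight."""
    probs = []
    for k in range(passes):
        aug = torch.rot90(image, k % 4, dims=(2, 3))          # cheap self-supervised views
        p = model(aug).softmax(dim=1)
        probs.append(torch.rot90(p, -(k % 4), dims=(2, 3)))   # undo the rotation
    probs = torch.stack(probs).mean(dim=0)                    # (1, K, H, W) soft label
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1, keepdim=True)
    weight = torch.exp(-entropy)                              # low uncertainty -> high weight
    return probs, weight

# Unlabeled images then contribute weight * CE(model(image), soft_label) to the
# training loss alongside the usual supervised term on the labeled images.
```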
- Towards Unsupervised Learning for Instrument Segmentation in Robotic Surgery with Cycle-Consistent Adversarial Networks [54.00217496410142]
We propose an unpaired image-to-image translation approach whose goal is to learn the mapping between an input endoscopic image and a corresponding annotation.
Our approach makes it possible to train image segmentation models without the need to acquire expensive annotations.
We test the proposed method on the Endovis 2017 challenge dataset and show that it is competitive with supervised segmentation methods.
arXiv Detail & Related papers (2020-07-09T01:39:39Z)
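The objective underlying this kind of unpaired translation can be summarized by a CycleGAN-style loss; the sketch below assumes generic generator and discriminator modules and least-squares adversarial terms, rather than the paper's exact formulation.

```python
# Sketch of the cycle-consistency objective for unpaired image -> annotation
# translation with two generators and two discriminators (CycleGAN-style).
import torch
import torch.nn.functional as F

def cycle_losses(g_im2mask, g_mask2im, d_mask, d_img, images, masks, lam=10.0):
    """images and masks are unpaired batches drawn from separate pools."""
    fake_masks = g_im2mask(images)
    fake_imgs = g_mask2im(masks)
    # Adversarial terms (least-squares GAN form) pushing the fakes towards "real".
    adv = ((d_mask(fake_masks) - 1) ** 2).mean() + ((d_img(fake_imgs) - 1) ** 2).mean()
    # Cycle consistency: image -> mask -> image and mask -> image -> mask.
    cyc = F.l1_loss(g_mask2im(fake_masks), images) + F.l1_loss(g_im2mask(fake_imgs), masks)
    return adv + lam * cyc
```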
- Deep Q-Network-Driven Catheter Segmentation in 3D US by Hybrid Constrained Semi-Supervised Learning and Dual-UNet [74.22397862400177]
We propose a novel catheter segmentation approach that requires fewer annotations than supervised learning methods.
Our scheme uses deep Q-learning as a pre-localization step, which avoids voxel-level annotation.
With the detected catheter, a patch-based Dual-UNet is applied to segment the catheter in 3D volumetric data.
arXiv Detail & Related papers (2020-06-25T21:10:04Z)
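A minimal sketch of the two-stage idea described above: a coarse pre-localization provides a centre point, a patch around it is segmented, and the result is pasted back into the volume. The cropping logic and the segmentation-network interface are illustrative assumptions, and boundary handling is omitted.

```python
# Sketch: patch-based segmentation around a coarse (e.g. DQN-produced) localization.
import torch

def segment_around(centre, volume, seg_net, size=64):
    """volume: (1, 1, D, H, W); centre: (z, y, x) from the pre-localization step."""
    z, y, x = centre
    h = size // 2
    patch = volume[..., z-h:z+h, y-h:y+h, x-h:x+h]             # crop around the catheter
    mask_patch = seg_net(patch).argmax(dim=1)                  # patch-level segmentation
    full = torch.zeros_like(volume[:, 0], dtype=torch.long)    # paste back into the volume
    full[..., z-h:z+h, y-h:y+h, x-h:x+h] = mask_patch
    return full
```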
- Breast lesion segmentation in ultrasound images with limited annotated data [2.905751301655124]
We propose the use of simulated US images and natural images as auxiliary datasets in order to pre-train our segmentation network.
We show that fine-tuning the pre-trained network improves the Dice score by 21% compared to training from scratch.
arXiv Detail & Related papers (2020-01-21T03:34:42Z)
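The transfer-learning recipe described above amounts to pre-training on auxiliary data and then fine-tuning on the small annotated set; a minimal sketch follows, with the checkpoint path, loss, and learning rate as placeholder assumptions.

```python
# Sketch: fine-tune a segmentation network that was pre-trained on auxiliary
# (e.g. simulated US or natural-image) data, using the small annotated set.
import torch

def fine_tune(model, ckpt_path, train_loader, epochs=20, lr=1e-4):
    """ckpt_path: weights saved after pre-training on the auxiliary datasets."""
    model.load_state_dict(torch.load(ckpt_path), strict=False)  # start from pre-training
    opt = torch.optim.Adam(model.parameters(), lr=lr)           # small LR for fine-tuning
    loss_fn = torch.nn.BCEWithLogitsLoss()                      # binary lesion mask
    for _ in range(epochs):
        for images, masks in train_loader:
            opt.zero_grad()
            loss = loss_fn(model(images), masks)
            loss.backward()
            opt.step()
    return model
```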
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences.