Colonoscopy Coverage Revisited: Identifying Scanning Gaps in Real-Time
- URL: http://arxiv.org/abs/2305.10026v1
- Date: Wed, 17 May 2023 08:12:56 GMT
- Title: Colonoscopy Coverage Revisited: Identifying Scanning Gaps in Real-Time
- Authors: G. Leifman and I. Kligvasser and R. Goldenberg and M. Elad and E. Rivlin
- Abstract summary: Colonoscopy is the most widely used medical technique for preventing Colorectal Cancer by detecting and removing polyps before they become malignant.
Recent studies show that around one quarter of existing polyps are routinely missed.
While some of these do appear in the endoscopist's field of view, others are missed due to partial coverage of the colon.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Colonoscopy is the most widely used medical technique for preventing
Colorectal Cancer by detecting and removing polyps before they become
malignant. Recent studies show that around one quarter of existing polyps are
routinely missed. While some of these do appear in the endoscopist's field of
view, others are missed due to partial coverage of the colon. The task of
detecting and marking unseen regions of the colon has been addressed in recent
work, where the common approach is based on dense 3D reconstruction; this
proves challenging due to the lack of 3D ground truth and periods with poor
visual content. In this paper we propose a novel and complementary method to
detect deficient local coverage in real-time for video segments where a
reliable 3D reconstruction is impossible. Our method aims to identify skips
along the colon caused by the endoscope drifting during time intervals of poor
visibility. The proposed solution consists of two phases. During the first,
time segments with good visibility of the colon and the gaps between them are
identified. During the second phase, a trained model operates on each gap,
answering the question: do we observe the same scene before and after the gap?
If the answer is negative, the endoscopist is alerted and can be directed to
the appropriate area in real-time. The second-phase model is trained using a
contrastive loss based on auto-generated examples. Evaluating our method on a
dataset of 250 procedures annotated by trained physicians yields a sensitivity
of 0.75 at a specificity of 0.9.
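The two-phase logic described in the abstract can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the per-frame visibility scores, the embedding vectors, and both thresholds are placeholder assumptions standing in for the paper's trained models.

```python
import math

def find_gaps(visibility, thresh=0.5, min_gap=5):
    """Phase 1 (sketch): contiguous low-visibility runs between good
    segments, returned as (start, end) frame-index pairs."""
    gaps, start = [], None
    for i, v in enumerate(visibility):
        if v < thresh and start is None:
            start = i
        elif v >= thresh and start is not None:
            if i - start >= min_gap:
                gaps.append((start, i - 1))
            start = None
    return gaps

def same_scene(embed_before, embed_after, sim_thresh=0.8):
    """Phase 2 (sketch): cosine similarity between embeddings of the frames
    bracketing a gap; below threshold, the scenes differ, suggesting a skip."""
    dot = sum(a * b for a, b in zip(embed_before, embed_after))
    na = math.sqrt(sum(a * a for a in embed_before))
    nb = math.sqrt(sum(b * b for b in embed_after))
    return dot / (na * nb) >= sim_thresh

# Toy run: visibility drops mid-video, and the (placeholder) scene
# embedding changes across the gap, so an alert would be raised.
vis = [0.9, 0.8, 0.2, 0.1, 0.1, 0.2, 0.1, 0.9, 0.9]
gaps = find_gaps(vis)                                   # [(2, 6)]
before, after = [1.0, 0.0], [0.0, 1.0]                  # placeholder embeddings
alerts = [not same_scene(before, after) for _ in gaps]  # [True]
```

In the paper, the phase-2 comparison is performed by a model trained with a contrastive loss on auto-generated examples; the cosine threshold here is only a stand-in for that learned decision.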
Related papers
- Frontiers in Intelligent Colonoscopy [96.57251132744446]
This study investigates the frontiers of intelligent colonoscopy techniques and their prospective implications for multimodal medical applications.
We assess the current data-centric and model-centric landscapes through four tasks for colonoscopic scene perception.
To embrace the coming multimodal era, we establish three foundational initiatives: a large-scale multimodal instruction tuning dataset ColonINST, a colonoscopy-designed multimodal language model ColonGPT, and a multimodal benchmark.
arXiv Detail & Related papers (2024-10-22T17:57:12Z)
- ToDER: Towards Colonoscopy Depth Estimation and Reconstruction with Geometry Constraint Adaptation [67.22294293695255]
We propose a novel reconstruction pipeline with a bi-directional adaptation architecture named ToDER to get precise depth estimations.
Experimental results demonstrate that our approach can precisely predict depth maps in both realistic and synthetic colonoscopy videos.
arXiv Detail & Related papers (2024-07-23T14:24:26Z)
- Estimating the coverage in 3d reconstructions of the colon from colonoscopy videos [0.0]
Insufficient visual coverage of the colon during the procedure often results in missed polyps.
To mitigate this issue, reconstructing the 3D surfaces of the colon in order to visualize the missing regions has been proposed.
We present a new method to estimate the coverage from a reconstructed colon pointcloud.
arXiv Detail & Related papers (2022-10-19T10:53:34Z)
- C$^3$Fusion: Consistent Contrastive Colon Fusion, Towards Deep SLAM in Colonoscopy [0.0]
3D colon reconstruction from Optical Colonoscopy (OC) to detect non-examined surfaces remains an unsolved problem.
Recent methods demonstrate compelling results, but suffer from either (1) frangible frame-to-frame (or frame-to-model) pose estimation, resulting in many tracking failures, or (2) reliance on point-based representations at the cost of scan quality.
We propose a novel reconstruction framework that addresses these issues end to end, resulting in a 3D colon reconstruction that is both quantitatively and qualitatively accurate and robust.
arXiv Detail & Related papers (2022-06-04T10:38:19Z)
- Deep Learning-based Biological Anatomical Landmark Detection in Colonoscopy Videos [21.384094148149003]
We propose a novel deep learning-based approach to detect biological anatomical landmarks in colonoscopy videos.
Average detection accuracy reaches 99.75%, while the average IoU of 0.91 shows a high degree of similarity between our predicted landmark periods and ground truth.
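The IoU figure above compares predicted landmark time periods with ground-truth intervals. As a small worked illustration of how such a temporal IoU is computed (a generic sketch, not taken from the cited paper), intervals are treated as (start, end) pairs:

```python
def interval_iou(pred, gt):
    """Intersection-over-Union between two time intervals (start, end),
    as used to compare a predicted landmark period with ground truth."""
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = (pred[1] - pred[0]) + (gt[1] - gt[0]) - inter
    return inter / union if union > 0 else 0.0

# A predicted period of [10, 30] s against ground truth [15, 35] s:
interval_iou((10, 30), (15, 35))  # -> 0.6 (overlap 15 s / union 25 s)
```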
arXiv Detail & Related papers (2021-08-06T05:52:32Z)
- Detection of Deepfake Videos Using Long Distance Attention [73.6659488380372]
Most existing detection methods treat the problem as a vanilla binary classification problem.
In this paper, the problem is treated as a special fine-grained classification problem since the differences between fake and real faces are very subtle.
A spatial-temporal model is proposed which has two components for capturing spatial and temporal forgery traces in global perspective.
arXiv Detail & Related papers (2021-06-24T08:33:32Z)
- Colonoscopy Polyp Detection: Domain Adaptation From Medical Report Images to Real-time Videos [76.37907640271806]
We propose an Image-video-joint polyp detection network (Ivy-Net) to address the domain gap between colonoscopy images from historical medical reports and real-time videos.
Experiments on the collected dataset demonstrate that our Ivy-Net achieves the state-of-the-art result on colonoscopy video.
arXiv Detail & Related papers (2020-12-31T10:33:09Z)
- Assisted Probe Positioning for Ultrasound Guided Radiotherapy Using Image Sequence Classification [55.96221340756895]
Effective transperineal ultrasound image guidance in prostate external beam radiotherapy requires consistent alignment between probe and prostate at each session during patient set-up.
We demonstrate a method for ensuring accurate probe placement through joint classification of images and probe position data.
Using a multi-input multi-task algorithm, spatial coordinate data from an optically tracked ultrasound probe is combined with an image classifier using a recurrent neural network to generate two sets of predictions in real-time.
The algorithm identified optimal probe alignment within a mean (standard deviation) range of 3.7$^{\circ}$ (1.2$^{\circ}$) from
arXiv Detail & Related papers (2020-10-06T13:55:02Z)
- A Novel Approach for Correcting Multiple Discrete Rigid In-Plane Motions Artefacts in MRI Scans [63.28835187934139]
We propose a novel method for removing motion artefacts using a deep neural network with two input branches.
The proposed method can be applied to artefacts generated by multiple movements of the patient.
arXiv Detail & Related papers (2020-06-24T15:25:11Z)
- Colonoscope tracking method based on shape estimation network [36.08151254973927]
A colonoscope navigation system is necessary to reduce the number of overlooked polyps.
We propose a colonoscope tracking method for navigation systems.
We utilize the shape estimation network (SEN), which estimates deformed colon shape during colonoscope insertions.
arXiv Detail & Related papers (2020-04-20T05:10:38Z)
- Detecting Deficient Coverage in Colonoscopies [24.21649198309876]
Colonoscopy is the tool of choice for preventing Colorectal Cancer.
However, colonoscopy is hampered by the fact that endoscopists routinely miss 22-28% of polyps.
This paper introduces the C2D2 (Colonoscopy Coverage Deficiency via Depth) algorithm which detects deficient coverage.
arXiv Detail & Related papers (2020-01-23T15:12:33Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.