Generalizing monocular colonoscopy image depth estimation by uncertainty-based global and local fusion network
- URL: http://arxiv.org/abs/2409.15006v1
- Date: Mon, 23 Sep 2024 13:30:59 GMT
- Title: Generalizing monocular colonoscopy image depth estimation by uncertainty-based global and local fusion network
- Authors: Sijia Du, Chengfeng Zhou, Suncheng Xiang, Jianwei Xu, Dahong Qian
- Abstract summary: This study aims to develop a robust framework that generalizes well to real colonoscopy images.
We propose a framework combining a convolutional neural network (CNN) for capturing local features and a Transformer for capturing global information.
- Score: 3.4419856649092746
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Objective: Depth estimation is crucial for endoscopic navigation and manipulation, but obtaining ground-truth depth maps in real clinical scenarios, such as the colon, is challenging. This study aims to develop a robust framework that generalizes well to real colonoscopy images, overcoming challenges like non-Lambertian surface reflection and diverse data distributions. Methods: We propose a framework combining a convolutional neural network (CNN) for capturing local features and a Transformer for capturing global information. An uncertainty-based fusion block was designed to enhance generalization by identifying complementary contributions from the CNN and Transformer branches. The network can be trained with simulated datasets and generalize directly to unseen clinical data without any fine-tuning. Results: Our method is validated on multiple datasets and demonstrates excellent generalization across various datasets and anatomical structures. Furthermore, qualitative analysis in real clinical scenarios confirmed the robustness of the proposed method. Conclusion: The integration of local and global features through the CNN-Transformer architecture, along with the uncertainty-based fusion block, improves depth estimation performance and generalization in both simulated and real-world endoscopic environments. Significance: This study offers a novel approach to estimating depth maps for endoscopy images despite the complex conditions encountered in clinical settings, serving as a foundation for endoscopic automatic navigation and other clinical tasks, such as polyp detection and segmentation.
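The paper itself ships no code; as a rough illustration of the fusion idea, the sketch below assumes each branch predicts a depth map plus a per-pixel log-variance and weights the branches by inverse uncertainty. The module name and the softmax weighting scheme are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of uncertainty-weighted fusion of two depth branches.
# Assumptions (not from the paper): each branch outputs a depth map and a
# per-pixel log-variance; fusion weights are the softmax of the negative
# log-variances, so the more confident branch dominates at each pixel.
import torch
import torch.nn as nn

class UncertaintyFusion(nn.Module):
    def forward(self, depth_cnn, logvar_cnn, depth_tr, logvar_tr):
        # Lower predicted variance -> larger fusion weight at that pixel.
        weights = torch.softmax(
            torch.stack([-logvar_cnn, -logvar_tr], dim=0), dim=0)
        depths = torch.stack([depth_cnn, depth_tr], dim=0)
        return (weights * depths).sum(dim=0)  # fused depth map

if __name__ == "__main__":
    b, h, w = 2, 64, 64
    fuse = UncertaintyFusion()
    d1, v1 = torch.rand(b, 1, h, w), torch.randn(b, 1, h, w)
    d2, v2 = torch.rand(b, 1, h, w), torch.randn(b, 1, h, w)
    print(fuse(d1, v1, d2, v2).shape)  # torch.Size([2, 1, 64, 64])
```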
Related papers
- Self-supervised Monocular Depth and Pose Estimation for Endoscopy with Generative Latent Priors [10.61978045582697]
3D mapping in endoscopy enables quantitative, holistic lesion characterization within the gastrointestinal (GI) tract.
Existing methods relying on synthetic datasets or complex models often lack generalizability in challenging endoscopic conditions.
We propose a robust self-supervised monocular depth and pose estimation framework that incorporates a Generative Latent Bank and a Variational Autoencoder.
arXiv Detail & Related papers (2024-11-26T15:43:06Z)
- CAVE-Net: Classifying Abnormalities in Video Capsule Endoscopy [0.1937002985471497]
We propose an ensemble-based approach to improve diagnostic accuracy in analyzing complex image datasets.
We leverage the unique feature extraction capabilities of each model to enhance the overall accuracy.
By using these methods, the proposed framework, CAVE-Net, provides robust feature discrimination and improved classification results.
arXiv Detail & Related papers (2024-10-26T17:25:08Z)
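No implementation details are given in the CAVE-Net summary above; the following is a generic sketch of one common ensembling scheme, averaging class probabilities across backbones. The toy backbones and equal-weight averaging are illustrative assumptions, not CAVE-Net's actual design.

```python
# Generic probability-averaging ensemble; NOT CAVE-Net's actual architecture.
import torch
import torch.nn as nn

class AveragingEnsemble(nn.Module):
    def __init__(self, models):
        super().__init__()
        self.models = nn.ModuleList(models)

    def forward(self, x):
        # Average softmax probabilities so each backbone contributes equally.
        probs = [m(x).softmax(dim=1) for m in self.models]
        return torch.stack(probs, dim=0).mean(dim=0)

if __name__ == "__main__":
    # Tiny stand-in backbones; real use would plug in CNN feature extractors.
    make = lambda: nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
    ens = AveragingEnsemble([make() for _ in range(3)])
    print(ens(torch.rand(4, 3, 32, 32)).shape)  # torch.Size([4, 10])
```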
- ToDER: Towards Colonoscopy Depth Estimation and Reconstruction with Geometry Constraint Adaptation [67.22294293695255]
We propose a novel reconstruction pipeline with a bi-directional adaptation architecture named ToDER to obtain precise depth estimates.
Experimental results demonstrate that our approach can precisely predict depth maps in both realistic and synthetic colonoscopy videos.
arXiv Detail & Related papers (2024-07-23T14:24:26Z)
- K-Space-Aware Cross-Modality Score for Synthesized Neuroimage Quality Assessment [71.27193056354741]
How to assess cross-modality medical image synthesis remains largely unexplored.
We propose a new metric K-CROSS to spur progress on this challenging problem.
K-CROSS uses a pre-trained multi-modality segmentation network to predict the lesion location.
arXiv Detail & Related papers (2023-07-10T01:26:48Z)
- A geometry-aware deep network for depth estimation in monocular endoscopy [17.425158094539462]
The proposed method is extensively validated across different datasets and clinical images.
In cross-dataset generalization tests, the proposed method achieves mean RMSE values of 12.604 (T1-L1), 9.930 (T2-L2), and 13.893 (colon) on the ColonDepth dataset.
arXiv Detail & Related papers (2023-04-20T11:59:32Z)
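For reference, the mean RMSE reported for the geometry-aware network above is the standard root-mean-square error between predicted and ground-truth depths:

```latex
\mathrm{RMSE} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(d_i - \hat{d}_i\right)^2}
```

where $d_i$ and $\hat{d}_i$ denote the ground-truth and predicted depth at pixel $i$, averaged over $N$ pixels.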
- Reliable Joint Segmentation of Retinal Edema Lesions in OCT Images [55.83984261827332]
In this paper, we propose a novel reliable multi-scale wavelet-enhanced transformer network.
We develop a novel segmentation backbone that integrates a wavelet-enhanced feature extractor network and a multi-scale transformer module.
Our proposed method achieves better segmentation accuracy with a high degree of reliability as compared to other state-of-the-art segmentation approaches.
arXiv Detail & Related papers (2022-12-01T07:32:56Z)
- ScoreNet: Learning Non-Uniform Attention and Augmentation for Transformer-Based Histopathological Image Classification [11.680355561258427]
The sheer size of high-resolution images hinders progress in digital pathology.
Patch-based processing therefore often incorporates multiple instance learning (MIL) to aggregate local patch-level representations into an image-level prediction.
This paper proposes a transformer-based architecture specifically tailored for histological image classification.
It combines fine-grained local attention with a coarse global attention mechanism to learn meaningful representations of high-resolution images at an efficient computational cost.
arXiv Detail & Related papers (2022-02-15T16:55:09Z)
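To make the MIL aggregation mentioned in the ScoreNet entry concrete, here is a minimal attention-based MIL pooling sketch in the style of Ilse et al. (2018); it is illustrative only and not ScoreNet's local/global attention architecture.

```python
# Minimal attention-based MIL pooling (Ilse et al., 2018 style);
# illustrative only, not ScoreNet's actual local/global attention.
import torch
import torch.nn as nn

class AttentionMILPooling(nn.Module):
    def __init__(self, feat_dim=128, hidden=64, num_classes=2):
        super().__init__()
        self.attn = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.Tanh(), nn.Linear(hidden, 1))
        self.head = nn.Linear(feat_dim, num_classes)

    def forward(self, patch_feats):  # (num_patches, feat_dim)
        a = torch.softmax(self.attn(patch_feats), dim=0)  # patch weights
        slide_feat = (a * patch_feats).sum(dim=0)         # weighted pooling
        return self.head(slide_feat)                      # image-level logits

if __name__ == "__main__":
    feats = torch.rand(500, 128)  # e.g. 500 patch embeddings from one image
    print(AttentionMILPooling()(feats).shape)  # torch.Size([2])
```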
- Contextual Information Enhanced Convolutional Neural Networks for Retinal Vessel Segmentation in Color Fundus Images [0.0]
An automatic retinal vessel segmentation system can effectively facilitate clinical diagnosis and ophthalmological research.
We propose a deep learning based method that integrates several customized modules into the well-known U-Net encoder-decoder architecture.
As a result, the proposed method outperforms prior work and achieves state-of-the-art performance in Sensitivity/Recall, F1-score, and MCC.
arXiv Detail & Related papers (2021-03-25T06:10:47Z)
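For reference, the F1-score and Matthews correlation coefficient (MCC) cited in the retinal vessel entry above are defined from the confusion-matrix counts (TP, TN, FP, FN) as:

```latex
F_1 = \frac{2\,\mathrm{TP}}{2\,\mathrm{TP} + \mathrm{FP} + \mathrm{FN}},
\qquad
\mathrm{MCC} = \frac{\mathrm{TP}\cdot\mathrm{TN} - \mathrm{FP}\cdot\mathrm{FN}}
{\sqrt{(\mathrm{TP}+\mathrm{FP})(\mathrm{TP}+\mathrm{FN})(\mathrm{TN}+\mathrm{FP})(\mathrm{TN}+\mathrm{FN})}}
```

Unlike F1, MCC accounts for true negatives, which makes it more informative on the heavily imbalanced vessel/background pixel distributions of fundus images.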
- Deep Co-Attention Network for Multi-View Subspace Learning [73.3450258002607]
We propose a deep co-attention network for multi-view subspace learning.
It aims to extract both the common information and the complementary information in an adversarial setting.
In particular, it uses a novel cross reconstruction loss and leverages the label information to guide the construction of the latent representation.
arXiv Detail & Related papers (2021-02-15T18:46:44Z)
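The co-attention summary above does not define its cross reconstruction loss; a generic two-view version, in which each view is reconstructed from the other view's latent code, might look like the sketch below (the encoders, decoders, and L2 objective are assumptions, not the paper's exact adversarial formulation).

```python
# Generic two-view cross-reconstruction loss; an illustrative sketch,
# not the paper's exact adversarial formulation.
import torch
import torch.nn as nn

enc_a, enc_b = nn.Linear(100, 32), nn.Linear(100, 32)  # per-view encoders
dec_a, dec_b = nn.Linear(32, 100), nn.Linear(32, 100)  # per-view decoders

def cross_reconstruction_loss(x_a, x_b):
    z_a, z_b = enc_a(x_a), enc_b(x_b)
    # Reconstruct each view from the OTHER view's latent code, which
    # pushes the two latents toward a shared (common) representation.
    return (nn.functional.mse_loss(dec_a(z_b), x_a)
            + nn.functional.mse_loss(dec_b(z_a), x_b))

if __name__ == "__main__":
    print(cross_reconstruction_loss(torch.rand(8, 100), torch.rand(8, 100)))
```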
- Few-shot Medical Image Segmentation using a Global Correlation Network with Discriminative Embedding [60.89561661441736]
We propose a novel method for few-shot medical image segmentation.
We construct our few-shot image segmentor using a deep convolutional network trained episodically.
We enhance the discriminability of the deep embedding to encourage clustering of features from the same class.
arXiv Detail & Related papers (2020-12-10T04:01:07Z)
- Domain Generalization for Medical Imaging Classification with Linear-Dependency Regularization [59.5104563755095]
We introduce a simple but effective approach to improve the generalization capability of deep neural networks in the field of medical imaging classification.
Motivated by the observation that the domain variability of medical images is, to some extent, compact, we propose to learn a representative feature space through variational encoding.
arXiv Detail & Related papers (2020-09-27T12:30:30Z)
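Variational encoding, as mentioned in the domain generalization entry above, typically maps features to a Gaussian posterior with a reparameterized sample and a KL penalty toward a standard normal prior; the sketch below assumes that standard formulation rather than the paper's exact regularization.

```python
# Generic variational feature encoder with reparameterization and a
# KL(q || N(0, I)) regularizer; a sketch, not the paper's exact method.
import torch
import torch.nn as nn

class VariationalEncoder(nn.Module):
    def __init__(self, in_dim=512, z_dim=64):
        super().__init__()
        self.mu = nn.Linear(in_dim, z_dim)
        self.logvar = nn.Linear(in_dim, z_dim)

    def forward(self, feats):
        mu, logvar = self.mu(feats), self.logvar(feats)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # sample
        # Closed-form KL divergence between q(z|x) and the N(0, I) prior.
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=1).mean()
        return z, kl

if __name__ == "__main__":
    z, kl = VariationalEncoder()(torch.rand(16, 512))
    print(z.shape, kl.item())  # torch.Size([16, 64]) and a scalar KL value
```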