Adversarial Domain Feature Adaptation for Bronchoscopic Depth Estimation
- URL: http://arxiv.org/abs/2109.11798v1
- Date: Fri, 24 Sep 2021 08:11:34 GMT
- Title: Adversarial Domain Feature Adaptation for Bronchoscopic Depth Estimation
- Authors: Mert Asim Karaoglu, Nikolas Brasch, Marijn Stollenga, Wolfgang Wein,
Nassir Navab, Federico Tombari and Alexander Ladikos
- Abstract summary: In this work, we propose an alternative domain-adaptive approach to depth estimation.
Our novel two-step structure first trains a depth estimation network with labeled synthetic images in a supervised manner.
The results of our experiments show that the proposed method improves the network's performance on real images by a considerable margin.
- Score: 111.89519571205778
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Depth estimation from monocular images is an important task in localization
and 3D reconstruction pipelines for bronchoscopic navigation. Various
supervised and self-supervised deep learning-based approaches have proven
themselves on this task for natural images. However, the lack of labeled data
and the bronchial tissue's feature-scarce texture make the utilization of these
methods ineffective on bronchoscopic scenes. In this work, we propose an
alternative domain-adaptive approach. Our novel two-step structure first trains
a depth estimation network with labeled synthetic images in a supervised
manner; then adopts an unsupervised adversarial domain feature adaptation
scheme to improve the performance on real images. The results of our
experiments show that the proposed method improves the network's performance on
real images by a considerable margin and can be employed in 3D reconstruction
pipelines.
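As a rough illustration of the described two-step structure, the sketch below shows how such a pipeline could look in PyTorch. Everything here is an assumption for illustration, not the authors' implementation: the encoder, decoder, discriminator, and learning rates are toy choices, and random tensors stand in for the labeled synthetic and unlabeled real bronchoscopy data.

import torch
import torch.nn as nn

class Encoder(nn.Module):
    # Small convolutional feature extractor (illustrative architecture).
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())
    def forward(self, x):
        return self.net(x)

class DepthDecoder(nn.Module):
    # Upsamples encoder features back to a dense depth map.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1))
    def forward(self, f):
        return self.net(f)

class Discriminator(nn.Module):
    # Feature-level domain classifier: synthetic (label 0) vs. real (label 1).
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(64, 64, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1))
    def forward(self, f):
        return self.net(f)

# Random tensors stand in for labeled synthetic and unlabeled real data.
synthetic_data = [(torch.rand(2, 3, 64, 64), torch.rand(2, 1, 64, 64)) for _ in range(4)]
real_data = [torch.rand(2, 3, 64, 64) for _ in range(4)]

# Step 1: supervised depth training on labeled synthetic images.
src_enc, decoder = Encoder(), DepthDecoder()
opt = torch.optim.Adam(list(src_enc.parameters()) + list(decoder.parameters()), lr=1e-4)
l1 = nn.L1Loss()
for img, depth in synthetic_data:
    loss = l1(decoder(src_enc(img)), depth)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Step 2: unsupervised adversarial feature adaptation. The source encoder is
# frozen; a target encoder, initialized from it, is trained so that a
# discriminator cannot tell its real-image features from synthetic-image ones.
tgt_enc = Encoder()
tgt_enc.load_state_dict(src_enc.state_dict())
disc = Discriminator()
opt_t = torch.optim.Adam(tgt_enc.parameters(), lr=1e-5)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-5)
bce = nn.BCEWithLogitsLoss()
for (syn_img, _), real_img in zip(synthetic_data, real_data):
    with torch.no_grad():
        f_src = src_enc(syn_img)   # frozen source-domain features
    f_tgt = tgt_enc(real_img)
    # Train the discriminator to separate the two feature domains.
    d_loss = bce(disc(f_src), torch.zeros(2, 1)) + bce(disc(f_tgt.detach()), torch.ones(2, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()
    # Train the target encoder so its real-image features look synthetic (label 0).
    g_loss = bce(disc(tgt_enc(real_img)), torch.zeros(2, 1))
    opt_t.zero_grad()
    g_loss.backward()
    opt_t.step()

# Inference on a real frame: depth = decoder(tgt_enc(real_img)).

In this sketch only the encoder changes between domains; the depth decoder trained in step one is reused unchanged, so adaptation happens purely in feature space, which is the general idea behind adversarial domain feature adaptation.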
Related papers
- Enhancing Bronchoscopy Depth Estimation through Synthetic-to-Real Domain Adaptation [2.795503750654676]
We propose a transfer learning framework that leverages synthetic data with depth labels for training and adapts domain knowledge for accurate depth estimation in real bronchoscope data.
Our network demonstrates improved depth prediction on real footage using domain adaptation compared to training solely on synthetic data, validating our approach.
arXiv Detail & Related papers (2024-11-07T03:48:35Z)
- Advancing Depth Anything Model for Unsupervised Monocular Depth Estimation in Endoscopy [3.1186464715409983]
We introduce a novel fine-tuning strategy for the Depth Anything Model.
We integrate it with an intrinsic-based unsupervised monocular depth estimation framework.
Our results on the SCARED dataset show that our method achieves state-of-the-art performance.
arXiv Detail & Related papers (2024-09-12T03:04:43Z)
- MaRINeR: Enhancing Novel Views by Matching Rendered Images with Nearby References [49.71130133080821]
MaRINeR is a refinement method that leverages information from a nearby mapping image to improve the rendering of a target viewpoint.
We show improved renderings in quantitative metrics and qualitative examples from both explicit and implicit scene representations.
arXiv Detail & Related papers (2024-07-18T17:50:03Z)
- High-fidelity Endoscopic Image Synthesis by Utilizing Depth-guided Neural Surfaces [18.948630080040576]
We introduce a novel method for colon section reconstruction by leveraging NeuS applied to endoscopic images, supplemented by a single depth-map frame.
Our approach demonstrates exceptional accuracy in completely rendering colon sections, even capturing unseen portions of the surface.
This breakthrough opens avenues for achieving stable and consistently scaled reconstructions, promising enhanced quality in cancer screening procedures and treatment interventions.
arXiv Detail & Related papers (2024-04-20T18:06:26Z)
- ReContrast: Domain-Specific Anomaly Detection via Contrastive Reconstruction [29.370142078092375]
Most advanced unsupervised anomaly detection (UAD) methods rely on modeling feature representations of frozen encoder networks pre-trained on large-scale datasets.
We propose a novel epistemic UAD method, namely ReContrast, which optimizes the entire network to reduce biases towards the pre-trained image domain.
We conduct experiments across two popular industrial defect detection benchmarks and three medical image UAD tasks, demonstrating our superiority over current state-of-the-art methods.
arXiv Detail & Related papers (2023-06-05T05:21:15Z)
- Neural 3D Reconstruction in the Wild [86.6264706256377]
We introduce a new method that enables efficient and accurate surface reconstruction from Internet photo collections.
We present a new benchmark and protocol for evaluating reconstruction performance on such in-the-wild scenes.
arXiv Detail & Related papers (2022-05-25T17:59:53Z)
- A comparison of different atmospheric turbulence simulation methods for image restoration [64.24948495708337]
Atmospheric turbulence deteriorates the quality of images captured by long-range imaging systems.
Various deep learning-based atmospheric turbulence mitigation methods have been proposed in the literature.
We systematically evaluate the effectiveness of various turbulence simulation methods on image restoration.
arXiv Detail & Related papers (2022-04-19T16:21:36Z)
- Localized Persistent Homologies for more Effective Deep Learning [60.78456721890412]
We introduce an approach that relies on a new filtration function to account for location during network training.
We demonstrate experimentally on 2D images of roads and 3D image stacks of neuronal processes that networks trained in this manner are better at recovering the topology of the curvilinear structures they extract.
arXiv Detail & Related papers (2021-10-12T19:28:39Z)
- 3D Human Texture Estimation from a Single Image with Transformers [106.6320286821364]
We propose a Transformer-based framework for 3D human texture estimation from a single image.
We also propose a mask-fusion strategy to combine the advantages of the RGB-based and texture-flow-based models.
arXiv Detail & Related papers (2021-09-06T16:00:20Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.