Shape Reconstruction from Thoracoscopic Images using Self-supervised
Virtual Learning
- URL: http://arxiv.org/abs/2301.10863v1
- Date: Wed, 25 Jan 2023 23:08:41 GMT
- Authors: Tomoki Oya, Megumi Nakao, Tetsuya Matsuda
- Abstract summary: Intraoperative shape reconstruction of organs from endoscopic camera images is a complex yet indispensable technique for image-guided surgery.
We propose a framework for generative virtual learning of shape reconstruction using image translation with common latent variables between simulated and real images.
In this study, we targeted the shape reconstruction of collapsed lungs from thoracoscopic images and confirmed that virtual learning could improve the similarity between real and simulated images.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Intraoperative shape reconstruction of organs from endoscopic camera images
is a complex yet indispensable technique for image-guided surgery. To address
the uncertainty in reconstructing entire shapes from single-viewpoint occluded
images, we propose a framework for generative virtual learning of shape
reconstruction using image translation with common latent variables between
simulated and real images. As it is difficult to prepare a sufficient amount of
data to learn the relationship between endoscopic images and organ shapes,
self-supervised virtual learning is performed using simulated images generated
from statistical shape models. However, small differences between virtual and
real images can degrade the estimation performance even if the simulated images
are regarded as equivalent by humans. To address this issue, a Variational
Autoencoder is used to convert real and simulated images into identical
synthetic images. In this study, we targeted the shape reconstruction of
collapsed lungs from thoracoscopic images and confirmed that virtual learning
could improve the similarity between real and simulated images. Furthermore,
the shape reconstruction error was reduced by 16.9%.
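The core idea of the image translation step, in which a Variational Autoencoder maps both real and simulated images through common latent variables into the same synthetic image domain, can be sketched as follows. This is a minimal, untrained NumPy sketch: the dimensions, the linear encoder/decoder stand-ins, and all variable names are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions for illustration; the paper does not specify them.
IMG_DIM, LATENT_DIM = 64, 8

# Untrained linear stand-ins for the encoder and decoder networks.
W_enc = rng.normal(scale=0.1, size=(IMG_DIM, 2 * LATENT_DIM))
W_dec = rng.normal(scale=0.1, size=(LATENT_DIM, IMG_DIM))

def encode(x):
    """Map an image vector to the mean and log-variance of its latent code."""
    h = x @ W_enc
    return h[:LATENT_DIM], h[LATENT_DIM:]

def reparameterize(mu, logvar):
    """Sample z = mu + sigma * eps (the VAE reparameterization trick)."""
    eps = rng.normal(size=mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def decode(z):
    """Map a latent code back to an image in the shared synthetic domain."""
    return z @ W_dec

# A simulated image (e.g. rendered from a statistical shape model) and a real
# image of the same scene, differing by a small domain gap.
simulated = rng.normal(size=IMG_DIM)
real = simulated + rng.normal(scale=0.05, size=IMG_DIM)

# Both domains pass through the SAME encoder into the common latent space,
# and both decode into the same synthetic image domain.
synth_sim = decode(reparameterize(*encode(simulated)))
synth_real = decode(reparameterize(*encode(real)))
print(synth_sim.shape, synth_real.shape)
```

In the actual framework the encoder and decoder would be trained convolutional networks, optimized with a reconstruction loss plus a KL-divergence term on the latent distribution, so that real and simulated inputs land on matching synthetic outputs.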
Related papers
- MeshBrush: Painting the Anatomical Mesh with Neural Stylization for Endoscopy [0.8437187555622164]
Style transfer is a promising approach to close the sim-to-real gap in medical endoscopy.
Rendering synthetic endoscopic videos by traversing pre-operative scans can generate structurally accurate simulations.
CycleGAN can imitate realistic endoscopic images from these simulations, but it is unsuitable for video-to-video synthesis.
We propose MeshBrush, a neural mesh stylization method to synthesize temporally consistent videos.
arXiv Detail & Related papers (2024-04-03T18:40:48Z) - SADIR: Shape-Aware Diffusion Models for 3D Image Reconstruction [2.2954246824369218]
3D image reconstruction from a limited number of 2D images has been a long-standing challenge in computer vision and image analysis.
We propose a shape-aware network based on diffusion models for 3D image reconstruction, named SADIR, to address these issues.
arXiv Detail & Related papers (2023-09-06T19:30:22Z) - Translating Simulation Images to X-ray Images via Multi-Scale Semantic
Matching [16.175115921436582]
We propose a new method to translate simulation images from an endovascular simulator to X-ray images.
We apply self-domain semantic matching to ensure that the input image and the generated image have the same positional semantic relationships.
Our method generates realistic X-ray images and outperforms other state-of-the-art approaches by a large margin.
arXiv Detail & Related papers (2023-04-16T04:49:46Z) - Multiscale Voxel Based Decoding For Enhanced Natural Image
Reconstruction From Brain Activity [0.22940141855172028]
We present a novel approach for enhanced image reconstruction, in which existing methods for object decoding and image reconstruction are merged.
This is achieved by conditioning the reconstructed image to its decoded image category using a class-conditional generative adversarial network and neural style transfer.
The results indicate that our approach improves the semantic similarity of the reconstructed images and can be used as a general framework for enhanced image reconstruction.
arXiv Detail & Related papers (2022-05-27T18:09:07Z) - A comparison of different atmospheric turbulence simulation methods for
image restoration [64.24948495708337]
Atmospheric turbulence deteriorates the quality of images captured by long-range imaging systems.
Various deep learning-based atmospheric turbulence mitigation methods have been proposed in the literature.
We systematically evaluate the effectiveness of various turbulence simulation methods on image restoration.
arXiv Detail & Related papers (2022-04-19T16:21:36Z) - Ensembling with Deep Generative Views [72.70801582346344]
Generative models can synthesize "views" of artificial images that mimic real-world variations, such as changes in color or pose.
Here, we investigate whether such views can be applied to real images to benefit downstream analysis tasks such as image classification.
We use StyleGAN2 as the source of generative augmentations and investigate this setup on classification tasks involving facial attributes, cat faces, and cars.
arXiv Detail & Related papers (2021-04-29T17:58:35Z) - SIR: Self-supervised Image Rectification via Seeing the Same Scene from
Multiple Different Lenses [82.56853587380168]
We propose a novel self-supervised image rectification (SIR) method based on an important insight that the rectified results of distorted images of the same scene from different lenses should be the same.
We leverage a differentiable warping module to generate the rectified images and re-distorted images from the distortion parameters.
Our method achieves comparable or even better performance than the supervised baseline method and representative state-of-the-art methods.
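The self-supervision signal described above, that rectified results of the same scene seen through different lenses must coincide, can be illustrated with a toy 1-D distortion model. The radial-style distortion function, its fixed-point inverse, and the parameter values are all assumptions made for illustration; SIR itself learns distortion parameters over 2-D images via a differentiable warping module.

```python
import numpy as np

# Toy 1-D "lens": radial-style distortion r' = r * (1 + k * r**2) applied to
# normalized coordinates, where k is a hypothetical distortion parameter.
def distort(coords, k):
    return coords * (1.0 + k * coords**2)

def rectify(coords, k, iters=20):
    """Invert the distortion by fixed-point iteration: r <- r' / (1 + k r^2)."""
    r = coords.copy()
    for _ in range(iters):
        r = coords / (1.0 + k * r**2)
    return r

coords = np.linspace(-1.0, 1.0, 101)

# The same scene seen through two different lenses.
view_a = distort(coords, k=0.10)
view_b = distort(coords, k=0.25)

# Self-supervision signal: rectified results of both views should coincide,
# so their discrepancy can serve as a training loss without ground truth.
rect_a = rectify(view_a, k=0.10)
rect_b = rectify(view_b, k=0.25)
consistency_loss = np.mean((rect_a - rect_b) ** 2)
print(consistency_loss)
```

When the distortion parameters are estimated correctly, the consistency loss is near zero; a mismatch between the assumed and true parameters inflates it, which is the gradient signal the self-supervised method exploits.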
arXiv Detail & Related papers (2020-11-30T08:23:25Z) - Intrinsic Autoencoders for Joint Neural Rendering and Intrinsic Image
Decomposition [67.9464567157846]
We propose an autoencoder for joint generation of realistic images from synthetic 3D models while simultaneously decomposing real images into their intrinsic shape and appearance properties.
Our experiments confirm that a joint treatment of rendering and decomposition is indeed beneficial and that our approach outperforms state-of-the-art image-to-image translation baselines both qualitatively and quantitatively.
arXiv Detail & Related papers (2020-06-29T12:53:58Z) - Two-shot Spatially-varying BRDF and Shape Estimation [89.29020624201708]
We propose a novel deep learning architecture with a stage-wise estimation of shape and SVBRDF.
We create a large-scale synthetic training dataset with domain-randomized geometry and realistic materials.
Experiments on both synthetic and real-world datasets show that our network trained on a synthetic dataset can generalize well to real-world images.
arXiv Detail & Related papers (2020-04-01T12:56:13Z) - Self-Supervised Linear Motion Deblurring [112.75317069916579]
Deep convolutional neural networks are state-of-the-art for image deblurring.
We present a differentiable reblur model for self-supervised motion deblurring.
Our experiments demonstrate that self-supervised single-image deblurring is feasible.
arXiv Detail & Related papers (2020-02-10T20:15:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.