Multi-scale Sparse Representation-Based Shadow Inpainting for Retinal
OCT Images
- URL: http://arxiv.org/abs/2202.11377v1
- Date: Wed, 23 Feb 2022 09:37:14 GMT
- Title: Multi-scale Sparse Representation-Based Shadow Inpainting for Retinal
OCT Images
- Authors: Yaoqi Tang, Yufan Li, Hongshan Liu, Jiaxuan Li, Peiyao Jin, Yu Gan,
Yuye Ling, and Yikai Su
- Abstract summary: Inpainting shadowed regions cast by superficial blood vessels in retinal optical coherence tomography (OCT) images is critical for accurate and robust machine analysis and clinical diagnosis.
Traditional sequence-based approaches, which propagate neighboring information to gradually fill in missing regions, are cost-effective.
Deep learning-based methods such as encoder-decoder networks have shown promising results in natural image inpainting tasks.
We propose a novel multi-scale shadow inpainting framework for OCT images by synergistically applying sparse representation and deep learning.
- Score: 0.261990490798442
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Inpainting shadowed regions cast by superficial blood vessels in retinal
optical coherence tomography (OCT) images is critical for accurate and robust
machine analysis and clinical diagnosis. Traditional sequence-based approaches,
which propagate neighboring information to gradually fill in missing regions,
are cost-effective. However, they produce less satisfactory outcomes when
dealing with larger missing regions and texture-rich structures. Emerging deep
learning-based methods such as encoder-decoder networks have shown promising
results in natural image inpainting tasks. However, they typically require long
training times and large datasets, which makes them difficult to apply to the
often small datasets found in medical imaging. To address these challenges, we
propose a novel multi-scale shadow inpainting framework for OCT images by
synergistically applying sparse representation and deep learning: sparse
representation is used to extract features from a small number of training
images for further inpainting and to regularize the image after multi-scale
image fusion, while a convolutional neural network (CNN) is employed to enhance
image quality. During the image
inpainting, we divide preprocessed input images into different branches based
on the shadow width to harvest complementary information from different scales.
Finally, a sparse representation-based regularizing module is designed to
refine the generated contents after multi-scale feature aggregation.
Experiments are conducted to compare our proposal against both traditional and
deep learning-based techniques on synthetic and real-world shadows. Results
demonstrate that our proposed method achieves favorable image inpainting in
terms of visual quality and quantitative metrics, especially when wide shadows
are present.
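To make the sparse-representation step concrete, below is a minimal sketch of classical dictionary-based patch inpainting, the building block the abstract describes: a patch dictionary is learned from shadow-free OCT regions, and shadowed pixels are reconstructed by sparse-coding only the known pixels of each patch. This is not the authors' implementation; the multi-scale branching by shadow width and the CNN enhancement stage are omitted, and the patch size, dictionary size, and sparsity level are illustrative assumptions.
```python
# Minimal sketch (not the authors' implementation) of sparse-representation
# patch inpainting: learn a dictionary from shadow-free regions, then fill
# shadowed pixels by sparse-coding the known pixels of each patch.
# PATCH, N_ATOMS, and SPARSITY are assumed values, not the paper's settings.
import numpy as np
from sklearn.decomposition import DictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d
from sklearn.linear_model import OrthogonalMatchingPursuit

PATCH = 8        # patch side length (assumed)
N_ATOMS = 64     # number of dictionary atoms (assumed)
SPARSITY = 5     # non-zero coefficients per patch (assumed)

def learn_dictionary(shadow_free_bscan: np.ndarray) -> np.ndarray:
    """Learn a patch dictionary from shadow-free regions of an OCT B-scan."""
    patches = extract_patches_2d(shadow_free_bscan, (PATCH, PATCH), max_patches=2000)
    X = patches.reshape(len(patches), -1).astype(float)
    X -= X.mean(axis=1, keepdims=True)  # remove per-patch DC offset
    dl = DictionaryLearning(n_components=N_ATOMS,
                            transform_algorithm="omp",
                            transform_n_nonzero_coefs=SPARSITY,
                            max_iter=20)
    dl.fit(X)
    return dl.components_.T  # shape: (PATCH*PATCH, N_ATOMS)

def inpaint_patch(patch: np.ndarray, mask: np.ndarray, D: np.ndarray) -> np.ndarray:
    """Fill shadowed pixels (mask == 0) using a sparse code fit on known pixels."""
    y = patch.ravel().astype(float)
    known = mask.ravel().astype(bool)
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=SPARSITY)
    omp.fit(D[known], y[known])            # code the patch from the known rows of D
    full = D @ omp.coef_ + omp.intercept_  # reconstruct every pixel of the patch
    y[~known] = full[~known]               # overwrite only the shadowed pixels
    return y.reshape(patch.shape)
```
In the framework described above, patches from branches of different shadow widths would be inpainted at their respective scales, fused, regularized by a module of this kind, and then refined by a CNN.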
Related papers
- Learned representation-guided diffusion models for large-image generation [58.192263311786824]
We introduce a novel approach that trains diffusion models conditioned on embeddings from self-supervised learning (SSL).
Our diffusion models successfully project these features back to high-quality histopathology and remote sensing images.
Augmenting real data by generating variations of real images improves downstream accuracy for patch-level and larger, image-scale classification tasks.
arXiv Detail & Related papers (2023-12-12T14:45:45Z) - Disruptive Autoencoders: Leveraging Low-level features for 3D Medical
Image Pre-training [51.16994853817024]
This work focuses on designing an effective pre-training framework for 3D radiology images.
We introduce Disruptive Autoencoders, a pre-training framework that attempts to reconstruct the original image from disruptions created by a combination of local masking and low-level perturbations.
The proposed pre-training framework is tested across multiple downstream tasks and achieves state-of-the-art performance.
arXiv Detail & Related papers (2023-07-31T17:59:42Z) - Synthetic optical coherence tomography angiographs for detailed retinal
vessel segmentation without human annotations [12.571349114534597]
We present a lightweight simulation of the retinal vascular network based on space colonization for faster and more realistic OCTA synthesis.
We demonstrate the superior segmentation performance of our approach in extensive quantitative and qualitative experiments on three public datasets.
arXiv Detail & Related papers (2023-06-19T14:01:47Z) - CEC-CNN: A Consecutive Expansion-Contraction Convolutional Network for
Very Small Resolution Medical Image Classification [0.8108972030676009]
We introduce a new CNN architecture which preserves multi-scale features from deep, intermediate, and shallow layers.
Using a dataset of very low-resolution patches from Pancreatic Ductal Adenocarcinoma (PDAC) CT scans, we demonstrate that our network can outperform current state-of-the-art models.
arXiv Detail & Related papers (2022-09-27T20:01:12Z) - Joint Learning of Deep Texture and High-Frequency Features for
Computer-Generated Image Detection [24.098604827919203]
We propose a joint learning strategy with deep texture and high-frequency features for CG image detection.
A semantic segmentation map is generated to guide the affine transformation operation.
The combination of the original image and the high-frequency components of the original and rendered images is fed into a multi-branch neural network equipped with attention mechanisms.
arXiv Detail & Related papers (2022-09-07T17:30:40Z) - Learning Enriched Features for Fast Image Restoration and Enhancement [166.17296369600774]
This paper pursues the holistic goal of maintaining spatially precise, high-resolution representations throughout the entire network.
We learn an enriched set of features that combines contextual information from multiple scales, while simultaneously preserving the high-resolution spatial details.
Our approach achieves state-of-the-art results for a variety of image processing tasks, including defocus deblurring, image denoising, super-resolution, and image enhancement.
arXiv Detail & Related papers (2022-04-19T17:59:45Z) - Sharp-GAN: Sharpness Loss Regularized GAN for Histopathology Image
Synthesis [65.47507533905188]
Conditional generative adversarial networks have been applied to generate synthetic histopathology images.
We propose a sharpness loss regularized generative adversarial network to synthesize realistic histopathology images.
arXiv Detail & Related papers (2021-10-27T18:54:25Z) - Learning Ultrasound Rendering from Cross-Sectional Model Slices for
Simulated Training [13.640630434743837]
Computational simulations can facilitate the training of such skills in virtual reality.
We propose herein to bypass any rendering and simulation process at interactive time.
We use a generative adversarial framework with a dedicated generator architecture and input feeding scheme.
arXiv Detail & Related papers (2021-01-20T21:58:19Z) - Pathological Retinal Region Segmentation From OCT Images Using Geometric
Relation Based Augmentation [84.7571086566595]
We propose improvements over previous GAN-based medical image synthesis methods by jointly encoding the intrinsic relationship of geometry and shape.
The proposed method outperforms state-of-the-art segmentation methods on the public RETOUCH dataset having images captured from different acquisition procedures.
arXiv Detail & Related papers (2020-03-31T11:50:43Z) - Learning Enriched Features for Real Image Restoration and Enhancement [166.17296369600774]
Convolutional neural networks (CNNs) have achieved dramatic improvements over conventional approaches for image restoration tasks.
We present a novel architecture with the collective goals of maintaining spatially-precise high-resolution representations through the entire network.
Our approach learns an enriched set of features that combines contextual information from multiple scales, while simultaneously preserving the high-resolution spatial details.
arXiv Detail & Related papers (2020-03-15T11:04:30Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.