AI pipeline for accurate retinal layer segmentation using OCT 3D images
- URL: http://arxiv.org/abs/2302.07806v1
- Date: Wed, 15 Feb 2023 17:46:32 GMT
- Title: AI pipeline for accurate retinal layer segmentation using OCT 3D images
- Authors: Mayank Goswami
- Abstract summary: Several classical and AI-based algorithms in combination are tested to see their compatibility with data from the combined animal imaging system.
A simple-to-implement analytical equation is shown to work for brightness manipulation, giving a 1% increase in mean pixel value.
The thickness estimation process has a 6% error compared with manually annotated reference data.
- Score: 3.938455123895825
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Image data set from a multi-spectral animal imaging system is used to address
two issues: (a) registering the oscillation in optical coherence tomography
(OCT) images due to mouse eye movement and (b) suppressing the shadow region
under the thick vessels/structures. Several classical and AI-based algorithms
in combination are tested for each task to see their compatibility with data
from the combined animal imaging system. Hybridizing an AI model with optical
flow followed by a homography transformation is shown to work (correlation
value > 0.7) for registration. A ResNet50 backbone is shown to work better
than the well-known U-net model for shadow-region detection, with a loss value
of 0.9. A simple-to-implement analytical equation is shown to work for
brightness manipulation, giving a 1% increase in mean pixel value and a 77%
decrease in the number of zeros. The proposed equation allows formulating a
constrained optimization problem with a controlling factor α that minimizes
the number of zeros and the standard deviation of pixel values while
maximizing the mean pixel value. For layer segmentation, the standard U-net
model is used. The AI pipeline consists of CNN, optical flow, RCNN, pixel
manipulation, and U-net models in sequence. The thickness estimation process
has a 6% error compared with manually annotated reference data.
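As a minimal illustration of the registration idea, the sketch below recovers a rigid translation between two B-scans by phase correlation. This is a simplified, purely translational stand-in for the paper's hybrid optical-flow + homography step, and the function name is hypothetical.

```python
import numpy as np

def estimate_shift(ref, mov):
    """Estimate the integer (row, col) translation of `mov` relative to
    `ref` via phase correlation. Simplified stand-in for the paper's
    optical-flow + homography registration; name is illustrative."""
    cross = np.conj(np.fft.fft2(ref)) * np.fft.fft2(mov)
    cross /= np.abs(cross) + 1e-12          # normalized cross-power spectrum
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Unwrap circular peak positions to signed shifts.
    return tuple(int(p) - s if int(p) > s // 2 else int(p)
                 for p, s in zip(peak, corr.shape))

rng = np.random.default_rng(0)
scan = rng.random((64, 64))                     # synthetic B-scan
jittered = np.roll(scan, (3, 5), axis=(0, 1))   # simulated eye-motion jitter
print(estimate_shift(scan, jittered))           # → (3, 5)
```

A full pipeline would estimate a dense flow field and fit a homography to it rather than a single translation, but the peak-finding logic is the same in spirit.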
Related papers
- Linear Anchored Gaussian Mixture Model for Location and Width Computations of Objects in Thick Line Shape [1.7205106391379021]
The 3D image gray-level representation is modeled as a finite mixture of a statistical distribution.
An Expectation-Maximization algorithm (Algo1), using the original image as input data, is used to estimate the model parameters.
A modified EM algorithm (Algo2) is also detailed.
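EM estimation of mixture parameters from raw samples (the Algo1 setup above) can be sketched in a few lines for the 1-D Gaussian case; all names here are illustrative and not the paper's code.

```python
import numpy as np

def em_gmm_1d(x, k=2, iters=100):
    """Fit a k-component 1-D Gaussian mixture by EM, using the raw
    samples as input data. Toy sketch, not the paper's algorithm."""
    # Deterministic, spread-out initialisation via quantiles.
    mu = np.quantile(x, np.linspace(0.1, 0.9, k))
    sig = np.full(k, x.std())
    w = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: posterior responsibility of each component per sample.
        d = (x[:, None] - mu) / sig
        pdf = w * np.exp(-0.5 * d ** 2) / (sig * np.sqrt(2.0 * np.pi))
        r = pdf / pdf.sum(axis=1, keepdims=True)
        # M-step: closed-form weighted parameter updates.
        n = r.sum(axis=0)
        w = n / len(x)
        mu = (r * x[:, None]).sum(axis=0) / n
        sig = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / n)
    return w, mu, sig

# Two well-separated gray-level modes, as in a bimodal intensity histogram.
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0.0, 1.0, 500), rng.normal(6.0, 1.0, 500)])
w, mu, sig = em_gmm_1d(x)
```

The same E/M alternation carries over to the anchored line-shape model, with the Gaussian density replaced by the paper's linear-anchored distribution.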
arXiv Detail & Related papers (2024-04-03T20:05:00Z) - Pixel-Inconsistency Modeling for Image Manipulation Localization [59.968362815126326]
Digital image forensics plays a crucial role in image authentication and manipulation localization.
This paper presents a generalized and robust manipulation localization model through the analysis of pixel inconsistency artifacts.
Experiments show that our method successfully extracts inherent pixel-inconsistency forgery fingerprints.
arXiv Detail & Related papers (2023-09-30T02:54:51Z) - Deep Richardson-Lucy Deconvolution for Low-Light Image Deblurring [48.80983873199214]
We develop a data-driven approach to model the saturated pixels by a learned latent map.
Based on the new model, the non-blind deblurring task can be formulated into a maximum a posterior (MAP) problem.
To estimate high-quality deblurred images without amplified artifacts, we develop a prior estimation network.
arXiv Detail & Related papers (2023-08-10T12:53:30Z) - Decoupled Diffusion Models: Simultaneous Image to Zero and Zero to Noise [53.04220377034574]
We propose decoupled diffusion models (DDMs) for high-quality (un)conditioned image generation in less than 10 function evaluations.
We mathematically derive 1) the training objectives and 2) the reverse-time sampling formula, based on an analytic transition probability that models the image-to-zero transition.
We experimentally yield very competitive performance compared with the state of the art in 1) unconditioned image generation, e.g., CIFAR-10 and CelebA-HQ-256, and 2) image-conditioned downstream tasks such as super-resolution, saliency detection, edge detection, and image in
arXiv Detail & Related papers (2023-06-23T18:08:00Z) - Enhanced Sharp-GAN For Histopathology Image Synthesis [63.845552349914186]
Histopathology image synthesis aims to address the data shortage issue in training deep learning approaches for accurate cancer detection.
We propose a novel approach that enhances the quality of synthetic images by using nuclei topology and contour regularization.
The proposed approach outperforms Sharp-GAN in all four image quality metrics on two datasets.
arXiv Detail & Related papers (2023-01-24T17:54:01Z) - One-Stage Deep Edge Detection Based on Dense-Scale Feature Fusion and
Pixel-Level Imbalance Learning [5.370848116287344]
We propose a one-stage neural network model that can generate high-quality edge images without postprocessing.
The proposed model adopts a classic encoder-decoder framework in which a pre-trained neural model is used as the encoder.
We propose a new loss function that addresses the pixel-level imbalance in the edge image.
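Pixel-level imbalance in edge maps (very few edge pixels against a large background) is commonly handled by class-balanced weighting of the loss. The sketch below shows that idea with a balanced binary cross-entropy; it is an illustrative remedy, not the paper's exact loss function.

```python
import numpy as np

def balanced_bce(pred, target, eps=1e-7):
    """Class-balanced binary cross-entropy: rare edge pixels are weighted
    by the background fraction and vice versa. Illustrative sketch of a
    standard imbalance remedy, not the paper's exact loss."""
    pred = np.clip(pred, eps, 1.0 - eps)
    pos_frac = target.mean()
    loss = -((1.0 - pos_frac) * target * np.log(pred)
             + pos_frac * (1.0 - target) * np.log(1.0 - pred))
    return loss.mean()

target = np.zeros(100)
target[0] = 1.0                                # one edge pixel in 100
miss_edge = np.where(target == 1, 0.1, 0.01)   # confidently misses the edge
miss_bg = np.where(target == 1, 0.9, 0.01)
miss_bg[50] = 0.9                              # one confident false positive
print(balanced_bce(miss_edge, target) > balanced_bce(miss_bg, target))  # → True
```

With the weighting, missing the single edge pixel costs far more than one confident false positive on the abundant background, which is exactly the behaviour an edge detector needs.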
arXiv Detail & Related papers (2022-03-17T15:26:00Z) - FlowReg: Fast Deformable Unsupervised Medical Image Registration using
Optical Flow [0.09167082845109438]
FlowReg is a framework for unsupervised image registration for neuroimaging applications.
FlowReg is able to obtain high intensity and spatial similarity while maintaining the shape and structure of anatomy and pathology.
arXiv Detail & Related papers (2021-01-24T03:51:34Z) - Unrolling of Deep Graph Total Variation for Image Denoising [106.93258903150702]
In this paper, we combine classical graph signal filtering with deep feature learning into a competitive hybrid design.
We employ interpretable analytical low-pass graph filters and use 80% fewer network parameters than the state-of-the-art DL denoising scheme DnCNN.
arXiv Detail & Related papers (2020-10-21T20:04:22Z) - Locally Masked Convolution for Autoregressive Models [107.4635841204146]
LMConv is a simple modification to the standard 2D convolution that allows arbitrary masks to be applied to the weights at each location in the image.
We learn an ensemble of distribution estimators that share parameters but differ in generation order, achieving improved performance on whole-image density estimation.
arXiv Detail & Related papers (2020-06-22T17:59:07Z)
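The per-location masking behind LMConv can be seen in a tiny dense NumPy sketch (the actual implementation is an efficient GPU kernel; names here are illustrative): with a raster-scan causal mask, the first pixel receives no context at all.

```python
import numpy as np

def locally_masked_conv(img, weight, masks):
    """3x3 convolution where every output location applies its own
    binary mask to the shared weight. Toy dense sketch of the LMConv
    idea, not the paper's optimized implementation."""
    H, W = img.shape
    pad = np.pad(img, 1)                 # zero padding for border patches
    out = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(pad[i:i + 3, j:j + 3] * weight * masks[i, j])
    return out

img = np.ones((4, 4))
weight = np.ones((3, 3))

# Raster-scan causal mask: only pixels generated before the centre.
causal = np.zeros((3, 3))
causal[0, :] = 1
causal[1, 0] = 1
masks = np.broadcast_to(causal, (4, 4, 3, 3))
out = locally_masked_conv(img, weight, masks)
```

Because `masks` is indexed per location, a different generation order simply means supplying a different mask tensor, which is what lets LMConv share parameters across an ensemble of orders.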
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.