ADC-Net: An Open-Source Deep Learning Network for Automated Dispersion
Compensation in Optical Coherence Tomography
- URL: http://arxiv.org/abs/2201.12625v1
- Date: Sat, 29 Jan 2022 17:23:46 GMT
- Authors: Shaiban Ahmed (1), David Le (1), Taeyoon Son (1), Tobiloba Adejumo (1), and Xincheng Yao (1,2)
  (1) Department of Biomedical Engineering, University of Illinois at Chicago
  (2) Department of Ophthalmology and Visual Science, University of Illinois at Chicago
- Abstract summary: This study develops a deep learning network for automated dispersion compensation (ADC-Net) in optical coherence tomography (OCT).
The ADC-Net is based on a redesigned UNet architecture which employs an encoder-decoder pipeline.
Two numeric parameters, i.e., peak signal-to-noise ratio (PSNR) and the structural similarity index metric computed at multiple scales (MS-SSIM), were used for objective assessment of the ADC-Net performance.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Chromatic dispersion is a common problem that degrades system resolution in
optical coherence tomography (OCT). This study develops a deep learning
network for automated dispersion compensation (ADC-Net) in OCT. The ADC-Net is
based on a redesigned UNet architecture which employs an encoder-decoder
pipeline. The input comprises partially compensated OCT B-scans, each
optimized for an individual retinal layer. The corresponding output is a fully
compensated OCT B-scan with all retinal layers optimized. Two numeric
parameters, i.e., peak signal-to-noise ratio (PSNR) and the structural
similarity index metric computed at multiple scales (MS-SSIM), were used for
objective assessment of the ADC-Net performance. Comparative analysis of
training models with single, three, five, seven, and nine input channels was
conducted. The five-input-channel implementation was observed to be the
optimal mode for ADC-Net training, achieving robust dispersion compensation in OCT.
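To make the setup concrete, here is a minimal sketch, assuming PyTorch and illustrative layer sizes (this is not the authors' released code): an encoder-decoder that maps five partially compensated B-scans, stacked as input channels, to a single fully compensated B-scan, plus the PSNR metric used for assessment. MS-SSIM is omitted for brevity; implementations exist in common packages such as pytorch-msssim.

```python
# Minimal sketch (illustrative, not the authors' released code): a
# UNet-style encoder-decoder mapping N partially compensated OCT
# B-scans, stacked as input channels, to one fully compensated B-scan.
import torch
import torch.nn as nn

class TinyADCNet(nn.Module):
    def __init__(self, in_channels: int = 5):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def psnr(pred: torch.Tensor, target: torch.Tensor, max_val: float = 1.0) -> torch.Tensor:
    # Peak signal-to-noise ratio in dB.
    mse = torch.mean((pred - target) ** 2)
    return 10.0 * torch.log10(max_val ** 2 / mse)

# Five input channels were reported as the optimal configuration.
x = torch.rand(1, 5, 128, 128)     # batch of stacked B-scans
out = TinyADCNet(in_channels=5)(x)
print(out.shape)                    # torch.Size([1, 1, 128, 128])
```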
Related papers
- Adaptive Multilevel Neural Networks for Parametric PDEs with Error Estimation [0.0]
A neural network architecture is presented to solve high-dimensional parameter-dependent partial differential equations (pPDEs).
It is constructed to map parameters of the model data to corresponding finite element solutions.
It outputs a coarse grid solution and a series of corrections as produced in an adaptive finite element method (AFEM).
arXiv Detail & Related papers (2024-03-19T11:34:40Z)
- Optimization Guarantees of Unfolded ISTA and ADMM Networks With Smooth Soft-Thresholding [57.71603937699949]
We study optimization guarantees, i.e., achieving near-zero training loss as the number of learning epochs increases.
We show that the threshold on the number of training samples increases with the network width.
arXiv Detail & Related papers (2023-09-12T13:03:47Z)
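For readers unfamiliar with unfolded ISTA, each network layer mimics one iteration of the iterative shrinkage-thresholding algorithm, whose core is the soft-thresholding operator. Below is a minimal LASSO sketch; all problem sizes and constants are illustrative assumptions, and the smooth soft-thresholding variant studied in the paper above replaces the non-smooth operator shown here.

```python
# Minimal ISTA sketch for LASSO: min_x 0.5*||Ax - b||^2 + lam*||x||_1.
# Unfolded networks replace the fixed step size and threshold with
# learned, layer-specific (or even element-wise) parameters.
import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of the L1 norm: shrink toward zero by tau.
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[rng.choice(100, 5, replace=False)] = rng.standard_normal(5)
b = A @ x_true

lam = 0.1
step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L with L = ||A||_2^2
x = np.zeros(100)
for _ in range(200):                      # each iteration ~ one unfolded layer
    x = soft_threshold(x - step * A.T @ (A @ x - b), step * lam)
print(np.linalg.norm(x - x_true))         # small recovery error
```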
- Implicit Stochastic Gradient Descent for Training Physics-informed Neural Networks [51.92362217307946]
Physics-informed neural networks (PINNs) have been shown to be effective in solving forward and inverse differential equation problems.
PINNs can become trapped in training failures when the target functions to be approximated exhibit high-frequency or multi-scale features.
In this paper, we propose to employ the implicit stochastic gradient descent (ISGD) method to train PINNs, improving the stability of the training process.
arXiv Detail & Related papers (2023-03-03T08:17:47Z)
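To illustrate why an implicit update can stabilize training, here is a hedged toy example (a scalar quadratic standing in for a PINN loss; this is not the paper's algorithm): the implicit step solves x_new = x_old - eta * grad f(x_new), which for f(x) = (a/2) x^2 has a closed form that contracts for any positive step size, while the explicit step diverges once eta > 2/a.

```python
# Implicit vs. explicit gradient step on f(x) = (a/2) * x**2.
# Implicit: x_new = x_old - eta * a * x_new  =>  x_new = x_old / (1 + eta * a)
a, eta = 4.0, 1.0          # eta > 2/a = 0.5, so explicit GD diverges
x_imp = x_exp = 1.0
for _ in range(10):
    x_imp = x_imp / (1.0 + eta * a)   # implicit step: always contracts
    x_exp = x_exp - eta * a * x_exp   # explicit step: factor |1 - eta*a| = 3
print(f"implicit: {x_imp:.3e}, explicit: {x_exp:.3e}")
```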
- ATASI-Net: An Efficient Sparse Reconstruction Network for Tomographic SAR Imaging with Adaptive Threshold [13.379416816598873]
This paper proposes a novel efficient sparse unfolding network based on the analytic learned iterative shrinkage thresholding algorithm (ALISTA).
The weight matrix in each layer of ATASI-Net is pre-computed as the solution of an off-line optimization problem.
In addition, an adaptive threshold is introduced for each azimuth-range pixel, enabling the threshold shrinkage to be not only layer-varied but also element-wise.
arXiv Detail & Related papers (2022-11-30T09:55:45Z)
- NAF: Neural Attenuation Fields for Sparse-View CBCT Reconstruction [79.13750275141139]
This paper proposes a novel and fast self-supervised solution for sparse-view CBCT reconstruction.
The desired attenuation coefficients are represented as a continuous function of 3D spatial coordinates, parameterized by a fully-connected deep neural network.
A learning-based encoder with hash coding is adopted to help the network capture high-frequency details.
arXiv Detail & Related papers (2022-09-29T04:06:00Z)
- Effective Invertible Arbitrary Image Rescaling [77.46732646918936]
Invertible Neural Networks (INN) are able to increase upscaling accuracy significantly by optimizing the downscaling and upscaling cycle jointly.
In this work, a simple and effective invertible arbitrary rescaling network (IARN) is proposed to achieve arbitrary image rescaling by training only one model.
It is shown to achieve state-of-the-art (SOTA) performance in bidirectional arbitrary rescaling without compromising perceptual quality in LR outputs.
arXiv Detail & Related papers (2022-09-26T22:22:30Z)
- RetiFluidNet: A Self-Adaptive and Multi-Attention Deep Convolutional Network for Retinal OCT Fluid Segmentation [3.57686754209902]
Quantification of retinal fluids is necessary for OCT-guided treatment management.
A new convolutional neural architecture named RetiFluidNet is proposed for multi-class retinal fluid segmentation.
The model benefits from hierarchical representation learning of textural, contextual, and edge features.
arXiv Detail & Related papers (2022-09-26T07:18:00Z)
- Reducing Redundancy in the Bottleneck Representation of the Autoencoders [98.78384185493624]
Autoencoders are a type of unsupervised neural network that can be used to solve various tasks.
We propose a scheme to explicitly penalize feature redundancies in the bottleneck representation.
We tested our approach across different tasks: dimensionality reduction using three different datasets, image compression using the MNIST dataset, and image denoising using Fashion-MNIST.
arXiv Detail & Related papers (2022-02-09T18:48:02Z)
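As a sketch of one common way to penalize bottleneck redundancy (an illustration of the general idea; the exact penalty used by the paper above may differ), the squared off-diagonal entries of the batch covariance of the bottleneck codes can be added to the reconstruction loss:

```python
# Hedged sketch: penalize correlations between bottleneck features by
# summing squared off-diagonal entries of their batch covariance.
import torch

def redundancy_penalty(z: torch.Tensor) -> torch.Tensor:
    # z: (batch, bottleneck_dim) bottleneck activations
    zc = z - z.mean(dim=0, keepdim=True)
    cov = (zc.T @ zc) / (z.shape[0] - 1)
    off_diag = cov - torch.diag(torch.diag(cov))
    return (off_diag ** 2).sum()

z = torch.randn(32, 16, requires_grad=True)
loss = redundancy_penalty(z)   # add to the reconstruction loss, weighted
loss.backward()
```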
- Dispensed Transformer Network for Unsupervised Domain Adaptation [21.256375606219073]
A novel unsupervised domain adaptation (UDA) method named dispensed Transformer network (DTNet) is introduced in this paper.
Our proposed network achieves the best performance in comparison with several state-of-the-art techniques.
arXiv Detail & Related papers (2021-10-28T08:27:44Z)
- Rate Distortion Characteristic Modeling for Neural Image Compression [59.25700168404325]
The end-to-end optimization capability gives neural image compression (NIC) superior lossy compression performance.
However, distinct models must be trained to reach different points in the rate-distortion (R-D) space.
We formulate the essential mathematical functions to describe the R-D behavior of NIC using deep networks and statistical modeling.
arXiv Detail & Related papers (2021-06-24T12:23:05Z)
- Res-CR-Net, a residual network with a novel architecture optimized for the semantic segmentation of microscopy images [0.5363346028859919]
Res-CR-Net is a type of Deep Neural Network (DNN) that features residual blocks with either a bundle of separable atrous convolutions with different dilation rates or a convolutional LSTM.
The number of filters used in each residual block and the number of blocks are the only hyperparameters that need to be modified in order to optimize the network training for a variety of different microscopy images.
arXiv Detail & Related papers (2020-04-14T21:21:01Z)
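For illustration, a residual block bundling separable atrous convolutions at several dilation rates might look as follows (a sketch assuming PyTorch; filter counts and dilation rates are assumptions, not taken from the paper):

```python
# Hedged sketch of a residual block with parallel separable (depthwise +
# pointwise) atrous convolutions at different dilation rates.
import torch
import torch.nn as nn

class AtrousResBlock(nn.Module):
    def __init__(self, channels=32, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                # depthwise atrous convolution; padding=d keeps size fixed
                nn.Conv2d(channels, channels, 3, padding=d, dilation=d,
                          groups=channels),
                # pointwise convolution completes the separable conv
                nn.Conv2d(channels, channels, 1),
                nn.ReLU(),
            )
            for d in dilations
        ])
        self.merge = nn.Conv2d(channels * len(dilations), channels, 1)

    def forward(self, x):
        y = torch.cat([b(x) for b in self.branches], dim=1)
        return x + self.merge(y)   # residual connection

x = torch.rand(1, 32, 64, 64)
print(AtrousResBlock()(x).shape)   # torch.Size([1, 32, 64, 64])
```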
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.