RetiNerveNet: Using Recursive Deep Learning to Estimate Pointwise 24-2
Visual Field Data based on Retinal Structure
- URL: http://arxiv.org/abs/2010.07488v2
- Date: Sun, 20 Jun 2021 00:00:13 GMT
- Authors: Shounak Datta and Eduardo B. Mariottoni and David Dov and Alessandro
A. Jammal and Lawrence Carin and Felipe A. Medeiros
- Abstract summary: Glaucoma is the leading cause of irreversible blindness in the world, affecting over 70 million people.
Due to the Standard Automated Perimetry (SAP) test's innate difficulty and its high test-retest variability, we propose RetiNerveNet, a recursive deep network for estimating SAP visual fields from retinal structure.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Glaucoma is the leading cause of irreversible blindness in the world,
affecting over 70 million people. The cumbersome Standard Automated Perimetry
(SAP) test is most frequently used to detect visual loss due to glaucoma. Due
to the SAP test's innate difficulty and its high test-retest variability, we
propose the RetiNerveNet, a deep convolutional recursive neural network for
obtaining estimates of the SAP visual field. RetiNerveNet uses information from
the more objective Spectral-Domain Optical Coherence Tomography (SDOCT).
RetiNerveNet attempts to trace back the arcuate convergence of the retinal
nerve fibers, starting from the Retinal Nerve Fiber Layer (RNFL) thickness
around the optic disc, to estimate individual age-corrected 24-2 SAP values.
Recursive passes through the proposed network sequentially yield estimates of
the visual locations progressively farther from the optic disc. While all the
methods used for our experiments exhibit lower performance for the advanced
disease group, the proposed network is observed to be more accurate than all
the baselines for estimating the individual visual field values. We further
augment RetiNerveNet to additionally predict the SAP Mean Deviation values and
also create an ensemble of RetiNerveNets that further improves the performance,
by increasingly up-weighting underrepresented parts of the training data.
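The recursive trace-back and the ensemble re-weighting described in the abstract can be sketched roughly as follows. This is a minimal NumPy illustration, not the authors' implementation: the shared `recursive_step`, the toy layer sizes, and the inverse-frequency `upweight_underrepresented` rule are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def recursive_step(state, W, b):
    # One shared recursive pass: map the current estimates to the
    # next ring of visual-field locations, farther from the optic disc.
    return np.tanh(state @ W + b)

def estimate_visual_field(rnfl_profile, W, b, n_rings=3):
    # Start from the RNFL thickness profile around the optic disc and
    # recursively emit estimates for progressively farther locations.
    state = rnfl_profile
    rings = []
    for _ in range(n_rings):
        state = recursive_step(state, W, b)
        rings.append(state.copy())
    return np.concatenate(rings)

dim = 8
W = rng.standard_normal((dim, dim)) * 0.1   # untrained toy weights
b = np.zeros(dim)
rnfl = rng.standard_normal(dim)             # toy RNFL thickness profile
field = estimate_visual_field(rnfl, W, b)
print(field.shape)                           # (24,) - 3 rings of 8 points

def upweight_underrepresented(severity_bins):
    # Training weights for the ensemble: samples from rarer disease
    # severity bins get proportionally larger weights (assumed rule).
    counts = np.bincount(severity_bins)
    return 1.0 / counts[severity_bins]

weights = upweight_underrepresented(np.array([0, 0, 0, 1]))
# the single sample in the rare bin 1 gets the largest weight (1.0)
```

In the real model each pass would use learned convolutional weights; the point here is only the shared-step recursion and the frequency-based re-weighting.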
Related papers
- FS-Net: Full Scale Network and Adaptive Threshold for Improving
Extraction of Micro-Retinal Vessel Structures [4.776514178760067]
We propose a full-scale micro-vessel extraction mechanism based on an encoder-decoder neural network architecture.
The proposed solution has been evaluated using the DRIVE, CHASE-DB1, and STARE datasets.
arXiv Detail & Related papers (2023-11-14T10:32:17Z)
- Optimization dependent generalization bound for ReLU networks based on
sensitivity in the tangent bundle [0.0]
We propose a PAC type bound on the generalization error of feedforward ReLU networks.
The obtained bound does not explicitly depend on the depth of the network.
arXiv Detail & Related papers (2023-10-26T13:14:13Z)
- Deep Neural Networks Tend To Extrapolate Predictably [51.303814412294514]
Neural network predictions can be unpredictable and overconfident when faced with out-of-distribution (OOD) inputs.
We observe that neural network predictions often tend towards a constant value as input data becomes increasingly OOD.
We show how one can leverage our insights in practice to enable risk-sensitive decision-making in the presence of OOD inputs.
arXiv Detail & Related papers (2023-10-02T03:25:32Z)
- EndoDepthL: Lightweight Endoscopic Monocular Depth Estimation with
CNN-Transformer [0.0]
We propose a novel lightweight solution named EndoDepthL that integrates CNN and Transformers to predict multi-scale depth maps.
Our approach includes optimizing the network architecture, incorporating multi-scale dilated convolution, and a multi-channel attention mechanism.
To better evaluate the performance of monocular depth estimation in endoscopic imaging, we propose a novel complexity evaluation metric.
arXiv Detail & Related papers (2023-08-04T21:38:29Z)
- Can pruning improve certified robustness of neural networks? [106.03070538582222]
We show that neural network pruning can improve the empirical robustness of deep neural networks (NNs).
Our experiments show that by appropriately pruning an NN, its certified accuracy can be boosted up to 8.2% under standard training.
We additionally observe the existence of certified lottery tickets that can match both standard and certified robust accuracies of the original dense models.
arXiv Detail & Related papers (2022-06-15T05:48:51Z)
- SalFBNet: Learning Pseudo-Saliency Distribution via Feedback
Convolutional Networks [8.195696498474579]
We propose a feedback-recursive convolutional framework (SalFBNet) for saliency detection.
We create a large-scale Pseudo-Saliency dataset to alleviate the problem of data deficiency in saliency detection.
arXiv Detail & Related papers (2021-12-07T14:39:45Z)
- Why Lottery Ticket Wins? A Theoretical Perspective of Sample Complexity
on Pruned Neural Networks [79.74580058178594]
We analyze the performance of training a pruned neural network by analyzing the geometric structure of the objective function.
We show that the convex region near a desirable model with guaranteed generalization enlarges as the neural network model is pruned.
arXiv Detail & Related papers (2021-10-12T01:11:07Z)
- CPFN: Cascaded Primitive Fitting Networks for High-Resolution Point
Clouds [51.47100091540298]
We present Cascaded Primitive Fitting Networks (CPFN), which rely on an adaptive patch sampling network to assemble the detection results of global and local primitive detection networks.
CPFN improves the state-of-the-art SPFN performance by 13-14% on high-resolution point cloud datasets and specifically improves the detection of fine-scale primitives by 20-22%.
arXiv Detail & Related papers (2021-08-31T23:27:33Z)
- Topological obstructions in neural networks learning [67.8848058842671]
We study global properties of the loss gradient function flow.
We use topological data analysis of the loss function and its Morse complex to relate local behavior along gradient trajectories with global properties of the loss surface.
arXiv Detail & Related papers (2020-12-31T18:53:25Z)
- Variational Depth Search in ResNets [2.6763498831034043]
One-shot neural architecture search allows joint learning of weights and network architecture, reducing computational cost.
We limit our search space to the depth of residual networks and formulate an analytically tractable variational objective that allows for an unbiased approximate posterior over depths in one-shot.
We compare our proposed method against manual search over network depths on the MNIST, Fashion-MNIST, and SVHN datasets.
arXiv Detail & Related papers (2020-02-06T16:00:03Z)
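As a rough illustration of the one-shot idea in the last entry, an approximate posterior over candidate depths can be represented as a softmax over per-depth evidence. The function below is a hypothetical sketch, not the paper's variational objective; the `log_evidence` values and the optional prior are assumptions.

```python
import numpy as np

def depth_posterior(log_evidence, log_prior=None):
    # Approximate posterior over candidate residual-network depths:
    # a numerically stable softmax of per-depth log evidence,
    # optionally combined with a log prior over depths.
    z = np.asarray(log_evidence, dtype=float)
    if log_prior is not None:
        z = z + np.asarray(log_prior, dtype=float)
    z = z - z.max()                 # stabilize the exponentials
    p = np.exp(z)
    return p / p.sum()

# Toy per-depth evidence for ResNets of depths 1..4: deeper fits
# better here, so the posterior concentrates on the largest depth.
post = depth_posterior([-10.0, -6.0, -3.0, -2.5])
print(post.argmax())                # 3 (index of the deepest network)
```

A learned variational distribution of this shape lets depth be selected jointly with the weights, rather than by retraining one network per candidate depth.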
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.