Machine Learning Method for Functional Assessment of Retinal Models
- URL: http://arxiv.org/abs/2202.02443v1
- Date: Sat, 5 Feb 2022 00:35:38 GMT
- Title: Machine Learning Method for Functional Assessment of Retinal Models
- Authors: Nikolas Papadopoulos, Nikos Melanitis, Antonio Lozano, Cristina
Soto-Sanchez, Eduardo Fernandez, Konstantina S Nikita
- Abstract summary: We introduce the functional assessment (FA) of retinal models: the concept of evaluating retinal models by their performance on visual understanding tasks.
We present a machine learning method for FA: we feed traditional machine learning classifiers with RGC responses generated by retinal models.
We show that differences in the structure of datasets result in largely divergent performance of the retinal model.
- Score: 5.396946042201311
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Challenges in the field of retinal prostheses motivate the development of
retinal models to accurately simulate Retinal Ganglion Cells (RGCs) responses.
The goal of retinal prostheses is to enable blind individuals to solve complex,
real-life visual tasks. In this paper, we introduce the functional assessment
(FA) of retinal models, which describes the concept of evaluating the
performance of retinal models on visual understanding tasks. We present a
machine learning method for FA: we feed traditional machine learning
classifiers with RGC responses generated by retinal models, to solve object and
digit recognition tasks (CIFAR-10, MNIST, Fashion MNIST, Imagenette). We
examined critical FA aspects, including how the performance of FA depends on
the task, how to optimally feed RGC responses to the classifiers and how the
number of output neurons correlates with the model's accuracy. To increase the
number of output neurons, we manipulated the input images by splitting them
before feeding them to the retinal model, and we found that image splitting does not
significantly improve the model's accuracy. We also show that differences in
the structure of datasets result in largely divergent performance of the
retinal model (MNIST and Fashion MNIST exceeded 80% accuracy, while CIFAR-10
and Imagenette achieved ~40%). Furthermore, retinal models which perform better
in standard evaluation, i.e., those that more accurately predict RGC responses, perform
better in FA as well. However, unlike standard evaluation, FA results can be
straightforwardly interpreted in the context of comparing the quality of visual
perception.
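The FA pipeline described above can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' implementation: `retinal_model` is a hypothetical placeholder (a fixed random projection with rectification standing in for simulated RGC firing rates), and scikit-learn's small `load_digits` dataset stands in for MNIST. The key structure matches the paper's method: images are converted to RGC responses by a retinal model, and a traditional classifier is trained on those responses.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def retinal_model(images, n_neurons=64):
    """Placeholder retinal model (hypothetical): a fixed random linear
    projection followed by rectification, standing in for the simulated
    RGC responses that a real retinal model would produce."""
    n_pixels = images.shape[1]
    weights = rng.normal(size=(n_pixels, n_neurons))
    return np.maximum(images @ weights, 0.0)  # non-negative "firing rates"

# Digit recognition task (the paper uses MNIST; load_digits is a
# small stand-in that ships with scikit-learn).
X, y = load_digits(return_X_y=True)
responses = retinal_model(X)

# Functional assessment: classifier accuracy on the RGC responses
# serves as the model's score on the visual understanding task.
X_tr, X_te, y_tr, y_te = train_test_split(responses, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"FA accuracy: {clf.score(X_te, y_te):.2f}")
```

Under this scheme, comparing two retinal models reduces to comparing the downstream classifier accuracies they yield, which is what makes FA results directly interpretable as differences in visual perception quality.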
Related papers
- Synthetic Generation of Dermatoscopic Images with GAN and Closed-Form Factorization [12.753792457271953]
We propose an innovative unsupervised augmentation solution that harnesses Generative Adversarial Network (GAN) based models.
We created synthetic images to incorporate the semantic variations and augmented the training data with these images.
We were able to increase the performance of machine learning models and set a new benchmark amongst non-ensemble based models in skin lesion classification.
arXiv Detail & Related papers (2024-10-07T15:09:50Z) - FGR-Net: Interpretable fundus image gradeability classification based on deep reconstruction learning [4.377496499420086]
This paper presents a novel framework called FGR-Net to automatically assess and interpret underlying fundus image quality.
The FGR-Net model also provides an interpretable quality assessment through visualizations.
The experimental results showed the superiority of FGR-Net over the state-of-the-art quality assessment methods, with an accuracy of 89% and an F1-score of 87%.
arXiv Detail & Related papers (2024-09-16T12:56:23Z) - Diffusion Model Based Visual Compensation Guidance and Visual Difference
Analysis for No-Reference Image Quality Assessment [82.13830107682232]
We propose a novel class of state-of-the-art (SOTA) generative model, which exhibits the capability to model intricate relationships.
We devise a new diffusion restoration network that leverages the produced enhanced image and noise-containing images.
Two visual evaluation branches are designed to comprehensively analyze the obtained high-level feature information.
arXiv Detail & Related papers (2024-02-22T09:39:46Z) - Enhance Eye Disease Detection using Learnable Probabilistic Discrete Latents in Machine Learning Architectures [1.6000489723889526]
Ocular diseases, including diabetic retinopathy and glaucoma, present a significant public health challenge.
Deep learning models have emerged as powerful tools for analysing medical images, such as retina imaging.
Challenges persist in model reliability and uncertainty estimation, which are critical for clinical decision-making.
arXiv Detail & Related papers (2024-01-21T04:14:54Z) - Performance of GAN-based augmentation for deep learning COVID-19 image
classification [57.1795052451257]
The biggest challenge in the application of deep learning to the medical domain is the availability of training data.
Data augmentation is a typical methodology used in machine learning when confronted with a limited data set.
In this work, a StyleGAN2-ADA model of Generative Adversarial Networks is trained on the limited COVID-19 chest X-ray image set.
arXiv Detail & Related papers (2023-04-18T15:39:58Z) - CONVIQT: Contrastive Video Quality Estimator [63.749184706461826]
Perceptual video quality assessment (VQA) is an integral component of many streaming and video sharing platforms.
Here we consider the problem of learning perceptually relevant video quality representations in a self-supervised manner.
Our results indicate that compelling representations with perceptual bearing can be obtained using self-supervised learning.
arXiv Detail & Related papers (2022-06-29T15:22:01Z) - Self-Supervised Vision Transformers Learn Visual Concepts in
Histopathology [5.164102666113966]
We conduct a search for good representations in pathology by training a variety of self-supervised models with validation on a variety of weakly-supervised and patch-level tasks.
Our key finding is in discovering that Vision Transformers using DINO-based knowledge distillation are able to learn data-efficient and interpretable features in histology images.
arXiv Detail & Related papers (2022-03-01T16:14:41Z) - Performance or Trust? Why Not Both. Deep AUC Maximization with
Self-Supervised Learning for COVID-19 Chest X-ray Classifications [72.52228843498193]
In training deep learning models, a compromise often must be made between performance and trust.
In this work, we integrate a new surrogate loss with self-supervised learning for computer-aided screening of COVID-19 patients.
arXiv Detail & Related papers (2021-12-14T21:16:52Z) - Medulloblastoma Tumor Classification using Deep Transfer Learning with
Multi-Scale EfficientNets [63.62764375279861]
We propose an end-to-end MB tumor classification and explore transfer learning with various input sizes and matching network dimensions.
Using a data set with 161 cases, we demonstrate that pre-trained EfficientNets with larger input resolutions lead to significant performance improvements.
arXiv Detail & Related papers (2021-09-10T13:07:11Z) - Self-Supervised Representation Learning for Detection of ACL Tear Injury
in Knee MR Videos [18.54362818156725]
We propose a self-supervised learning approach to learn transferable features from MR video clips by enforcing the model to learn anatomical features.
To the best of our knowledge, none of the supervised learning models performing injury classification task from MR video provide any explanation for the decisions made by the models.
arXiv Detail & Related papers (2020-07-15T15:35:47Z) - Retinopathy of Prematurity Stage Diagnosis Using Object Segmentation and
Convolutional Neural Networks [68.96150598294072]
Retinopathy of Prematurity (ROP) is an eye disorder primarily affecting premature infants with lower weights.
It causes proliferation of vessels in the retina and could result in vision loss and, eventually, retinal detachment, leading to blindness.
In recent years, there has been a significant effort to automate the diagnosis using deep learning.
This paper builds upon the success of previous models and develops a novel architecture, which combines object segmentation and convolutional neural networks (CNN)
Our proposed system first trains an object segmentation model to identify the demarcation line at a pixel level and adds the resulting mask as an additional "color" channel in
arXiv Detail & Related papers (2020-04-03T14:07:41Z)
This list is automatically generated from the titles and abstracts of the papers on this site.