Predicting Visual Improvement after Macular Hole Surgery: a Cautionary
Tale on Deep Learning with Very Limited Data
- URL: http://arxiv.org/abs/2109.09463v1
- Date: Mon, 20 Sep 2021 12:23:04 GMT
- Title: Predicting Visual Improvement after Macular Hole Surgery: a Cautionary
Tale on Deep Learning with Very Limited Data
- Authors: M. Godbout, A. Lachance, F. Antaki, A. Dirani, A. Durand
- Abstract summary: We investigate the potential of machine learning models for the prediction of visual improvement after macular hole surgery from preoperative data.
We end up with only 121 total samples, putting our work in the very limited data regime.
We find that all tested deep vision models are outperformed by a simple regression model on the clinical features.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We investigate the potential of machine learning models for the prediction of
visual improvement after macular hole surgery from preoperative data (retinal
images and clinical features). Collecting our own data for the task, we end up
with only 121 total samples, putting our work in the very limited data regime.
We explore a variety of deep learning methods for limited data to train deep
computer vision models, finding that all tested deep vision models are
outperformed by a simple regression model on the clinical features. We believe
this is compelling evidence of the extreme difficulty of using deep learning on
very limited data.
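As a rough illustration of the kind of simple baseline the abstract refers to, the sketch below fits a one-feature linear regression and evaluates it with leave-one-out cross-validation on a synthetic dataset of the same size (121 samples). The feature, targets, and noise level are invented for illustration only; the paper's actual clinical features and evaluation protocol may differ.

```python
import random

def fit_linear(X, y):
    # Ordinary least squares for one feature plus intercept,
    # solved in closed form: slope = cov(x, y) / var(x).
    n = len(X)
    mx = sum(X) / n
    my = sum(y) / n
    var = sum((x - mx) ** 2 for x in X)
    cov = sum((x - mx) * (yi - my) for x, yi in zip(X, y))
    slope = cov / var
    return slope, my - slope * mx

def loo_mse(X, y):
    # Leave-one-out cross-validation: with ~121 samples, every point
    # matters, so LOO avoids sacrificing data to a fixed holdout split.
    errs = []
    for i in range(len(X)):
        Xtr, ytr = X[:i] + X[i + 1:], y[:i] + y[i + 1:]
        slope, intercept = fit_linear(Xtr, ytr)
        errs.append((slope * X[i] + intercept - y[i]) ** 2)
    return sum(errs) / len(errs)

# Synthetic stand-in for the 121-sample dataset: one hypothetical
# preoperative clinical feature (e.g. baseline visual acuity) noisily
# related to postoperative visual improvement.
random.seed(0)
X = [random.uniform(0.1, 1.0) for _ in range(121)]
y = [0.5 * x + random.gauss(0, 0.1) for x in X]
error = loo_mse(X, y)
```

A model this small has almost no capacity to overfit, which is one plausible reason such baselines can beat deep vision models in the very limited data regime.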
Related papers
- Improving Deep Learning-based Automatic Cranial Defect Reconstruction by Heavy Data Augmentation: From Image Registration to Latent Diffusion Models [0.2911706166691895]
This work is a considerable contribution to artificial intelligence for the automatic modeling of personalized cranial implants.
We show that the use of heavy data augmentation significantly increases both the quantitative and qualitative outcomes.
We also show that the synthetically augmented network successfully reconstructs real clinical defects.
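Heavy geometric augmentation of the general kind described can be sketched with stacked random flips and rotations. The code below is a toy stand-in (2x2 "images", stdlib only), not the paper's registration- or diffusion-based pipeline.

```python
import random

def hflip(img):
    # Mirror each row left-to-right.
    return [row[::-1] for row in img]

def vflip(img):
    # Reverse the order of the rows.
    return img[::-1]

def rot90(img):
    # Rotate 90 degrees clockwise.
    return [list(row) for row in zip(*img[::-1])]

def augment(img, rng):
    # "Heavy" augmentation: compose several random geometric transforms
    # so each training pass sees a different view of the same sample.
    if rng.random() < 0.5:
        img = hflip(img)
    if rng.random() < 0.5:
        img = vflip(img)
    for _ in range(rng.randrange(4)):
        img = rot90(img)
    return img

rng = random.Random(0)
img = [[1, 2], [3, 4]]
views = [augment(img, rng) for _ in range(8)]  # 8 augmented views
```

The point of such a pipeline is that every view preserves the sample's content while multiplying the effective dataset size.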
arXiv Detail & Related papers (2024-06-10T15:34:23Z)
- Meta-Transfer Derm-Diagnosis: Exploring Few-Shot Learning and Transfer Learning for Skin Disease Classification in Long-Tail Distribution [1.8024397171920885]
This study conducts a detailed examination of the benefits and drawbacks of episodic and conventional training methodologies.
With minimal labeled examples, our models showed substantial information gains and better performance compared to previously trained models.
Our experiments, ranging from 2-way to 5-way classifications with up to 10 examples, showed a growing success rate for traditional transfer learning methods.
arXiv Detail & Related papers (2024-04-25T17:56:45Z)
- Data-efficient Large Vision Models through Sequential Autoregression [58.26179273091461]
We develop an efficient, autoregression-based vision model on a limited dataset.
We demonstrate how this model achieves proficiency in a spectrum of visual tasks spanning both high-level and low-level semantic understanding.
Our empirical evaluations underscore the model's agility in adapting to various tasks, heralding a significant reduction in the parameter footprint.
arXiv Detail & Related papers (2024-02-07T13:41:53Z)
- Self-Supervised Pre-Training with Contrastive and Masked Autoencoder Methods for Dealing with Small Datasets in Deep Learning for Medical Imaging [8.34398674359296]
Deep learning in medical imaging has the potential to minimize the risk of diagnostic errors, reduce radiologist workload, and accelerate diagnosis.
Training such deep learning models requires large and accurate datasets, with annotations for all training samples.
To address this challenge, deep learning models can be pre-trained on large image datasets without annotations using methods from the field of self-supervised learning.
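One common self-supervised objective for contrastive pretraining is the InfoNCE loss, which pulls an anchor toward a positive view and pushes it away from negatives. The sketch below computes it for hand-made toy embeddings; it is a generic illustration, not the specific method of this paper.

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    return dot(a, b) / (math.sqrt(dot(a, a)) * math.sqrt(dot(b, b)))

def info_nce(anchor, positive, negatives, temperature=0.1):
    # InfoNCE: -log( exp(sim(a,p)/t) / sum_k exp(sim(a,k)/t) ),
    # where k ranges over the positive and all negatives.
    logits = [cosine(anchor, positive) / temperature]
    logits += [cosine(anchor, n) / temperature for n in negatives]
    m = max(logits)  # subtract max for numerical stability
    denom = sum(math.exp(l - m) for l in logits)
    return -(logits[0] - m - math.log(denom))
```

During pretraining, positives are typically two augmented views of the same unlabeled image, so no annotations are needed.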
arXiv Detail & Related papers (2023-08-12T11:31:01Z)
- LVM-Med: Learning Large-Scale Self-Supervised Vision Models for Medical Imaging via Second-order Graph Matching [59.01894976615714]
We introduce LVM-Med, the first family of deep networks trained on large-scale medical datasets.
We have collected approximately 1.3 million medical images from 55 publicly available datasets.
LVM-Med empirically outperforms a number of state-of-the-art supervised, self-supervised, and foundation models.
arXiv Detail & Related papers (2023-06-20T22:21:34Z)
- Understanding the Tricks of Deep Learning in Medical Image Segmentation: Challenges and Future Directions [66.40971096248946]
In this paper, we collect a series of MedISeg tricks for different model implementation phases.
We experimentally explore the effectiveness of these tricks on consistent baselines.
We also open-source a strong MedISeg repository in which each component is plug-and-play.
arXiv Detail & Related papers (2022-09-21T12:30:05Z)
- Improved skin lesion recognition by a Self-Supervised Curricular Deep Learning approach [0.0]
State-of-the-art deep learning approaches for skin lesion recognition often require pretraining on larger and more varied datasets.
ImageNet is often used as the pretraining dataset, but its transferring potential is hindered by the domain gap between the source dataset and the target dermatoscopic scenario.
In this work, we introduce a novel pretraining approach that sequentially trains a series of Self-Supervised Learning pretext tasks.
arXiv Detail & Related papers (2021-12-22T17:45:47Z)
- Learning Predictive and Interpretable Timeseries Summaries from ICU Data [33.787187660310444]
We propose a new procedure to learn summaries of clinical time-series that are both predictive and easily understood by humans.
Our learned summaries outperform traditional interpretable model classes and achieve performance comparable to state-of-the-art deep learning models on an in-hospital mortality classification task.
arXiv Detail & Related papers (2021-09-22T21:14:05Z)
- On the Robustness of Pretraining and Self-Supervision for a Deep Learning-based Analysis of Diabetic Retinopathy [70.71457102672545]
We compare the impact of different training procedures for diabetic retinopathy grading.
We investigate different aspects such as quantitative performance, statistics of the learned feature representations, interpretability and robustness to image distortions.
Our results indicate that ImageNet-pretrained models show significant gains in performance, generalization, and robustness to image distortions.
arXiv Detail & Related papers (2021-06-25T08:32:45Z)
- Select-ProtoNet: Learning to Select for Few-Shot Disease Subtype Prediction [55.94378672172967]
We focus on few-shot disease subtype prediction problem, identifying subgroups of similar patients.
We introduce meta learning techniques to develop a new model, which can extract the common experience or knowledge from interrelated clinical tasks.
Our new model is built upon a carefully designed meta-learner, the Prototypical Network, a simple yet effective meta-learning method for few-shot image classification.
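The core of a Prototypical Network can be sketched in a few lines: average each class's support embeddings into a prototype, then assign a query to the nearest prototype. The 2-D embeddings and class names below are hand-made toys; in practice the embeddings come from a learned encoder network.

```python
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def prototypes(support):
    # support: {class_label: [embedding, ...]} -- the few labeled
    # examples per class ("shots"). Prototype = mean embedding.
    protos = {}
    for label, embs in support.items():
        dim = len(embs[0])
        protos[label] = [sum(e[i] for e in embs) / len(embs) for i in range(dim)]
    return protos

def classify(query, protos):
    # Assign the query to the class with the nearest prototype.
    return min(protos, key=lambda label: euclidean(query, protos[label]))

# Toy 2-way 2-shot episode.
support = {"subtype_a": [[0.0, 0.0], [0.2, 0.1]],
           "subtype_b": [[1.0, 1.0], [0.9, 1.1]]}
protos = prototypes(support)
print(classify([0.1, 0.2], protos))  # prints "subtype_a"
```

Because classification reduces to a nearest-mean rule in embedding space, the approach needs only a handful of labeled examples per class at test time.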
arXiv Detail & Related papers (2020-09-02T02:50:30Z)
- Self-Training with Improved Regularization for Sample-Efficient Chest X-Ray Classification [80.00316465793702]
We present a deep learning framework that enables robust modeling in challenging scenarios.
Our results show that, using 85% less labeled data, we can build predictive models that match the performance of classifiers trained in a large-scale data setting.
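Self-training in general proceeds by pseudo-labeling confident unlabeled examples and retraining on the enlarged set. The sketch below uses a trivial one-feature midpoint classifier as a stand-in for the deep model, with an invented sigmoid confidence rule; it illustrates the loop, not this paper's regularization scheme.

```python
import math

def fit(labeled):
    # Midpoint classifier on one feature: predict 1 if x > boundary.
    xs0 = [x for x, y in labeled if y == 0]
    xs1 = [x for x, y in labeled if y == 1]
    return (sum(xs0) / len(xs0) + sum(xs1) / len(xs1)) / 2

def confidence(x, boundary, scale=1.0):
    # Map distance from the decision boundary to (0.5, 1) via a sigmoid.
    return 1 / (1 + math.exp(-abs(x - boundary) / scale))

def self_train(labeled, unlabeled, threshold=0.9, rounds=3):
    labeled, unlabeled = list(labeled), list(unlabeled)
    for _ in range(rounds):
        b = fit(labeled)
        keep = []
        for x in unlabeled:
            if confidence(x, b) >= threshold:
                labeled.append((x, int(x > b)))  # accept pseudo-label
            else:
                keep.append(x)  # too uncertain; leave unlabeled
        unlabeled = keep
    return fit(labeled), labeled
```

Points near the boundary stay unlabeled, which is the usual safeguard against reinforcing the model's own mistakes.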
arXiv Detail & Related papers (2020-05-03T02:36:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.