OLIVES Dataset: Ophthalmic Labels for Investigating Visual Eye Semantics
- URL: http://arxiv.org/abs/2209.11195v1
- Date: Thu, 22 Sep 2022 17:36:40 GMT
- Title: OLIVES Dataset: Ophthalmic Labels for Investigating Visual Eye Semantics
- Authors: Mohit Prabhushankar, Kiran Kokilepersaud, Yash-yee Logan, Stephanie
Trejo Corona, Ghassan AlRegib, and Charles Wykoff
- Abstract summary: We introduce the Ophthalmic Labels for Investigating Visual Eye Semantics (OLIVES) dataset.
This is the first OCT and near-IR fundus dataset that includes clinical labels, biomarker labels, disease labels, and time-series patient treatment information.
In total, the dataset covers 96 eyes over a period of at least two years, with each eye treated for an average of 66 weeks and 7 injections.
- Score: 11.343658407664918
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Clinical diagnosis of the eye is performed over multifarious data modalities
including scalar clinical labels, vectorized biomarkers, two-dimensional fundus
images, and three-dimensional Optical Coherence Tomography (OCT) scans.
Clinical practitioners use all available data modalities for diagnosing and
treating eye diseases like Diabetic Retinopathy (DR) or Diabetic Macular Edema
(DME). Enabling usage of machine learning algorithms within the ophthalmic
medical domain requires research into the relationships and interactions
between all relevant data over a treatment period. Existing datasets are
limited in that they neither provide data across all of these modalities nor
explicitly model the relationships between them. In this paper, we introduce
the Ophthalmic Labels for Investigating Visual Eye Semantics (OLIVES) dataset
that addresses the above limitation. This is the first OCT and near-IR fundus
dataset that includes clinical labels, biomarker labels, disease labels, and
time-series patient treatment information from associated clinical trials. The
dataset consists of 1268 near-IR fundus images each with at least 49 OCT scans,
and 16 biomarkers, along with 4 clinical labels and a disease diagnosis of DR
or DME. In total, the dataset covers 96 eyes over a period of at least two
years, with each eye treated for an average of 66 weeks and 7 injections. We
benchmark the utility of the OLIVES dataset for ophthalmic data analysis and
provide benchmarks and concrete research directions for core and emerging
machine learning paradigms within medical image analysis.
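To make the data layout above concrete, below is a minimal sketch of how a single OLIVES-style visit record could be represented. Only the counts (at least 49 OCT scans per fundus image, 16 biomarkers, 4 clinical labels, a DR/DME diagnosis, 96 eyes, roughly 66-week treatment courses) come from the abstract; the class name, field names, and array shapes are assumptions for illustration, not the dataset's actual schema or loader.
```python
# Hypothetical record layout for one OLIVES-style visit.
# Counts come from the abstract; names and array shapes are illustrative assumptions.
from dataclasses import dataclass
import numpy as np

@dataclass
class OlivesVisit:
    eye_id: str             # one of the 96 treated eyes
    week: int               # visit week within the treatment course (avg. 66 weeks)
    fundus: np.ndarray      # one near-IR fundus image (2-D array)
    oct_volume: np.ndarray  # stack of at least 49 OCT B-scans (3-D array)
    biomarkers: np.ndarray  # 16 biomarker labels
    clinical: np.ndarray    # 4 clinical labels
    diagnosis: str          # "DR" or "DME"

def dummy_visit(eye_id: str = "eye_001", week: int = 0) -> OlivesVisit:
    """Build a placeholder visit with zero-filled arrays of assumed shapes."""
    return OlivesVisit(
        eye_id=eye_id,
        week=week,
        fundus=np.zeros((512, 512), dtype=np.float32),          # assumed resolution
        oct_volume=np.zeros((49, 496, 512), dtype=np.float32),  # assumed resolution
        biomarkers=np.zeros(16, dtype=np.int64),
        clinical=np.zeros(4, dtype=np.float32),
        diagnosis="DME",
    )
```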
Related papers
- Clinical Evaluation of Medical Image Synthesis: A Case Study in Wireless Capsule Endoscopy [63.39037092484374]
This study focuses on the clinical evaluation of medical Synthetic Data Generation using Artificial Intelligence (AI) models.
The paper contributes by a) presenting a protocol for the systematic evaluation of synthetic images by medical experts and b) applying it to assess TIDE-II, a novel variational autoencoder-based model for high-resolution WCE image synthesis.
The results show that TIDE-II generates clinically relevant WCE images, helping to address data scarcity and enhance diagnostic tools.
arXiv Detail & Related papers (2024-10-31T19:48:50Z)
- A Labeled Ophthalmic Ultrasound Dataset with Medical Report Generation Based on Cross-modal Deep Learning [8.733721267033705]
We present a labeled ophthalmic dataset for the precise analysis and the automated exploration of medical images along with their associated reports.
It collects three data modalities, namely ultrasound images, blood flow information, and examination reports, from 2,417 patients at an ophthalmology hospital in Shenyang, China.
arXiv Detail & Related papers (2024-07-26T11:03:18Z)
- Eye-gaze Guided Multi-modal Alignment for Medical Representation Learning [65.54680361074882]
The Eye-gaze Guided Multi-modal Alignment (EGMA) framework harnesses eye-gaze data for better alignment of medical visual and textual features.
We conduct downstream tasks of image classification and image-text retrieval on four medical datasets.
arXiv Detail & Related papers (2024-03-19T03:59:14Z)
- OCTDL: Optical Coherence Tomography Dataset for Image-Based Deep Learning Methods [34.13887472397715]
This work presents an open-access OCT dataset (OCTDL) comprising over 2000 OCT images labeled according to disease group and retinal pathology.
The dataset consists of OCT records of patients with Age-related Macular Degeneration (AMD), Diabetic Macular Edema (DME), Epiretinal Membrane (ERM), Retinal Artery Occlusion (RAO), Retinal Vein Occlusion (RVO), and Vitreomacular Interface Disease (VID).
arXiv Detail & Related papers (2023-12-13T16:18:40Z)
- Radiology Report Generation Using Transformers Conditioned with Non-imaging Data [55.17268696112258]
This paper proposes a novel multi-modal transformer network that integrates chest x-ray (CXR) images and associated patient demographic information.
The proposed network uses a convolutional neural network to extract visual features from CXRs and a transformer-based encoder-decoder network that combines the visual features with semantic text embeddings of patient demographic information.
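As a rough illustration of the architectural pattern described above (not the paper's actual network), the sketch below pairs a CNN feature extractor with a transformer encoder-decoder whose encoder input also contains embedded demographic tokens. The use of PyTorch/torchvision, the module names, and the sizes are assumptions; positional encodings and decoder masks are omitted for brevity.
```python
# Sketch: CNN visual features fused with embedded non-imaging (demographic)
# tokens, fed to a transformer encoder-decoder that generates report tokens.
import torch
import torch.nn as nn
import torchvision.models as models

class ImagePlusDemographics(nn.Module):
    def __init__(self, vocab_size: int = 10000, d_model: int = 512):
        super().__init__()
        cnn = models.resnet18(weights=None)
        self.backbone = nn.Sequential(*list(cnn.children())[:-2])  # (B, 512, H', W')
        self.visual_proj = nn.Linear(512, d_model)
        self.demo_embed = nn.Embedding(vocab_size, d_model)   # tokenized demographics
        self.token_embed = nn.Embedding(vocab_size, d_model)  # report tokens
        self.transformer = nn.Transformer(d_model=d_model, batch_first=True)
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, cxr, demo_tokens, report_tokens):
        feats = self.backbone(cxr).flatten(2).transpose(1, 2)     # (B, H'W', 512)
        fused = torch.cat([self.visual_proj(feats),
                           self.demo_embed(demo_tokens)], dim=1)  # encoder input
        tgt = self.token_embed(report_tokens)
        return self.out(self.transformer(fused, tgt))             # (B, T, vocab)
```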
arXiv Detail & Related papers (2023-11-18T14:52:26Z)
- Clinically Labeled Contrastive Learning for OCT Biomarker Classification [12.633032175875865]
This paper presents a novel strategy for contrastive learning of medical images based on labels that can be extracted from clinical data.
We exploit the relationship between clinical measurements and biomarkers by using the clinical data as pseudo-labels for images without biomarker labels.
We show performance improvements by as much as 5% in total biomarker detection AUROC.
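The core idea, using clinical values as pseudo-labels that define positives for a supervised-contrastive-style loss, can be sketched as follows; this is a simplified illustration assuming PyTorch, not the paper's exact loss.
```python
# Simplified supervised-contrastive-style loss where positives are defined
# by clinical pseudo-labels instead of biomarker annotations.
import torch
import torch.nn.functional as F

def clinical_supcon_loss(embeddings: torch.Tensor,
                         clinical_pseudo_labels: torch.Tensor,
                         temperature: float = 0.07) -> torch.Tensor:
    """embeddings: (N, D) projected features; clinical_pseudo_labels: (N,) integer bins."""
    z = F.normalize(embeddings, dim=1)
    sim = z @ z.t() / temperature                              # pairwise similarities
    self_mask = torch.eye(len(z), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, -1e9)                     # exclude self-pairs
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    # Positives: other samples that share the same clinical pseudo-label.
    pos_mask = (clinical_pseudo_labels.unsqueeze(0) ==
                clinical_pseudo_labels.unsqueeze(1)) & ~self_mask
    pos_counts = pos_mask.sum(dim=1).clamp(min=1)
    return (-(log_prob * pos_mask).sum(dim=1) / pos_counts).mean()
```
In this sketch the pseudo-labels might come, for example, from binning a clinical measurement; a sample with no positives in the batch simply contributes zero to the loss.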
arXiv Detail & Related papers (2023-05-24T13:51:48Z)
- Clinical Contrastive Learning for Biomarker Detection [15.510581400494207]
We exploit the relationship between clinical and biomarker data to improve performance for biomarker classification.
This is accomplished by leveraging the larger amount of clinical data as pseudo-labels for our data without biomarker labels.
Our method is shown to outperform state-of-the-art self-supervised methods by as much as 5% in terms of accuracy on individual biomarker detection.
arXiv Detail & Related papers (2022-11-09T18:29:56Z)
- A Benchmark for Studying Diabetic Retinopathy: Segmentation, Grading, and Transferability [76.64661091980531]
People with diabetes are at risk of developing diabetic retinopathy (DR).
Computer-aided DR diagnosis is a promising tool for early detection of DR and severity grading.
This dataset has 1,842 images with pixel-level DR-related lesion annotations, and 1,000 images with image-level labels graded by six board-certified ophthalmologists.
arXiv Detail & Related papers (2020-08-22T07:48:04Z)
- Co-Heterogeneous and Adaptive Segmentation from Multi-Source and Multi-Phase CT Imaging Data: A Study on Pathological Liver and Lesion Segmentation [48.504790189796836]
We present a novel segmentation strategy, co-heterogeneous and adaptive segmentation (CHASe).
We propose a versatile framework that fuses appearance-based semi-supervision, mask-based adversarial domain adaptation, and pseudo-labeling.
CHASe can further improve pathological liver mask Dice-Sorensen coefficients by ranges of $4.2\% \sim 9.4\%$.
arXiv Detail & Related papers (2020-05-27T06:58:39Z)
- Semi-supervised Medical Image Classification with Relation-driven Self-ensembling Model [71.80319052891817]
We present a relation-driven semi-supervised framework for medical image classification.
It exploits unlabeled data by encouraging prediction consistency for a given input under different perturbations.
Our method outperforms many state-of-the-art semi-supervised learning methods on both single-label and multi-label image classification scenarios.
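The consistency idea in this summary reduces to a small sketch like the one below (assuming PyTorch); the paper's full relation-driven model is more involved than this.
```python
# Minimal consistency-regularization sketch for unlabeled images: the model is
# encouraged to predict the same distribution for two perturbed views of the
# same input. Illustrative only.
import torch
import torch.nn.functional as F

def consistency_loss(model, unlabeled_images, augment):
    """Mean-squared difference between softmax predictions on two augmented views."""
    view_a, view_b = augment(unlabeled_images), augment(unlabeled_images)
    with torch.no_grad():                       # one view acts as the target
        target = F.softmax(model(view_a), dim=1)
    pred = F.softmax(model(view_b), dim=1)
    return F.mse_loss(pred, target)
```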
arXiv Detail & Related papers (2020-05-15T06:57:54Z)