Topology and Intersection-Union Constrained Loss Function for Multi-Region Anatomical Segmentation in Ocular Images
- URL: http://arxiv.org/abs/2411.00560v1
- Date: Fri, 01 Nov 2024 13:17:18 GMT
- Title: Topology and Intersection-Union Constrained Loss Function for Multi-Region Anatomical Segmentation in Ocular Images
- Authors: Ruiyu Xia, Jianqiang Li, Xi Xu, Guanghui Fu
- Abstract summary: Ocular Myasthenia Gravis (OMG) is a rare and challenging disease to detect in its early stages.
No publicly available datasets or tools currently exist for this purpose.
We propose a new topology and intersection-union constrained loss function (TIU loss) that improves performance using small training datasets.
- Score: 5.628938375586146
- Abstract: Ocular Myasthenia Gravis (OMG) is a rare and challenging disease to detect in its early stages, but symptoms often first appear in the eye muscles, such as drooping eyelids and double vision. Ocular images can be used for early diagnosis by segmenting different regions, such as the sclera, iris, and pupil, which allows for the calculation of area ratios to support accurate medical assessments. However, no publicly available datasets or tools currently exist for this purpose. To address this, we propose a new topology and intersection-union constrained loss function (TIU loss) that improves performance using small training datasets. We conducted experiments on a public dataset consisting of 55 subjects and 2,197 images. Our proposed method outperformed two widely used loss functions across three deep learning networks, achieving a mean Dice score of 83.12% [82.47%, 83.81%] with a 95% bootstrap confidence interval. In a low-percentage training scenario (10% of the training data), our approach showed an 8.32% improvement in Dice score compared to the baseline. Additionally, we evaluated the method in a clinical setting with 47 subjects and 501 images, achieving a Dice score of 64.44% [63.22%, 65.62%]. We observed some bias when applying the model in clinical settings. These results demonstrate that the proposed method is accurate, and our code and trained model are publicly available.
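The abstract names the TIU loss but does not give its formulation, so the snippet below is only a minimal sketch of one plausible reading, not the authors' implementation. It assumes per-region sigmoid probability maps (sclera, iris, pupil), penalizes overlap between anatomically exclusive regions (the "intersection" part) and incomplete coverage of the annotated eye area (the "union" part), and adds that penalty to a standard soft Dice term; the topology constraint is not reproduced, and the channel layout, `eye_mask` input, and weight `lam` are assumptions.

```python
# Hypothetical sketch of an intersection-union style constraint for ocular
# segmentation; the actual TIU loss in the paper may differ substantially.
import torch


def soft_dice_loss(probs: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Soft Dice loss over per-region probability maps of shape (N, C, H, W)."""
    dims = (0, 2, 3)
    inter = (probs * target).sum(dims)
    denom = probs.sum(dims) + target.sum(dims)
    return 1.0 - ((2.0 * inter + eps) / (denom + eps)).mean()


def intersection_union_penalty(probs: torch.Tensor, eye_mask: torch.Tensor) -> torch.Tensor:
    """probs: (N, 3, H, W) sigmoid maps ordered sclera, iris, pupil; eye_mask: (N, H, W)."""
    sclera, iris, pupil = probs[:, 0], probs[:, 1], probs[:, 2]
    # "Intersection": anatomically exclusive regions should not co-activate.
    overlap = (sclera * iris + sclera * pupil + iris * pupil).mean()
    # "Union": the three regions together should cover the annotated eye area.
    union = torch.clamp(sclera + iris + pupil, max=1.0)
    coverage = torch.abs(union - eye_mask).mean()
    return overlap + coverage


def tiu_style_loss(logits: torch.Tensor, target: torch.Tensor,
                   eye_mask: torch.Tensor, lam: float = 0.1) -> torch.Tensor:
    """Dice term plus the hypothetical intersection-union penalty (weight lam is a guess)."""
    probs = torch.sigmoid(logits)  # assumes one sigmoid channel per region (multi-label setup)
    return soft_dice_loss(probs, target) + lam * intersection_union_penalty(probs, eye_mask)
```

At inference time, the thresholded masks could then feed the kind of area-ratio measurement the abstract mentions, for example a pupil-to-iris area ratio computed per image to support the OMG assessment.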
Related papers
- Detection of Intracranial Hemorrhage for Trauma Patients [1.0074894923170512]
We propose a novel Voxel-Complete IoU (VC-IoU) loss that encourages the network to learn the 3D aspect ratios of bounding boxes.
We extensively experiment on brain bleeding detection using a publicly available dataset, and validate it on a private cohort.
arXiv Detail & Related papers (2024-08-20T12:03:59Z) - A Federated Learning Framework for Stenosis Detection [70.27581181445329]
This study explores the use of Federated Learning (FL) for stenosis detection in coronary angiography (CA) images.
Two heterogeneous datasets from two institutions were considered: dataset 1 includes 1219 images from 200 patients, which we acquired at the Ospedale Riuniti of Ancona (Italy); dataset 2 includes 7492 sequential images from 90 patients from a previous study available in the literature.
arXiv Detail & Related papers (2023-10-30T11:13:40Z) - TRUSTED: The Paired 3D Transabdominal Ultrasound and CT Human Data for Kidney Segmentation and Registration Research [42.90853857929316]
Inter-modal image registration (IMIR) and image segmentation with abdominal Ultrasound (US) data have many important clinical applications.
We propose TRUSTED (the Tridimensional Ultra Sound TomodEnsitometrie dataset), comprising paired transabdominal 3DUS and CT kidney images from 48 human patients.
arXiv Detail & Related papers (2023-10-19T11:09:50Z) - LVM-Med: Learning Large-Scale Self-Supervised Vision Models for Medical Imaging via Second-order Graph Matching [59.01894976615714]
We introduce LVM-Med, the first family of deep networks trained on large-scale medical datasets.
We have collected approximately 1.3 million medical images from 55 publicly available datasets.
LVM-Med empirically outperforms a number of state-of-the-art supervised, self-supervised, and foundation models.
arXiv Detail & Related papers (2023-06-20T22:21:34Z) - ACAT: Adversarial Counterfactual Attention for Classification and Detection in Medical Imaging [41.202147558260336]
We propose a framework that employs saliency maps to obtain soft spatial attention masks that modulate the image features at different scales.
ACAT increases the baseline classification accuracy of lesions in brain CT scans from 71.39% to 72.55% and of COVID-19 related findings in lung CT scans from 67.71% to 70.84%.
arXiv Detail & Related papers (2023-03-27T17:43:57Z) - Acute ischemic stroke lesion segmentation in non-contrast CT images using 3D convolutional neural networks [0.0]
We propose an automatic algorithm aimed at volumetric segmentation of acute ischemic stroke lesion in non-contrast computed tomography brain 3D images.
Our deep-learning approach is based on the popular 3D U-Net convolutional neural network architecture.
arXiv Detail & Related papers (2023-01-17T10:39:39Z) - Mine yOur owN Anatomy: Revisiting Medical Image Segmentation with Extremely Limited Labels [54.58539616385138]
We introduce a novel semi-supervised 2D medical image segmentation framework termed Mine yOur owN Anatomy (MONA).
First, prior work argues that every pixel matters equally to model training; we observe empirically that this alone is unlikely to define meaningful anatomical features.
Second, we construct a set of objectives that encourage the model to be capable of decomposing medical images into a collection of anatomical features.
arXiv Detail & Related papers (2022-09-27T15:50:31Z) - Osteoporosis Prescreening using Panoramic Radiographs through a Deep Convolutional Neural Network with Attention Mechanism [65.70943212672023]
A deep convolutional neural network (CNN) with an attention module can detect osteoporosis on panoramic radiographs.
A dataset of 70 panoramic radiographs (PRs) from 70 different subjects aged between 49 and 60 was used.
arXiv Detail & Related papers (2021-10-19T00:03:57Z) - Systematic Clinical Evaluation of A Deep Learning Method for Medical Image Segmentation: Radiosurgery Application [48.89674088331313]
We systematically evaluate a Deep Learning (DL) method in a 3D medical image segmentation task.
Our method is integrated into the radiosurgery treatment process and directly impacts the clinical workflow.
arXiv Detail & Related papers (2021-08-21T16:15:40Z) - A Deep Learning-Based Approach to Extracting Periosteal and Endosteal Contours of Proximal Femur in Quantitative CT Images [25.76523855274612]
A three-dimensional (3D) end-to-end fully convolutional neural network was developed for our segmentation task.
Two models with the same network structure were trained and achieved a Dice similarity coefficient (DSC) of 97.87% and 96.49% for the periosteal and endosteal contours, respectively.
The approach demonstrated strong potential for clinical use, including hip fracture risk prediction and finite element analysis.
arXiv Detail & Related papers (2021-02-03T10:23:41Z) - Chest x-ray automated triage: a semiologic approach designed for clinical implementation, exploiting different types of labels through a combination of four Deep Learning architectures [83.48996461770017]
This work presents a Deep Learning method based on the late fusion of different convolutional architectures.
We built four training datasets combining images from public chest x-ray datasets and our institutional archive.
We trained four different Deep Learning architectures and combined their outputs with a late fusion strategy, obtaining a unified tool.
arXiv Detail & Related papers (2020-12-23T14:38:35Z)