OREHAS: A fully automated deep-learning pipeline for volumetric endolymphatic hydrops quantification in MRI
- URL: http://arxiv.org/abs/2601.18368v2
- Date: Thu, 29 Jan 2026 07:46:53 GMT
- Title: OREHAS: A fully automated deep-learning pipeline for volumetric endolymphatic hydrops quantification in MRI
- Authors: Caterina Fuster-Barceló, Claudia Castrillón, Laura Rodrigo-Muñoz, Victor Manuel Suárez-Vega, Nicolás Pérez-Fernández, Gorka Bastarrika, Arrate Muñoz-Barrutia
- Abstract summary: OREHAS is the first fully automatic pipeline for volumetric quantification of endolymphatic hydrops. It computes per-ear endolymphatic-to-vestibular volume ratios directly from whole MRI volumes.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present OREHAS (Optimized Recognition & Evaluation of volumetric Hydrops in the Auditory System), the first fully automatic pipeline for volumetric quantification of endolymphatic hydrops (EH) from routine 3D-SPACE-MRC and 3D-REAL-IR MRI. The system integrates three components -- slice classification, inner ear localization, and sequence-specific segmentation -- into a single workflow that computes per-ear endolymphatic-to-vestibular volume ratios (ELR) directly from whole MRI volumes, eliminating the need for manual intervention. Trained with only 3 to 6 annotated slices per patient, OREHAS generalized effectively to full 3D volumes, achieving Dice scores of 0.90 for SPACE-MRC and 0.75 for REAL-IR. In an external validation cohort with complete manual annotations, OREHAS closely matched expert ground truth (VSI = 74.3%) and substantially outperformed the clinical syngo.via software (VSI = 42.5%), which tended to overestimate endolymphatic volumes. Across 19 test patients, vestibular measurements from OREHAS were consistent with syngo.via, while endolymphatic volumes were systematically smaller and more physiologically realistic. These results show that reliable and reproducible EH quantification can be achieved from standard MRI using limited supervision. By combining efficient deep-learning-based segmentation with a clinically aligned volumetric workflow, OREHAS reduces operator dependence, ensures methodological consistency, and remains compatible with established imaging protocols. The approach provides a robust foundation for large-scale studies and for recalibrating clinical diagnostic thresholds based on accurate volumetric measurements of the inner ear.
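The abstract's central output, the per-ear endolymphatic-to-vestibular volume ratio (ELR), reduces to a straightforward voxel count once the two segmentation masks are available. A minimal sketch of that final step, assuming binary masks on a shared voxel grid with known spacing (this is an illustration, not the actual OREHAS implementation; the function name and default spacing are invented for the example):

```python
# Illustrative ELR computation from two binary segmentation masks.
# Assumes both masks live on the same voxel grid; spacing is in mm.
import numpy as np

def elr_percent(endolymph_mask, vestibule_mask, voxel_spacing_mm=(0.5, 0.5, 0.5)):
    """Endolymphatic-to-vestibular volume ratio, in percent."""
    voxel_vol = float(np.prod(voxel_spacing_mm))               # mm^3 per voxel
    endo_vol = endolymph_mask.astype(bool).sum() * voxel_vol   # endolymph volume
    vest_vol = vestibule_mask.astype(bool).sum() * voxel_vol   # vestibular volume
    if vest_vol == 0:
        raise ValueError("empty vestibular mask")
    return 100.0 * endo_vol / vest_vol

# Toy volumes: 16 endolymph voxels inside a 64-voxel vestibule.
vestibule = np.ones((4, 4, 4), dtype=bool)
endolymph = np.zeros((4, 4, 4), dtype=bool)
endolymph[0] = True
print(elr_percent(endolymph, vestibule))  # → 25.0
```

Note that the voxel volume cancels in the ratio itself; it only matters when reporting absolute endolymphatic or vestibular volumes, as the paper does when comparing against syngo.via.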
Related papers
- OmniCT: Towards a Unified Slice-Volume LVLM for Comprehensive CT Analysis [53.01523944168442]
Clinical interpretation relies on both slice-driven local features and volume-driven spatial representations. Existing Large Vision-Language Models (LVLMs) remain fragmented in CT slice versus volumetric understanding. We present OmniCT, a powerful unified slice-volume LVLM for CT scenarios.
arXiv Detail & Related papers (2026-02-18T00:42:41Z) - Segmentation of Ischemic Stroke Lesions using Transfer Learning on Multi-sequence MRI [0.0]
We present a novel framework for automatically segmenting ischemic stroke lesions on various MRI sequences. The proposed methodology is validated on the ISLES 2015 Brain Stroke sequence dataset. Our efforts culminated in achieving a Dice score of 80.5% and an accuracy of 74.03%, showcasing the efficacy of our segmentation approach.
arXiv Detail & Related papers (2025-11-10T16:27:25Z) - Automatic segmentation of colorectal liver metastases for ultrasound-based navigated resection [0.0]
Automated segmentation could enhance precision and efficiency in ultrasound-based navigation. Eighty-five tracked 3D iUS volumes from 85 CRLM patients were used to train and evaluate a 3D U-Net. Results: The cropped-volume model significantly outperformed the full-volume model across all metrics.
arXiv Detail & Related papers (2025-11-07T14:13:31Z) - LGE-Guided Cross-Modality Contrastive Learning for Gadolinium-Free Cardiomyopathy Screening in Cine CMR [51.11296719862485]
We propose a Contrastive Learning and Cross-Modal alignment framework for gadolinium-free cardiomyopathy screening using cine CMR sequences. By aligning the latent spaces of cine CMR and Late Gadolinium Enhancement (LGE) sequences, our model encodes fibrosis-specific pathology into cine CMR embeddings.
arXiv Detail & Related papers (2025-08-23T07:21:23Z) - Automated Measurement of Optic Nerve Sheath Diameter Using Ocular Ultrasound Video [14.016658180958444]
This paper presents a novel method to automatically identify the optimal frame from video sequences for ONSD measurement. The proposed method achieved a mean error, mean squared deviation, and intraclass correlation coefficient (ICC) of 0.04, 0.054, and 0.782, respectively.
arXiv Detail & Related papers (2025-06-03T12:14:51Z) - Segmenting Bi-Atrial Structures Using ResNext Based Framework [3.0838948803252904]
We propose TASSNet, a novel two-stage deep learning framework for fully automated segmentation of both the left atrium (LA) and right atrium (RA). TASSNet introduces two main innovations: (i) a ResNeXt-based encoder to enhance feature extraction from limited medical datasets, and (ii) a cyclical learning rate schedule to address convergence instability in highly imbalanced, small-batch 3D segmentation tasks.
arXiv Detail & Related papers (2025-02-28T10:23:12Z) - Epicardium Prompt-guided Real-time Cardiac Ultrasound Frame-to-volume Registration [50.602074919305636]
This paper introduces a lightweight end-to-end Cardiac Ultrasound frame-to-volume Registration network, termed CU-Reg. We use epicardium prompt-guided anatomical clues to reinforce the interaction of 2D sparse and 3D dense features, followed by a voxel-wise local-global aggregation of enhanced features.
arXiv Detail & Related papers (2024-06-20T17:47:30Z) - Continuous max-flow augmentation of self-supervised few-shot learning on SPECT left ventricles [0.0]
This paper aims to give a recipe for diagnostic centers as well as for clinics to automatically segment the myocardium based on small and low-quality labels on reconstructed SPECT.
A combination of Continuous Max-Flow (CMF) with prior shape information is developed to augment the 3D U-Net self-supervised learning (SSL) approach on various geometries of SPECT apparatus.
arXiv Detail & Related papers (2024-05-09T03:19:19Z) - CNN-based fully automatic wrist cartilage volume quantification in MR Image [55.41644538483948]
The U-net convolutional neural network with additional attention layers provides the best wrist cartilage segmentation performance.
The error of cartilage volume measurement should be assessed independently using a non-MRI method.
arXiv Detail & Related papers (2022-06-22T14:19:06Z) - CovidDeep: SARS-CoV-2/COVID-19 Test Based on Wearable Medical Sensors and Efficient Neural Networks [51.589769497681175]
The novel coronavirus (SARS-CoV-2) has led to a pandemic.
The current testing regime based on Reverse Transcription-Polymerase Chain Reaction for SARS-CoV-2 has been unable to keep up with testing demands.
We propose a framework called CovidDeep that combines efficient DNNs with commercially available WMSs for pervasive testing of the virus.
arXiv Detail & Related papers (2020-07-20T21:47:28Z) - A Global Benchmark of Algorithms for Segmenting Late Gadolinium-Enhanced Cardiac Magnetic Resonance Imaging [90.29017019187282]
The "2018 Left Atrium Challenge" used 154 3D LGE-MRIs, currently the world's largest cardiac LGE-MRI dataset.
Analysis of the submitted algorithms was performed using technical and biological metrics.
Results show the top method achieved a Dice score of 93.2% and a mean surface-to-surface distance of 0.7 mm.
arXiv Detail & Related papers (2020-04-26T08:49:17Z)
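Several entries above report Dice scores as their headline segmentation metric (0.90/0.75 for OREHAS, 80.5% for the stroke framework, 93.2% for the Left Atrium Challenge). For reference, a minimal sketch of the Dice coefficient on binary masks; this is a generic illustration, not code from any of the listed papers:

```python
# Dice coefficient between two binary segmentation masks:
# 2 * |A ∩ B| / (|A| + |B|), ranging from 0 (no overlap) to 1 (identical).
import numpy as np

def dice(a, b):
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

pred  = np.array([1, 1, 0, 0], dtype=bool)
truth = np.array([0, 1, 1, 0], dtype=bool)
print(dice(pred, truth))  # → 0.5
```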
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.