Data-Driven Prediction of Embryo Implantation Probability Using IVF
Time-lapse Imaging
- URL: http://arxiv.org/abs/2006.01035v2
- Date: Tue, 2 Jun 2020 14:02:44 GMT
- Title: Data-Driven Prediction of Embryo Implantation Probability Using IVF
Time-lapse Imaging
- Authors: David H. Silver, Martin Feder, Yael Gold-Zamir, Avital L. Polsky,
Shahar Rosentraub, Efrat Shachor, Adi Weinberger, Pavlo Mazur, Valery D.
Zukin, Alex M. Bronstein
- Abstract summary: We describe a novel data-driven system trained to directly predict embryo implantation probability from embryogenesis time-lapse imaging videos.
Using retrospectively collected videos from 272 embryos, we demonstrate that, when compared to an external panel of embryologists, our algorithm results in a 12% increase of positive predictive value and a 29% increase of negative predictive value.
- Score: 4.823616680520791
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The process of fertilizing a human egg outside the body in order to help
those suffering from infertility to conceive is known as in vitro fertilization
(IVF). Despite being the most effective method of assisted reproductive
technology (ART), the average success rate of IVF is a mere 20-40%. One step
that is critical to the success of the procedure is selecting which embryo to
transfer to the patient, a process typically conducted manually and without any
universally accepted and standardized criteria. In this paper we describe a
novel data-driven system trained to directly predict embryo implantation
probability from embryogenesis time-lapse imaging videos. Using retrospectively
collected videos from 272 embryos, we demonstrate that, when compared to an
external panel of embryologists, our algorithm results in a 12% increase of
positive predictive value and a 29% increase of negative predictive value.
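The reported gains are expressed as positive and negative predictive value. A minimal sketch (not the authors' code) of how these two metrics are computed from binary implantation predictions and known outcomes, using hypothetical arrays:

```python
import numpy as np

# Hypothetical outcomes: 1 = embryo implanted, 0 = did not implant.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
# Hypothetical model decisions after thresholding the predicted probability.
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])

tp = np.sum((y_pred == 1) & (y_true == 1))  # true positives
fp = np.sum((y_pred == 1) & (y_true == 0))  # false positives
tn = np.sum((y_pred == 0) & (y_true == 0))  # true negatives
fn = np.sum((y_pred == 0) & (y_true == 1))  # false negatives

ppv = tp / (tp + fp)  # positive predictive value (precision)
npv = tn / (tn + fn)  # negative predictive value

print(f"PPV = {ppv:.2f}, NPV = {npv:.2f}")
```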
Related papers
- Multimodal Learning for Embryo Viability Prediction in Clinical IVF [24.257300904706902]
In clinical In-Vitro Fertilization (IVF), identifying the most viable embryo for transfer is important for increasing the likelihood of a successful pregnancy.
Traditionally, this process involves embryologists manually assessing embryos' static morphological features at specific intervals using light microscopy.
This manual evaluation is not only time-intensive and costly, due to the need for expert analysis, but also inherently subjective, leading to variability in the selection process.
arXiv Detail & Related papers (2024-10-21T01:58:26Z)
- Integrating Deep Learning with Fundus and Optical Coherence Tomography for Cardiovascular Disease Prediction [47.7045293755736]
Early identification of patients at risk of cardiovascular diseases (CVD) is crucial for effective preventive care, reducing healthcare burden, and improving patients' quality of life.
This study demonstrates the potential of retinal optical coherence tomography (OCT) imaging combined with fundus photographs for identifying future adverse cardiac events.
We propose a novel binary classification network based on a Multi-channel Variational Autoencoder (MCVAE), which learns a latent embedding of patients' fundus and OCT images to classify individuals into two groups: those likely to develop CVD in the future and those who are not.
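A minimal sketch, assuming a PyTorch-style setup, of the general idea of encoding two imaging modalities with per-channel variational encoders and classifying from the joint latent embedding; the layer sizes, names, and omission of the reconstruction/KL losses are all illustrative simplifications, not the paper's MCVAE architecture:

```python
import torch
import torch.nn as nn

class TinyMCVAEClassifier(nn.Module):
    """Illustrative sketch: per-modality variational encoders feeding a binary head."""
    def __init__(self, latent_dim: int = 16):
        super().__init__()
        self.fundus_enc = nn.Linear(64 * 64, 2 * latent_dim)  # outputs (mu, logvar)
        self.oct_enc = nn.Linear(64 * 64, 2 * latent_dim)
        self.head = nn.Linear(2 * latent_dim, 2)               # CVD risk vs. no risk

    def encode(self, x, enc):
        mu, logvar = enc(x.flatten(1)).chunk(2, dim=1)
        return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization

    def forward(self, fundus, oct_img):
        z = torch.cat([self.encode(fundus, self.fundus_enc),
                       self.encode(oct_img, self.oct_enc)], dim=1)
        return self.head(z)

logits = TinyMCVAEClassifier()(torch.randn(4, 1, 64, 64), torch.randn(4, 1, 64, 64))
```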
arXiv Detail & Related papers (2024-10-18T12:37:51Z)
- Localizing Scan Targets from Human Pose for Autonomous Lung Ultrasound Imaging [61.60067283680348]
With the advent of the COVID-19 global pandemic, there is a need to fully automate ultrasound imaging.
We propose a vision-based, data-driven method that incorporates learning-based computer vision techniques.
Our method attains an accuracy level of 15.52 (9.47) mm for probe positioning and 4.32 (3.69) deg for probe orientation, with a success rate above 80% under an error threshold of 25 mm for all scan targets.
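A minimal sketch of how a success rate under a positional error threshold (such as the 25 mm figure quoted above) can be computed from per-target errors; the error values are hypothetical:

```python
import numpy as np

# Hypothetical probe-positioning errors in millimetres, one per scan target.
errors_mm = np.array([12.1, 30.5, 8.7, 22.4, 19.0, 27.8, 14.3])

threshold_mm = 25.0
success_rate = np.mean(errors_mm <= threshold_mm)  # fraction of targets within threshold
print(f"Success rate: {success_rate:.0%}")
```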
arXiv Detail & Related papers (2022-12-15T14:34:12Z)
- Corneal endothelium assessment in specular microscopy images with Fuchs' dystrophy via deep regression of signed distance maps [48.498376125522114]
This paper proposes a UNet-based segmentation approach that requires minimal post-processing.
It achieves reliable CE morphometric assessment and guttae identification across all degrees of Fuchs' dystrophy.
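A minimal sketch, not tied to the paper's pipeline, of how a regressed signed distance map can be turned into a binary segmentation with little post-processing (assuming the convention that negative values lie inside cell regions):

```python
import numpy as np

# Hypothetical predicted signed distance map (negative inside cells, by assumption).
sdm = np.random.randn(256, 256)

# Thresholding at zero recovers a binary mask directly from the regression output.
mask = (sdm < 0).astype(np.uint8)

# Simple morphometric summary: fraction of the field covered by segmented cells.
coverage = mask.mean()
print(f"Segmented area fraction: {coverage:.2f}")
```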
arXiv Detail & Related papers (2022-10-13T15:34:20Z)
- A Knowledge-Based Decision Support System for In Vitro Fertilization Treatment [21.593716703698256]
We propose a knowledge-based decision support system that can provide medical advice on the treatment protocol and medication adjustment for each patient visit during the IVF treatment cycle.
Our system is efficient in data processing and lightweight, so it can be easily embedded into electronic medical record systems.
arXiv Detail & Related papers (2022-01-27T20:30:52Z)
- Open-Set Recognition of Breast Cancer Treatments [91.3247063132127]
Open-set recognition generalizes a classification task by classifying test samples as one of the known classes from training or as "unknown".
We apply an existing Gaussian mixture variational autoencoder model, which achieves state-of-the-art results on image datasets, to breast cancer patient data.
Not only do we obtain more accurate and robust classification results, with a 24.5% average F1 increase compared to a recent method, but we also reexamine open-set recognition in terms of deployability to a clinical setting.
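A minimal sketch of the generic open-set decision rule described above: a sample is assigned to a known class only when the model is confident enough, otherwise it is labelled "unknown". The confidence thresholding used here is illustrative and is not the paper's Gaussian-mixture criterion:

```python
import numpy as np

def open_set_predict(probs: np.ndarray, classes: list, threshold: float = 0.7):
    """Return a known class when the max probability clears the threshold, else 'unknown'."""
    labels = []
    for p in probs:
        labels.append(classes[int(np.argmax(p))] if p.max() >= threshold else "unknown")
    return labels

# Hypothetical class probabilities for three test samples over two known treatments.
probs = np.array([[0.9, 0.1], [0.55, 0.45], [0.2, 0.8]])
print(open_set_predict(probs, ["treatment_A", "treatment_B"]))
```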
arXiv Detail & Related papers (2022-01-09T04:35:55Z)
- Developmental Stage Classification of Embryos Using Two-Stream Neural Network with Linear-Chain Conditional Random Field [74.53314729742966]
We propose a two-stream model for developmental stage classification.
Unlike previous methods, our two-stream model accepts both temporal and image information.
We demonstrate our algorithm on two time-lapse embryo video datasets.
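A minimal two-stream sketch, assuming a PyTorch setup, of the general idea of combining an image stream with a temporal stream before classification; the layers, feature sizes, and stage count are placeholders, and the linear-chain CRF on top is omitted:

```python
import torch
import torch.nn as nn

class TwoStreamStageClassifier(nn.Module):
    """Illustrative fusion of per-frame image features and temporal features."""
    def __init__(self, num_stages: int = 6):
        super().__init__()
        self.image_stream = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 64), nn.ReLU())
        self.temporal_stream = nn.Sequential(nn.Linear(16, 64), nn.ReLU())  # e.g. motion features
        self.classifier = nn.Linear(128, num_stages)

    def forward(self, frame, temporal_feats):
        fused = torch.cat([self.image_stream(frame), self.temporal_stream(temporal_feats)], dim=1)
        return self.classifier(fused)  # per-frame stage logits (CRF smoothing omitted)

logits = TwoStreamStageClassifier()(torch.randn(8, 1, 32, 32), torch.randn(8, 16))
```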
arXiv Detail & Related papers (2021-07-13T19:56:01Z)
- Robust and generalizable embryo selection based on artificial intelligence and time-lapse image sequences [0.0]
We investigate how a deep learning-based embryo selection model using only time-lapse image sequences performs across different patient ages and clinical conditions.
The model was trained and evaluated based on a large dataset from 18 IVF centers consisting of 115,832 embryos.
The fully automated iDAScore v1.0 model was shown to perform at least as well as a state-of-the-art manual embryo selection model.
arXiv Detail & Related papers (2021-03-12T13:36:30Z)
- Human Blastocyst Classification after In Vitro Fertilization Using Deep Learning [0.0]
This study includes a total of 1084 images from 1226 embryos.
The images were labelled according to the Veeck criteria, which grade embryos from 1 to 5 based on blastomere size and the degree of fragmentation.
Our best model from fine-tuning a pre-trained ResNet50 on the dataset results in 91.79% accuracy.
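A minimal sketch of this kind of fine-tuning setup, assuming a recent torchvision release; the batch, learning rate, and the five-way head for the Veeck grades are illustrative choices, not details taken from the paper:

```python
import torch
import torch.nn as nn
from torchvision import models

# Load an ImageNet-pretrained ResNet50 and replace the final layer with a
# five-way head (Veeck grades 1-5).
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 5)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch standing in for embryo images.
images, labels = torch.randn(8, 3, 224, 224), torch.randint(0, 5, (8,))
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```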
arXiv Detail & Related papers (2020-08-28T04:40:55Z)
- Appearance Learning for Image-based Motion Estimation in Tomography [60.980769164955454]
In tomographic imaging, anatomical structures are reconstructed by applying a pseudo-inverse forward model to acquired signals.
Patient motion corrupts the geometry alignment in the reconstruction process resulting in motion artifacts.
We propose an appearance learning approach recognizing the structures of rigid motion independently from the scanned object.
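A minimal sketch of the pseudo-inverse reconstruction step mentioned above, using a toy linear forward model and NumPy's Moore-Penrose pseudo-inverse; it only illustrates the reconstruction formula, not the appearance-learning method itself:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear forward model: acquired signals y = A @ x for a small "object" x.
A = rng.standard_normal((40, 20))   # forward operator (e.g. projection geometry)
x_true = rng.standard_normal(20)    # unknown structure to reconstruct
y = A @ x_true                      # acquired signals (motion would perturb A)

# Reconstruction via the pseudo-inverse: x_hat = A^+ y.
x_hat = np.linalg.pinv(A) @ y

print("Reconstruction error:", np.linalg.norm(x_hat - x_true))
```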
arXiv Detail & Related papers (2020-06-18T09:49:11Z)
- Deep learning mediated single time-point image-based prediction of embryo developmental outcome at the cleavage stage [1.6753684438635652]
Cleavage-stage transfers are beneficial for patients with a poor prognosis and for fertility centers in resource-limited settings.
Time-lapse imaging systems have been proposed as possible solutions, but they are cost-prohibitive and require bulky and expensive hardware.
Here, we report an automated system for classification and selection of human embryos at the cleavage stage using a trained CNN combined with a genetic algorithm.
arXiv Detail & Related papers (2020-05-21T21:21:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.