D'ARTAGNAN: Counterfactual Video Generation
- URL: http://arxiv.org/abs/2206.01651v1
- Date: Fri, 3 Jun 2022 15:53:32 GMT
- Title: D'ARTAGNAN: Counterfactual Video Generation
- Authors: Hadrien Reynaud, Athanasios Vlontzos, Mischa Dombrowski, Ciarán Lee,
Arian Beqiri, Paul Leeson, Bernhard Kainz
- Abstract summary: Causally-enabled machine learning frameworks could help clinicians to identify the best course of treatments by answering counterfactual questions.
We combine deep neural networks, twin causal networks and generative adversarial methods for the first time to build D'ARTAGNAN.
We generate new ultrasound videos, retaining the video style and anatomy of the original patient, with variations of the Ejection Fraction conditioned on a given input.
- Score: 3.4079278794252232
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Causally-enabled machine learning frameworks could help clinicians to
identify the best course of treatments by answering counterfactual questions.
We explore this path for the case of echocardiograms by looking into the
variation of the Left Ventricle Ejection Fraction, the most essential clinical
metric gained from these examinations. We combine deep neural networks, twin
causal networks and generative adversarial methods for the first time to build
D'ARTAGNAN (Deep ARtificial Twin-Architecture GeNerAtive Networks), a novel
causal generative model. We demonstrate the soundness of our approach on a
synthetic dataset before applying it to cardiac ultrasound videos by answering
the question: "What would this echocardiogram look like if the patient had a
different ejection fraction?". To do so, we generate new ultrasound videos,
retaining the video style and anatomy of the original patient, with variations
of the Ejection Fraction conditioned on a given input. We achieve an SSIM score
of 0.79 and an R2 score of 0.51 on the counterfactual videos. Code and models
are available at https://github.com/HReynaud/dartagnan.
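The counterfactual videos above are scored with SSIM (perceptual similarity) and R2 (agreement between the requested and obtained ejection fractions). As a minimal sketch, the R2 metric is the standard coefficient of determination; the sample values below are illustrative, not taken from the paper:

```python
# Coefficient of determination: R2 = 1 - SS_res / SS_tot.
# Pure-Python sketch; the EF values are hypothetical examples.

def r2_score(y_true, y_pred):
    """R2 between ground-truth and predicted values."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

true_ef = [55.0, 60.0, 35.0, 70.0]  # hypothetical target EFs (%)
pred_ef = [53.0, 62.0, 38.0, 68.0]  # hypothetical EFs measured on generated videos
print(r2_score(true_ef, pred_ef))
```

An R2 of 1.0 would mean the generated videos express exactly the requested ejection fractions; the paper's 0.51 indicates partial but meaningful control.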
Related papers
- Intraoperative Registration by Cross-Modal Inverse Neural Rendering [61.687068931599846]
We present a novel approach for 3D/2D intraoperative registration during neurosurgery via cross-modal inverse neural rendering.
Our approach separates implicit neural representation into two components, handling anatomical structure preoperatively and appearance intraoperatively.
We tested our method on retrospective patients' data from clinical cases, showing that our method outperforms state-of-the-art while meeting current clinical standards for registration.
arXiv Detail & Related papers (2024-09-18T13:40:59Z) - Automatic Cardiac Pathology Recognition in Echocardiography Images Using Higher Order Dynamic Mode Decomposition and a Vision Transformer for Small Datasets [2.0286377328378737]
Heart disease is the leading cause of death worldwide. According to the WHO, nearly 18 million people die each year from cardiovascular diseases.
In this work, an automatic cardiac pathology recognition system based on a novel deep learning framework is proposed.
arXiv Detail & Related papers (2024-04-30T14:16:45Z) - CathFlow: Self-Supervised Segmentation of Catheters in Interventional Ultrasound Using Optical Flow and Transformers [66.15847237150909]
We introduce a self-supervised deep learning architecture to segment catheters in longitudinal ultrasound images.
The network architecture builds upon AiAReSeg, a segmentation transformer built with the Attention in Attention mechanism.
We validated our model on a test dataset consisting of unseen synthetic data and images collected from silicone aorta phantoms.
arXiv Detail & Related papers (2024-03-21T15:13:36Z) - SimLVSeg: Simplifying Left Ventricular Segmentation in 2D+Time Echocardiograms with Self- and Weakly-Supervised Learning [0.8672882547905405]
We develop SimLVSeg, a video-based network for consistent left ventricular (LV) segmentation from sparsely annotated echocardiogram videos.
SimLVSeg consists of self-supervised pre-training with temporal masking, followed by weakly supervised learning tailored for LV segmentation from sparse annotations.
We demonstrate how SimLVSeg outperforms the state-of-the-art solutions by achieving a 93.32% dice score on the largest 2D+time echocardiography dataset.
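The Dice score reported for SimLVSeg measures overlap between predicted and ground-truth left-ventricle masks. A minimal sketch of the metric on flattened binary masks (the masks here are tiny illustrative examples, not echocardiography data):

```python
# Dice coefficient: 2|A ∩ B| / (|A| + |B|) over binary masks.
# Pure-Python sketch of the segmentation metric, not the SimLVSeg model.

def dice(pred, target):
    """Dice overlap between two flat binary masks of equal length."""
    inter = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    return 2.0 * inter / total if total else 1.0

pred   = [1, 1, 0, 0, 1]  # hypothetical predicted mask
target = [1, 0, 0, 1, 1]  # hypothetical ground-truth mask
print(dice(pred, target))
```

A score of 93.32% therefore means the predicted LV masks almost entirely coincide with the expert annotations.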
arXiv Detail & Related papers (2023-09-30T18:13:41Z) - Hierarchical Vision Transformers for Cardiac Ejection Fraction Estimation [0.0]
We propose a deep learning approach, based on hierarchical vision Transformers, to estimate the ejection fraction from echocardiogram videos.
The proposed method can estimate the ejection fraction without requiring left ventricle segmentation first, making it more efficient than other methods.
arXiv Detail & Related papers (2023-03-31T23:42:17Z) - Feature-Conditioned Cascaded Video Diffusion Models for Precise Echocardiogram Synthesis [5.102090025931326]
We extend elucidated diffusion models for video modelling to generate plausible video sequences from single images.
Our image-to-sequence approach achieves an R2 score of 93%, 38 points higher than recently proposed sequence-to-sequence generation methods.
arXiv Detail & Related papers (2023-03-22T15:26:22Z) - Preservation of High Frequency Content for Deep Learning-Based Medical Image Classification [74.84221280249876]
An efficient analysis of large amounts of chest radiographs can aid physicians and radiologists.
We propose a novel Discrete Wavelet Transform (DWT)-based method for the efficient identification and encoding of visual information.
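The DWT mentioned above decomposes an image or signal into coarse approximation and fine detail coefficients. As a minimal sketch of the idea, one level of the 1-D Haar transform (the simplest DWT; the input values are illustrative and this is not the paper's encoding pipeline):

```python
# One level of the 1-D Haar discrete wavelet transform.
# Orthonormal variant: scaled pairwise sums (approximation, low frequency)
# and pairwise differences (detail, high frequency). Assumes even length.

def haar_dwt_1d(signal):
    """Return (approximation, detail) coefficients for one Haar DWT level."""
    s = 2 ** -0.5
    approx = [s * (signal[i] + signal[i + 1]) for i in range(0, len(signal), 2)]
    detail = [s * (signal[i] - signal[i + 1]) for i in range(0, len(signal), 2)]
    return approx, detail

approx, detail = haar_dwt_1d([4.0, 2.0, 6.0, 6.0])  # illustrative samples
print(approx, detail)
```

The detail coefficients carry exactly the high-frequency content that the paper argues should be preserved for classification.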
arXiv Detail & Related papers (2022-05-08T15:29:54Z) - An Algorithm for the Labeling and Interactive Visualization of the
Cerebrovascular System of Ischemic Strokes [59.116811751334225]
VirtualDSA++ is an algorithm designed to segment and label the cerebrovascular tree on CTA scans.
We extend the labeling mechanism for the cerebral arteries to identify occluded vessels.
We present the generic concept of iterative systematic search for pathways on all nodes of said model, which enables new interactive features.
arXiv Detail & Related papers (2022-04-26T14:20:26Z) - A Deep Learning Approach to Predicting Collateral Flow in Stroke
Patients Using Radiomic Features from Perfusion Images [58.17507437526425]
Collateral circulation results from specialized anastomotic channels which provide oxygenated blood to regions with compromised blood flow.
The actual grading is mostly done through manual inspection of the acquired images.
We present a deep learning approach to predicting collateral flow grading in stroke patients based on radiomic features extracted from MR perfusion data.
arXiv Detail & Related papers (2021-10-24T18:58:40Z) - Ultrasound Video Transformers for Cardiac Ejection Fraction Estimation [3.188100483042461]
We propose a novel approach to ultrasound video analysis using a Residual Auto-Encoder Network and a BERT model adapted for token classification.
We apply our model to the task of End-Systolic (ES) and End-Diastolic (ED) frame detection and the automated computation of the left ventricular ejection fraction.
Our end-to-end learnable approach can estimate the ejection fraction with an MAE of 5.95 and an R2 of 0.52 in 0.15 s per video, showing that segmentation is not the only way to predict the ejection fraction.
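Once the ES and ED frames are detected, the ejection fraction follows from the corresponding ventricular volumes by the standard clinical formula. A minimal sketch, with hypothetical volumes (the detection networks themselves are not reproduced here):

```python
# LVEF (%) = (EDV - ESV) / EDV * 100, from end-diastolic and
# end-systolic left-ventricular volumes. The volumes are illustrative.

def ejection_fraction(edv, esv):
    """Left ventricular ejection fraction in percent."""
    if edv <= 0:
        raise ValueError("end-diastolic volume must be positive")
    return (edv - esv) / edv * 100.0

print(ejection_fraction(120.0, 50.0))  # hypothetical EDV/ESV in mL
```

An MAE of 5.95 on this quantity is thus an average error of about six percentage points of EF.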
arXiv Detail & Related papers (2021-07-02T11:23:09Z) - Neural collaborative filtering for unsupervised mitral valve segmentation in echocardiography [60.08918310097638]
We propose an automated and unsupervised method for the mitral valve segmentation based on a low dimensional embedding of the echocardiography videos.
The method is evaluated in a collection of echocardiography videos of patients with a variety of mitral valve diseases and on an independent test cohort.
It outperforms state-of-the-art unsupervised and supervised methods on low-quality videos or in the case of sparse annotation.
arXiv Detail & Related papers (2020-08-13T12:53:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.