Fed-Sim: Federated Simulation for Medical Imaging
- URL: http://arxiv.org/abs/2009.00668v1
- Date: Tue, 1 Sep 2020 19:17:46 GMT
- Title: Fed-Sim: Federated Simulation for Medical Imaging
- Authors: Daiqing Li, Amlan Kar, Nishant Ravikumar, Alejandro F Frangi, Sanja
Fidler
- Abstract summary: We introduce a physics-driven generative approach that consists of two learnable neural modules.
We show that our data synthesis framework improves the downstream segmentation performance on several datasets.
- Score: 131.56325440976207
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Labelling data is expensive and time-consuming, especially for domains
such as medical imaging that contain volumetric imaging data and require expert
knowledge. Exploiting a larger pool of labeled data available across multiple
centers, such as in federated learning, has also seen limited success since
current deep learning approaches do not generalize well to images acquired with
scanners from different manufacturers. We aim to address these problems in a
common, learning-based image simulation framework which we refer to as
Federated Simulation. We introduce a physics-driven generative approach that
consists of two learnable neural modules: 1) a module that synthesizes 3D
cardiac shapes along with their materials, and 2) a CT simulator that renders
these into realistic 3D CT Volumes, with annotations. Since the model of
geometry and material is disentangled from the imaging sensor, it can
effectively be trained across multiple medical centers. We show that our data
synthesis framework improves the downstream segmentation performance on several
datasets. Project Page: https://nv-tlabs.github.io/fed-sim/ .
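The key design point in the abstract is that the shape/material model is disentangled from the imaging sensor, so only the shared geometry module needs to be aggregated across centers while each site's CT simulator stays local. A minimal toy sketch of that federated split (hypothetical stand-in modules, not the paper's actual networks):

```python
import numpy as np

rng = np.random.default_rng(0)

def make_site():
    # Toy stand-in for the two learnable modules:
    # 'geometry' models the shared 3D shape/material distribution,
    # 'simulator' is the site-specific CT renderer tied to the local scanner.
    return {
        "geometry": rng.normal(size=4),   # shared across centers
        "simulator": rng.normal(size=4),  # never leaves the site
    }

def federated_average(sites):
    """Average only the disentangled geometry/material weights;
    the per-site CT-simulator weights are never aggregated."""
    shared = np.mean([s["geometry"] for s in sites], axis=0)
    for s in sites:
        s["geometry"] = shared.copy()
    return shared

sites = [make_site() for _ in range(3)]
local_sims = [s["simulator"].copy() for s in sites]

shared = federated_average(sites)

# All centers now hold identical geometry weights...
assert all(np.allclose(s["geometry"], shared) for s in sites)
# ...while each sensor-specific simulator is untouched.
assert all(np.allclose(s["simulator"], sim)
           for s, sim in zip(sites, local_sims))
```

The aggregation step here is plain weight averaging for illustration; the point is only which parameters cross center boundaries and which do not.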
Related papers
- fMRI-3D: A Comprehensive Dataset for Enhancing fMRI-based 3D Reconstruction [50.534007259536715]
We present the fMRI-3D dataset, which includes data from 15 participants and showcases a total of 4768 3D objects.
We propose MinD-3D, a novel framework designed to decode 3D visual information from fMRI signals.
arXiv Detail & Related papers (2024-09-17T16:13:59Z)
- μ-Net: A Deep Learning-Based Architecture for μ-CT Segmentation [2.012378666405002]
X-ray computed microtomography (μ-CT) is a non-destructive technique that can generate high-resolution 3D images of the internal anatomy of medical and biological samples.
Extracting relevant information from 3D images requires semantic segmentation of the regions of interest.
We propose a novel framework that uses a convolutional neural network (CNN) to automatically segment the full morphology of the heart of Carassius auratus.
arXiv Detail & Related papers (2024-06-24T15:29:08Z)
- Learning Multimodal Volumetric Features for Large-Scale Neuron Tracing [72.45257414889478]
We aim to reduce human workload by predicting connectivity between over-segmented neuron pieces.
We first construct a dataset, named FlyTracing, that contains millions of pairwise connections of segments spanning the whole fly brain.
We propose a novel connectivity-aware contrastive learning method to generate dense volumetric EM image embedding.
arXiv Detail & Related papers (2024-01-05T19:45:12Z) - Promise:Prompt-driven 3D Medical Image Segmentation Using Pretrained
Image Foundation Models [13.08275555017179]
We propose ProMISe, a prompt-driven 3D medical image segmentation model using only a single point prompt.
We evaluate our model on two public datasets for colon and pancreas tumor segmentations.
arXiv Detail & Related papers (2023-10-30T16:49:03Z) - CMRxRecon: An open cardiac MRI dataset for the competition of
accelerated image reconstruction [62.61209705638161]
There has been growing interest in deep learning-based CMR imaging algorithms.
Deep learning methods require large training datasets.
This dataset includes multi-contrast, multi-view, multi-slice and multi-coil CMR imaging data from 300 subjects.
arXiv Detail & Related papers (2023-09-19T15:14:42Z) - Disruptive Autoencoders: Leveraging Low-level features for 3D Medical
Image Pre-training [51.16994853817024]
This work focuses on designing an effective pre-training framework for 3D radiology images.
We introduce Disruptive Autoencoders, a pre-training framework that attempts to reconstruct the original image from disruptions created by a combination of local masking and low-level perturbations.
The proposed pre-training framework is tested across multiple downstream tasks and achieves state-of-the-art performance.
arXiv Detail & Related papers (2023-07-31T17:59:42Z) - Attentive Symmetric Autoencoder for Brain MRI Segmentation [56.02577247523737]
We propose a novel Attentive Symmetric Auto-encoder based on Vision Transformer (ViT) for 3D brain MRI segmentation tasks.
In the pre-training stage, the proposed auto-encoder pays more attention to reconstructing the informative patches according to the gradient metrics.
Experimental results show that our proposed attentive symmetric auto-encoder outperforms the state-of-the-art self-supervised learning methods and medical image segmentation models.
arXiv Detail & Related papers (2022-09-19T09:43:19Z) - Compound Figure Separation of Biomedical Images: Mining Large Datasets
for Self-supervised Learning [12.445324044675116]
We introduce a simulation-based training framework that minimizes the need for resource-intensive bounding box annotations.
We also propose a new side loss that is optimized for compound figure separation.
This is the first study that evaluates the efficacy of leveraging self-supervised learning with compound image separation.
arXiv Detail & Related papers (2022-08-30T16:02:34Z) - Super Images -- A New 2D Perspective on 3D Medical Imaging Analysis [0.0]
We present a simple yet effective 2D method to handle 3D data while efficiently embedding the 3D knowledge during training.
Our method generates a super-resolution image by stitching slices side by side in the 3D image.
While attaining results equal, if not superior, to 3D networks using only 2D counterparts, our method reduces model complexity by around threefold.
arXiv Detail & Related papers (2022-05-05T09:59:03Z) - Ground material classification and for UAV-based photogrammetric 3D data
A 2D-3D Hybrid Approach [1.3359609092684614]
In recent years, photogrammetry has been widely used in many areas to create 3D virtual data representing the physical environment.
These cutting-edge technologies have caught the US Army and Navy's attention for the purpose of rapid 3D battlefield reconstruction, virtual training, and simulations.
arXiv Detail & Related papers (2021-09-24T22:29:26Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.