ivadomed: A Medical Imaging Deep Learning Toolbox
- URL: http://arxiv.org/abs/2010.09984v1
- Date: Tue, 20 Oct 2020 03:08:53 GMT
- Title: ivadomed: A Medical Imaging Deep Learning Toolbox
- Authors: Charley Gros, Andreanne Lemay, Olivier Vincent, Lucas Rouhier, Anthime
Bucquet, Joseph Paul Cohen, Julien Cohen-Adad
- Score: 3.6064670806006647
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: ivadomed is an open-source Python package for designing, end-to-end training,
and evaluating deep learning models applied to medical imaging data. The
package includes APIs, command-line tools, documentation, and tutorials.
ivadomed also includes pre-trained models such as spinal tumor segmentation and
vertebral labeling. Original features of ivadomed include a data loader that
can parse image metadata (e.g., acquisition parameters, image contrast,
resolution) and subject metadata (e.g., pathology, age, sex) for custom data
splitting or extra information during training and evaluation. Any dataset
following the Brain Imaging Data Structure (BIDS) convention will be compatible
with ivadomed without the need to manually organize the data, which is
typically a tedious task. Beyond the traditional deep learning methods,
ivadomed features cutting-edge architectures, such as FiLM and HeMIS, as well
as various uncertainty estimation methods (aleatoric and epistemic), and losses
adapted to imbalanced classes and non-binary predictions. Each step is
conveniently configurable via a single file. At the same time, the code is
highly modular to allow addition/modification of an architecture or
pre/post-processing steps. Example applications of ivadomed include MRI object
detection, segmentation, and labeling of anatomical and pathological
structures. Overall, ivadomed enables easy and quick exploration of the latest
advances in deep learning for medical imaging applications. ivadomed's main
project page is available at https://ivadomed.org.
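The single configuration file mentioned in the abstract can be illustrated with a minimal sketch. The keys below (e.g. `command`, `loader_parameters`, `default_model`) follow the general shape of ivadomed's JSON configuration, but the exact key names and values here are assumptions for illustration; the ivadomed documentation is the authoritative schema.

```json
{
  "command": "train",
  "path_output": "results/spine_seg",
  "loader_parameters": {
    "path_data": ["data/my_bids_dataset"],
    "target_suffix": ["_seg-manual"],
    "contrast_params": {
      "training_validation": ["T2w"],
      "testing": ["T2w"]
    }
  },
  "default_model": {
    "name": "Unet",
    "depth": 3
  },
  "training_parameters": {
    "batch_size": 8
  }
}
```

Because the data loader parses BIDS metadata directly, pointing `path_data` at a BIDS-compliant dataset is, per the abstract, sufficient; no manual reorganization is needed. The configuration would then be passed to the command-line tool (e.g. a command along the lines of `ivadomed -c config.json`; exact flags may vary by version).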
Related papers
- ContextMRI: Enhancing Compressed Sensing MRI through Metadata Conditioning [51.26601171361753]
We propose ContextMRI, a text-conditioned diffusion model for MRI that integrates granular metadata into the reconstruction process.
We show that increasing the fidelity of metadata, ranging from slice location and contrast to patient age, sex, and pathology, systematically boosts reconstruction performance.
arXiv Detail & Related papers (2025-01-08T05:15:43Z)
- MIST: A Simple and Scalable End-To-End 3D Medical Imaging Segmentation Framework [1.4043931310479378]
The Medical Imaging Toolkit (MIST) is designed to facilitate consistent training, testing, and evaluation of deep learning-based medical imaging segmentation methods.
MIST standardizes data analysis, preprocessing, and evaluation pipelines, accommodating multiple architectures and loss functions.
arXiv Detail & Related papers (2024-07-31T05:17:31Z)
- Medical Vision-Language Pre-Training for Brain Abnormalities [96.1408455065347]
We show how to automatically collect medical image-text aligned data for pretraining from public resources such as PubMed.
In particular, we present a pipeline that streamlines the pre-training process by initially collecting a large brain image-text dataset.
We also investigate the unique challenge of mapping subfigures to subcaptions in the medical domain.
arXiv Detail & Related papers (2024-04-27T05:03:42Z)
- LiteNeXt: A Novel Lightweight ConvMixer-based Model with Self-embedding Representation Parallel for Medical Image Segmentation [2.0901574458380403]
We propose a new lightweight but efficient model, namely LiteNeXt, for medical image segmentation.
LiteNeXt is trained from scratch with a small number of parameters (0.71M) and a low computational cost (0.42 GFLOPs).
arXiv Detail & Related papers (2024-04-04T01:59:19Z)
- Modular Deep Active Learning Framework for Image Annotation: A Technical Report for the Ophthalmo-AI Project [1.7325492987380366]
We introduce MedDeepCyleAL, an end-to-end framework implementing the complete Active Learning cycle.
It provides researchers with the flexibility to choose the type of deep learning model they wish to employ.
While MedDeepCyleAL can be applied to any kind of image data, we have specifically applied it to ophthalmology data in this project.
arXiv Detail & Related papers (2024-03-22T11:53:03Z)
- Understanding the Tricks of Deep Learning in Medical Image Segmentation: Challenges and Future Directions [66.40971096248946]
In this paper, we collect a series of MedISeg tricks for different model implementation phases.
We experimentally explore the effectiveness of these tricks on consistent baselines.
We also open-sourced a strong MedISeg repository, where each component has the advantage of plug-and-play.
arXiv Detail & Related papers (2022-09-21T12:30:05Z)
- ClamNet: Using contrastive learning with variable depth Unets for medical image segmentation [0.0]
Unets have become the standard method for semantic segmentation of medical images, along with fully convolutional networks (FCNs).
Unet++ was introduced as a variant of Unet, in order to solve some of the problems facing Unet and FCNs.
We use contrastive learning to train Unet++ for semantic segmentation of medical images using medical images from various sources.
arXiv Detail & Related papers (2022-06-10T16:55:45Z)
- MetaMedSeg: Volumetric Meta-learning for Few-Shot Organ Segmentation [47.428577772279176]
We present MetaMedSeg, a gradient-based meta-learning algorithm that redefines the meta-learning task for the volumetric medical data.
In the experiments, we evaluate on the Medical Decathlon dataset by extracting 2D slices from CT and MRI volumes of different organs.
Our proposed volumetric task definition leads to up to 30% improvement in terms of IoU compared to related baselines.
arXiv Detail & Related papers (2021-09-18T11:13:45Z)
- Fed-Sim: Federated Simulation for Medical Imaging [131.56325440976207]
We introduce a physics-driven generative approach that consists of two learnable neural modules.
We show that our data synthesis framework improves the downstream segmentation performance on several datasets.
arXiv Detail & Related papers (2020-09-01T19:17:46Z)
- Towards Unsupervised Learning for Instrument Segmentation in Robotic Surgery with Cycle-Consistent Adversarial Networks [54.00217496410142]
We propose an unpaired image-to-image translation where the goal is to learn the mapping between an input endoscopic image and a corresponding annotation.
Our approach allows training image segmentation models without the need to acquire expensive annotations.
We test our proposed method on the EndoVis 2017 challenge dataset and show that it is competitive with supervised segmentation methods.
arXiv Detail & Related papers (2020-07-09T01:39:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the accuracy of this information and is not responsible for any consequences of its use.