Open-Source Skull Reconstruction with MONAI
- URL: http://arxiv.org/abs/2211.14051v2
- Date: Thu, 15 Jun 2023 09:37:12 GMT
- Title: Open-Source Skull Reconstruction with MONAI
- Authors: Jianning Li, André Ferreira, Behrus Puladi, Victor Alves, Michael
Kamp, Moon-Sung Kim, Felix Nensa, Jens Kleesiek, Seyed-Ahmad Ahmadi, Jan
Egger
- Abstract summary: We present a deep learning-based approach for skull reconstruction for MONAI, which has been pre-trained on the MUG500+ skull dataset.
The primary goal of this paper lies in the investigation of open-sourcing codes and pre-trained deep learning models under the MONAI framework.
- Score: 3.245541722525715
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a deep learning-based approach for skull reconstruction for MONAI,
which has been pre-trained on the MUG500+ skull dataset. The implementation
follows the MONAI contribution guidelines; hence, it can be easily tried out,
used, and extended by MONAI users. The primary goal of this paper lies in
the investigation of open-sourcing codes and pre-trained deep learning models
under the MONAI framework. Nowadays, open-sourcing software, especially
(pre-trained) deep learning models, has become increasingly important. Over
the years, medical image analysis has undergone a tremendous transformation.
Over a
decade ago, algorithms had to be implemented and optimized with low-level
programming languages, like C or C++, to run in a reasonable time on a desktop
PC, which was not as powerful as today's computers. Nowadays, users have
high-level scripting languages like Python, and frameworks like PyTorch and
TensorFlow, along with a sea of public code repositories at hand. As a result,
implementations that required thousands of lines of C or C++ code in the past
can now be scripted in a few lines and executed in a fraction of the time.
Taking this a step further, the Medical Open Network for Artificial
Intelligence (MONAI) framework makes medical imaging research even more
convenient, which can boost the whole field. The MONAI framework is a freely
available, community-supported, open-source, PyTorch-based framework that also
enables researchers to share their contributions, including pre-trained
models, with others. Code and pre-trained weights for skull
reconstruction are publicly available at:
https://github.com/Project-MONAI/research-contributions/tree/master/SkullRec
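
The abstract's claim that such pipelines now take only a few lines of Python can be illustrated with a minimal, hedged sketch of MONAI-based volumetric inference. The U-Net configuration, checkpoint path, and input file name below are illustrative assumptions, not the exact SkullRec setup; the authoritative network and transform definitions live in the linked repository.

```python
# Minimal sketch of MONAI-based volumetric inference, illustrating the
# "few lines of Python" claim. The UNet configuration, checkpoint path,
# and input file are hypothetical; see the SkullRec repo for the real setup.
import torch
from monai.networks.nets import UNet
from monai.transforms import Compose, LoadImage, EnsureChannelFirst, ScaleIntensity
from monai.inferers import sliding_window_inference

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# A generic 3D U-Net; the actual SkullRec model may differ in architecture.
model = UNet(
    spatial_dims=3,
    in_channels=1,
    out_channels=1,
    channels=(16, 32, 64, 128, 256),
    strides=(2, 2, 2, 2),
).to(device)
model.load_state_dict(torch.load("skullrec_weights.pt", map_location=device))  # hypothetical file
model.eval()

# Load a defective-skull volume (e.g. NIfTI) and normalize intensities.
preprocess = Compose([LoadImage(image_only=True), EnsureChannelFirst(), ScaleIntensity()])
volume = preprocess("defective_skull.nii.gz").unsqueeze(0).to(device)  # hypothetical file

with torch.no_grad():
    # Sliding-window inference keeps GPU memory bounded on large CT volumes.
    completed = sliding_window_inference(volume, roi_size=(96, 96, 96),
                                         sw_batch_size=1, predictor=model)
```

A decade ago, an equivalent pipeline (I/O, preprocessing, network, tiled inference) would have meant substantial custom C or C++ code; here it fits in roughly twenty-five lines.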
Related papers
- MALPOLON: A Framework for Deep Species Distribution Modeling [3.1457219084519004]
MALPOLON aims to facilitate the training and inference of deep species distribution models (deep-SDM).
It is written in Python and built upon the PyTorch library.
The framework is open-sourced on GitHub and PyPi.
arXiv Detail & Related papers (2024-09-26T17:45:10Z)
- Does Your Neural Code Completion Model Use My Code? A Membership Inference Approach [66.51005288743153]
We investigate the legal and ethical issues of current neural code completion models.
We tailor a membership inference approach (termed CodeMI) that was originally crafted for classification tasks.
We evaluate the effectiveness of this adapted approach across a diverse array of neural code completion models.
arXiv Detail & Related papers (2024-04-22T15:54:53Z)
- MONAI: An open-source framework for deep learning in healthcare [24.465436846127762]
This work introduces MONAI, a freely available, community-supported, and consortium-led PyTorch-based framework for deep learning in healthcare.
MONAI follows best practices for software development, providing an easy-to-use, robust, well-documented, and well-tested software framework.
arXiv Detail & Related papers (2022-11-04T18:35:00Z)
- A modular software framework for the design and implementation of ptychography algorithms [55.41644538483948]
We present SciCom, a new ptychography software framework aimed at simulating ptychography datasets and testing state-of-the-art reconstruction algorithms.
Despite its simplicity, the software leverages accelerated processing through the PyTorch interface.
Results are shown on both synthetic and real datasets.
arXiv Detail & Related papers (2022-05-06T16:32:37Z)
- Neko: a Library for Exploring Neuromorphic Learning Rules [0.3499870393443268]
Neko is a modular library for neuromorphic learning algorithms.
It can replicate state-of-the-art algorithms and, in one case, lead to significant outperformance in accuracy and speed.
Neko is an open-source Python library that supports multiple backends, including PyTorch.
arXiv Detail & Related papers (2021-05-01T18:50:32Z)
- COSEA: Convolutional Code Search with Layer-wise Attention [90.35777733464354]
We propose a new deep learning architecture, COSEA, which leverages convolutional neural networks with layer-wise attention to capture the code's intrinsic structural logic.
COSEA can achieve significant improvements over state-of-the-art methods on code search tasks.
arXiv Detail & Related papers (2020-10-19T13:53:38Z)
- Applications of Deep Neural Networks with Keras [0.0]
Deep learning allows a neural network to learn hierarchies of information in a way that mimics the function of the human brain.
This course will introduce the student to classic neural network structures, Convolutional Neural Networks (CNN), Long Short-Term Memory (LSTM), Gated Recurrent Units (GRU), and Generative Adversarial Networks (GAN).
arXiv Detail & Related papers (2020-09-11T22:09:10Z)
- KSM: Fast Multiple Task Adaption via Kernel-wise Soft Mask Learning [49.77278179376902]
Deep Neural Networks (DNN) can forget knowledge about earlier tasks when learning new tasks; this is known as catastrophic forgetting.
Recent continual learning methods are capable of alleviating the catastrophic forgetting problem on toy-sized datasets.
We propose a new training method called Kernel-wise Soft Mask (KSM), which learns a kernel-wise hybrid binary and real-value soft mask for each task.
arXiv Detail & Related papers (2020-09-11T21:48:39Z)
- DeepSumm -- Deep Code Summaries using Neural Transformer Architecture [8.566457170664927]
We employ neural techniques to solve the task of source code summarization.
With more than 2.1M supervised samples of comments and code, we reduce the training time by more than 50%.
arXiv Detail & Related papers (2020-03-31T22:43:29Z)
- Rethinking Few-Shot Image Classification: a Good Embedding Is All You Need? [72.00712736992618]
We show that a simple baseline: learning a supervised or self-supervised representation on the meta-training set, outperforms state-of-the-art few-shot learning methods.
An additional boost can be achieved through the use of self-distillation.
We believe that our findings motivate a rethinking of few-shot image classification benchmarks and the associated role of meta-learning algorithms.
arXiv Detail & Related papers (2020-03-25T17:58:42Z)
- AutoML-Zero: Evolving Machine Learning Algorithms From Scratch [76.83052807776276]
We show that it is possible to automatically discover complete machine learning algorithms just using basic mathematical operations as building blocks.
We demonstrate this by introducing a novel framework that significantly reduces human bias through a generic search space.
We believe these preliminary successes in discovering machine learning algorithms from scratch indicate a promising new direction in the field.
arXiv Detail & Related papers (2020-03-06T19:00:04Z)