Distributing Deep Learning Hyperparameter Tuning for 3D Medical Image Segmentation
- URL: http://arxiv.org/abs/2110.15884v1
- Date: Fri, 29 Oct 2021 16:11:25 GMT
- Title: Distributing Deep Learning Hyperparameter Tuning for 3D Medical Image Segmentation
- Authors: Josep Lluis Berral, Oriol Aranda, Juan Luis Dominguez, Jordi Torres
- Abstract summary: Most research on novel techniques for 3D Medical Image Segmentation (MIS) is currently done using Deep Learning with GPU accelerators.
The principal challenge of such techniques is that a single input can easily saturate the computing resources and require prohibitive amounts of time to be processed.
We present a design for distributed deep learning training pipelines, focusing on multi-node and multi-GPU environments.
- Score: 5.652813393326783
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Most research on novel techniques for 3D Medical Image Segmentation (MIS) is
currently done using Deep Learning with GPU accelerators. The principal
challenge of such techniques is that a single input can easily saturate the
computing resources and require prohibitive amounts of time to be processed.
Distributing deep learning and scaling it over computing devices is a real
need for making progress in this research field. The conventional way to
distribute neural networks is data parallelism, where data is scattered over
resources (e.g., GPUs) to parallelize the training of the model. However,
experiment parallelism is also an option, where different training processes
are parallelized across resources. While the first option is much more common
in 3D image segmentation, the second provides a pipeline design with less
dependence among the parallelized processes, allowing overhead reduction and
more potential scalability. In this work we present a design for distributed
deep learning training pipelines, focusing on multi-node and multi-GPU
environments, where the two distribution approaches are deployed and
benchmarked. As a proof of concept we take the 3D U-Net architecture with the
MSD Brain Tumor Segmentation dataset, a state-of-the-art problem in medical
image segmentation with high computing and space requirements. Using the BSC
MareNostrum supercomputer as the benchmarking environment, we use TensorFlow
and Ray as the neural network training and experiment distribution platforms.
We evaluate the experiment speed-up, showing the potential for scaling out on
GPUs and nodes, and we compare the different parallelism techniques, showing
how experiment distribution makes better use of such resources through
scaling. Finally, we release the implementation of the design to the
community, along with the non-trivial steps and methodology for adapting and
deploying a MIS case such as the one presented here.
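
To make the two approaches concrete, below is a minimal sketch of how experiment parallelism (one training process per trial, coordinated by Ray Tune) can be combined with data parallelism inside a trial (TensorFlow's MirroredStrategy). This is not the paper's released implementation: the tiny model, the synthetic volumes, and names such as build_tiny_3d_unet and synthetic_dataset are illustrative placeholders standing in for the real 3D U-Net and the MSD Brain Tumor data, and the Ray API shown (tune.run, tune.report) follows versions from around the paper's time.

```python
import numpy as np
import ray
import tensorflow as tf
from ray import tune


def build_tiny_3d_unet(filters: int) -> tf.keras.Model:
    # Simplified stand-in for a 3D U-Net: a shallow 3D conv encoder/decoder,
    # kept small so the sketch fits on a single GPU.
    inputs = tf.keras.Input(shape=(32, 32, 32, 4))
    x = tf.keras.layers.Conv3D(filters, 3, padding="same", activation="relu")(inputs)
    x = tf.keras.layers.MaxPool3D(2)(x)
    x = tf.keras.layers.Conv3D(filters * 2, 3, padding="same", activation="relu")(x)
    x = tf.keras.layers.UpSampling3D(2)(x)
    outputs = tf.keras.layers.Conv3D(4, 1, padding="same", activation="softmax")(x)
    return tf.keras.Model(inputs, outputs)


def synthetic_dataset(batch_size: int) -> tf.data.Dataset:
    # Random volumes standing in for the MSD Brain Tumor Segmentation data.
    x = np.random.rand(8, 32, 32, 32, 4).astype("float32")
    y = np.random.randint(0, 4, size=(8, 32, 32, 32))
    y = tf.keras.utils.to_categorical(y, num_classes=4)
    return tf.data.Dataset.from_tensor_slices((x, y)).batch(batch_size)


def train_trial(config):
    # Data parallelism inside a trial: replicate the model over whatever GPUs
    # Ray assigned to this trial via a TensorFlow distribution strategy.
    strategy = tf.distribute.MirroredStrategy()
    with strategy.scope():
        model = build_tiny_3d_unet(config["filters"])
        model.compile(
            optimizer=tf.keras.optimizers.Adam(config["lr"]),
            loss="categorical_crossentropy",
        )
    history = model.fit(synthetic_dataset(config["batch_size"]), epochs=2, verbose=0)
    tune.report(train_loss=history.history["loss"][-1])


if __name__ == "__main__":
    ray.init()  # on a multi-node cluster: ray.init(address="auto")
    # Experiment parallelism: each hyperparameter combination becomes an
    # independent trial, scheduled onto its own GPU.
    tune.run(
        train_trial,
        config={
            "lr": tune.grid_search([1e-3, 1e-4]),
            "filters": tune.grid_search([8, 16]),
            "batch_size": 2,
        },
        resources_per_trial={"cpu": 4, "gpu": 1},
    )
```

Each Tune trial is an independent process that claims its own resources, which is what gives experiment parallelism its low inter-process dependence; the MirroredStrategy inside a trial only matters when a trial is granted more than one GPU.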
Related papers
- FusionLLM: A Decentralized LLM Training System on Geo-distributed GPUs with Adaptive Compression [55.992528247880685]
Decentralized training faces significant challenges regarding system design and efficiency.
We present FusionLLM, a decentralized training system designed and implemented for training large deep neural networks (DNNs).
We show that our system and method can achieve 1.45-9.39x speedup compared to baseline methods while ensuring convergence.
arXiv Detail & Related papers (2024-10-16T16:13:19Z)
- N-BVH: Neural ray queries with bounding volume hierarchies [51.430495562430565]
In 3D computer graphics, the bulk of a scene's memory usage is due to polygons and textures.
We devise N-BVH, a neural compression architecture designed to answer arbitrary ray queries in 3D.
Our method provides faithful approximations of visibility, depth, and appearance attributes.
arXiv Detail & Related papers (2024-05-25T13:54:34Z)
- Partitioned Neural Network Training via Synthetic Intermediate Labels [0.0]
GPU memory constraints have become a notable bottleneck in training such sizable models.
This study advocates partitioning the model across GPUs and generating synthetic intermediate labels to train individual segments.
This approach results in a more efficient training process that minimizes data communication while maintaining model accuracy.
arXiv Detail & Related papers (2024-03-17T13:06:29Z)
- Partitioning Distributed Compute Jobs with Reinforcement Learning and Graph Neural Networks [58.720142291102135]
Large-scale machine learning models are bringing advances to a broad range of fields.
Many of these models are too large to be trained on a single machine, and must be distributed across multiple devices.
We show that maximum parallelisation is sub-optimal in relation to user-critical metrics such as throughput and blocking rate.
arXiv Detail & Related papers (2023-01-31T17:41:07Z)
- Scalable Graph Convolutional Network Training on Distributed-Memory Systems [5.169989177779801]
Graph Convolutional Networks (GCNs) are extensively utilized for deep learning on graphs.
Since the convolution operation on graphs induces irregular memory access patterns, designing a memory- and communication-efficient parallel algorithm for GCN training poses unique challenges.
We propose a highly parallel training algorithm that scales to large processor counts.
arXiv Detail & Related papers (2022-12-09T17:51:13Z)
- GLEAM: Greedy Learning for Large-Scale Accelerated MRI Reconstruction [50.248694764703714]
Unrolled neural networks have recently achieved state-of-the-art accelerated MRI reconstruction.
These networks unroll iterative optimization algorithms by alternating between physics-based consistency and neural-network based regularization.
We propose Greedy LEarning for Accelerated MRI reconstruction, an efficient training strategy for high-dimensional imaging settings.
arXiv Detail & Related papers (2022-07-18T06:01:29Z)
- Memory-efficient Segmentation of High-resolution Volumetric MicroCT Images [11.723370840090453]
We propose a memory-efficient network architecture for 3D high-resolution image segmentation.
The network incorporates both global and local features via a two-stage U-net-based cascaded framework.
Experiments show that it outperforms state-of-the-art 3D segmentation methods in terms of both segmentation accuracy and memory efficiency.
arXiv Detail & Related papers (2022-05-31T16:42:48Z)
- Accelerating Training and Inference of Graph Neural Networks with Fast Sampling and Pipelining [58.10436813430554]
Mini-batch training of graph neural networks (GNNs) requires a lot of computation and data movement.
We argue in favor of performing mini-batch training with neighborhood sampling in a distributed multi-GPU environment.
We present a sequence of improvements to mitigate these bottlenecks, including a performance-engineered neighborhood sampler.
We also conduct an empirical analysis that supports the use of sampling for inference, showing that test accuracies are not materially compromised.
arXiv Detail & Related papers (2021-10-16T02:41:35Z)
- Benchmarking network fabrics for data distributed training of deep neural networks [10.067102343753643]
Large computational requirements for training deep models have necessitated the development of new methods for faster training.
One such approach is the data parallel approach, where the training data is distributed across multiple compute nodes.
In this paper, we examine the effects of using different physical hardware interconnects and network-related software primitives for enabling data distributed deep learning.
arXiv Detail & Related papers (2020-08-18T17:38:30Z)
- The Case for Strong Scaling in Deep Learning: Training Large 3D CNNs with Hybrid Parallelism [3.4377970608678314]
We present scalable hybrid-parallel algorithms for training large-scale 3D convolutional neural networks.
We evaluate our proposed training algorithms with two challenging 3D CNNs, CosmoFlow and 3D U-Net.
arXiv Detail & Related papers (2020-07-25T05:06:06Z)
- Unpaired Multi-modal Segmentation via Knowledge Distillation [77.39798870702174]
We propose a novel learning scheme for unpaired cross-modality image segmentation.
In our method, we heavily reuse network parameters by sharing all convolutional kernels across CT and MRI.
We have extensively validated our approach on two multi-class segmentation problems.
arXiv Detail & Related papers (2020-01-06T20:03:17Z)