An End-to-End learnable Flow Regularized Model for Brain Tumor
Segmentation
- URL: http://arxiv.org/abs/2109.00622v1
- Date: Wed, 1 Sep 2021 21:34:30 GMT
- Title: An End-to-End learnable Flow Regularized Model for Brain Tumor
Segmentation
- Authors: Yan Shen, Zhanghexuan Ji, Mingchen Gao
- Abstract summary: We propose to incorporate end-to-end trainable neural network features into the energy functions.
Our deep neural network features are extracted from the down-sampling and up-sampling layers with skip-connections of a U-net.
The segmentations are then solved in primal-dual form by an ADMM solver.
- Score: 1.253312107729806
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many segmentation tasks for biomedical images can be modeled as the minimization of an energy function and solved by a class of max-flow and min-cut optimization algorithms. However, segmentation accuracy is sensitive to the contrast between the semantic features of the different objects being segmented, as traditional energy functions usually rely on hand-crafted features. To address this limitation, we propose to incorporate end-to-end trainable neural network features into the energy functions. Our deep neural network features are extracted from the down-sampling and up-sampling layers, with skip connections, of a U-net. In the inference stage, the learned features are fed into the energy function, and the segmentation is solved in primal-dual form by an ADMM solver. In the training stage, we train our neural networks by optimizing the energy function in its primal form, with regularizations on the min-cut and flow-conservation functions that are derived from the optimality conditions of the dual form. We evaluate our method, both qualitatively and quantitatively, on a brain tumor segmentation task. As the energy minimization model balances sensitivity against boundary smoothness, we show how our segmentation contours evolve through the iterations, providing an ensemble of references for doctor diagnosis.
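To make the optimization step concrete, here is a minimal sketch of a continuous min-cut (total-variation) energy solved by a first-order primal-dual iteration. Note the hedges: the paper uses an ADMM solver and builds its unary costs from learned U-net features; this sketch uses the simpler Chambolle-Pock iteration and a placeholder cost map.

```python
import numpy as np

def grad(u):
    # Forward differences with Neumann boundary conditions.
    gx, gy = np.zeros_like(u), np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div(px, py):
    # Discrete divergence, the negative adjoint of grad.
    dx, dy = np.zeros_like(px), np.zeros_like(py)
    dx[0, :], dx[1:-1, :], dx[-1, :] = px[0, :], px[1:-1, :] - px[:-2, :], -px[-2, :]
    dy[:, 0], dy[:, 1:-1], dy[:, -1] = py[:, 0], py[:, 1:-1] - py[:, :-2], -py[:, -2]
    return dx + dy

def mincut_primal_dual(f, g, n_iter=300, tau=0.25, sigma=0.25):
    """Solve min_{u in [0,1]} <u, f> + sum_x g(x) |grad u(x)|.

    f: unary cost map (source minus sink cost); in the paper this would come
       from learned U-net features, here it is just an input array.
    g: spatially varying edge weight of the min-cut / TV term.
    """
    u = np.clip(-np.sign(f), 0.0, 1.0)
    u_bar = u.copy()
    px, py = np.zeros_like(u), np.zeros_like(u)
    for _ in range(n_iter):
        # Dual ascent on the flow variables, then projection onto |p| <= g.
        gx, gy = grad(u_bar)
        px, py = px + sigma * gx, py + sigma * gy
        scale = np.maximum(1.0, np.hypot(px, py) / np.maximum(g, 1e-8))
        px, py = px / scale, py / scale
        # Primal descent on the segmentation indicator, clipped to [0, 1].
        u_old = u.copy()
        u = np.clip(u + tau * (div(px, py) - f), 0.0, 1.0)
        u_bar = 2.0 * u - u_old
    return u

# Toy usage: negative unary cost inside a square "tumor", positive outside.
f = np.ones((64, 64)); f[20:44, 20:44] = -1.0
seg = mincut_primal_dual(f, g=np.full(f.shape, 0.5)) > 0.5
```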
Related papers
- Single Neuromorphic Memristor closely Emulates Multiple Synaptic
Mechanisms for Energy Efficient Neural Networks [71.79257685917058]
We demonstrate memristive nano-devices based on SrTiO3 that inherently emulate multiple synaptic functions.
These memristors operate in a non-filamentary, low conductance regime, which enables stable and energy efficient operation.
arXiv Detail & Related papers (2024-02-26T15:01:54Z)
- The limitation of neural nets for approximation and optimization [0.0]
We are interested in assessing the use of neural networks as surrogate models to approximate and minimize objective functions in optimization problems.
Our study begins by determining the best activation function for approximating the objective functions of popular nonlinear optimization test problems.
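As a concrete illustration of the surrogate-model setup (our own toy example, not the authors' code): fit a small MLP to a standard test objective, then minimize the fitted surrogate by gradient descent on its input. The Rosenbrock function and the Tanh activation are arbitrary choices here.

```python
import torch
import torch.nn as nn

def rosenbrock(x):                          # classic test objective, minimum at (1, 1)
    return (1 - x[:, 0]) ** 2 + 100 * (x[:, 1] - x[:, 0] ** 2) ** 2

torch.manual_seed(0)
surrogate = nn.Sequential(nn.Linear(2, 64), nn.Tanh(),
                          nn.Linear(64, 64), nn.Tanh(), nn.Linear(64, 1))
opt = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
xs = torch.rand(4096, 2) * 4 - 2            # training samples in [-2, 2]^2
ys = rosenbrock(xs).unsqueeze(1)
for _ in range(2000):                       # fit the surrogate to the objective
    opt.zero_grad()
    nn.functional.mse_loss(surrogate(xs), ys).backward()
    opt.step()

x = torch.zeros(1, 2, requires_grad=True)   # now minimize the surrogate itself
x_opt = torch.optim.Adam([x], lr=1e-2)
for _ in range(500):
    x_opt.zero_grad()
    surrogate(x).sum().backward()
    x_opt.step()
print(x.detach())                           # near (1, 1) only if the fit is good
```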
arXiv Detail & Related papers (2023-11-21T00:21:15Z)
- ENN: A Neural Network with DCT Adaptive Activation Functions [2.2713084727838115]
We present the Expressive Neural Network (ENN), a novel model in which the non-linear activation functions are modeled using the Discrete Cosine Transform (DCT).
This parametrization keeps the number of trainable parameters low, is appropriate for gradient-based schemes, and adapts to different learning tasks.
ENN outperforms state-of-the-art benchmarks, with an accuracy gap of over 40% in some scenarios.
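One plausible reading of the DCT parametrization, sketched below as a PyTorch module; the basis size, input clamping range, and initialization are our assumptions, not details from the paper.

```python
import torch
import torch.nn as nn

class DCTActivation(nn.Module):
    """Activation expressed as a truncated cosine (DCT-style) series with
    trainable coefficients -- a sketch of the ENN idea, not the authors'
    exact construction."""
    def __init__(self, n_coeffs=8, support=3.0):
        super().__init__()
        self.coeffs = nn.Parameter(torch.zeros(n_coeffs))
        with torch.no_grad():
            self.coeffs[1] = -1.0             # init near a tanh-like monotone shape
        self.support = support
        self.register_buffer("k", torch.arange(n_coeffs, dtype=torch.float32))

    def forward(self, x):
        # Map inputs to [-1, 1], then evaluate the cosine basis element-wise.
        t = x.clamp(-self.support, self.support) / self.support
        basis = torch.cos(torch.pi * self.k * (t.unsqueeze(-1) + 1) / 2)
        return (self.coeffs * basis).sum(dim=-1)

# Drop-in usage inside an ordinary network; the coefficients train with it.
net = nn.Sequential(nn.Linear(16, 32), DCTActivation(), nn.Linear(32, 1))
```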
arXiv Detail & Related papers (2023-07-02T21:46:30Z)
- Globally Optimal Training of Neural Networks with Threshold Activation Functions [63.03759813952481]
We study weight decay regularized training problems of deep neural networks with threshold activations.
We derive a simplified convex optimization formulation when the dataset can be shattered at a certain layer of the network.
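In generic notation (ours, not the paper's), the weight-decay regularized training problem being convexified has the form:

```latex
\min_{\{W_l\}} \; \sum_{i=1}^{n} \ell\big(f(x_i; \{W_l\}),\, y_i\big)
  \;+\; \frac{\beta}{2} \sum_{l} \lVert W_l \rVert_F^2,
\qquad \sigma(t) = \mathbb{1}\{t \ge 0\},
```

where $\beta > 0$ is the weight-decay strength and $\sigma$ is the threshold activation; per the summary above, this admits a simplified convex reformulation when the dataset can be shattered at some layer of the network.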
arXiv Detail & Related papers (2023-03-06T18:59:13Z)
- Permutation Equivariant Neural Functionals [92.0667671999604]
This work studies the design of neural networks that can process the weights or gradients of other neural networks.
We focus on the permutation symmetries that arise in the weights of deep feedforward networks because hidden layer neurons have no inherent order.
In our experiments, we find that permutation equivariant neural functionals are effective on a diverse set of tasks.
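The basic building block can be sketched as a Deep-Sets-style equivariant linear layer; note that the paper's neural functionals use the richer symmetry of hidden-neuron permutations in weight space, so this shows only the underlying pattern.

```python
import torch
import torch.nn as nn

class PermEquivariantLinear(nn.Module):
    """Permuting the n set elements permutes the output the same way:
    a per-element map plus a map of the (permutation-invariant) mean."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.elem = nn.Linear(d_in, d_out)              # acts on each element
        self.pool = nn.Linear(d_in, d_out, bias=False)  # acts on the set mean

    def forward(self, x):                 # x: (batch, n, d_in)
        return self.elem(x) + self.pool(x.mean(dim=1, keepdim=True))

# Equivariance check: permuting inputs permutes outputs identically.
layer = PermEquivariantLinear(4, 8)
x = torch.randn(2, 5, 4)
perm = torch.randperm(5)
assert torch.allclose(layer(x)[:, perm], layer(x[:, perm]), atol=1e-6)
```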
arXiv Detail & Related papers (2023-02-27T18:52:38Z)
- Learning from partially labeled data for multi-organ and tumor segmentation [102.55303521877933]
We propose a Transformer-based dynamic on-demand network (TransDoDNet) that learns to segment organs and tumors on multiple datasets.
A dynamic head enables the network to accomplish multiple segmentation tasks flexibly.
We create a large-scale partially labeled Multi-Organ and Tumor benchmark, termed MOTS, and demonstrate the superior performance of our TransDoDNet over other competitors.
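The dynamic-head idea can be sketched as a controller that generates task-specific convolution weights on the fly; all sizes below (embedding width, a single 1x1 conv output) are illustrative assumptions, not TransDoDNet's actual configuration.

```python
import torch
import torch.nn as nn

class DynamicSegHead(nn.Module):
    """Task-conditioned head in the spirit of DoDNet: a controller maps a
    task embedding to the weights and bias of a 1x1 conv over shared features."""
    def __init__(self, feat_ch=32, n_tasks=7, emb_dim=64):
        super().__init__()
        self.task_emb = nn.Embedding(n_tasks, emb_dim)
        # Controller emits weights plus a bias for one 1x1 conv: feat_ch -> 1.
        self.controller = nn.Linear(emb_dim, feat_ch + 1)

    def forward(self, feats, task_id):       # feats: (B, C, H, W), task_id: (B,)
        params = self.controller(self.task_emb(task_id))      # (B, C + 1)
        w, b = params[:, :-1], params[:, -1]
        logits = torch.einsum("bchw,bc->bhw", feats, w) + b[:, None, None]
        return logits.unsqueeze(1)            # (B, 1, H, W) mask for each task

head = DynamicSegHead()
mask = head(torch.randn(2, 32, 96, 96), torch.tensor([0, 3]))
```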
arXiv Detail & Related papers (2022-11-13T13:03:09Z)
- Consensus Function from an $L_p^q-$norm Regularization Term for its Use as Adaptive Activation Functions in Neural Networks [0.0]
We propose the definition and utilization of an implicit, parametric, non-linear activation function that adapts its shape during the training process.
This increases the space of parameters to optimize within the network, but allows greater flexibility and generalizes the concept of neural networks.
Preliminary results show that using neural networks with this type of adaptive activation function reduces the error in regression and classification examples.
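A toy stand-in for the idea of an activation whose shape is itself optimized (a generic learnable softplus, not the paper's $L_p^q-$norm consensus-function construction): the shape parameter simply joins the network's trainable parameters.

```python
import torch
import torch.nn as nn

class AdaptiveSoftplus(nn.Module):
    """Softplus with a trainable sharpness: as beta grows the shape approaches
    ReLU, as it shrinks the shape flattens. A generic illustration only."""
    def __init__(self):
        super().__init__()
        self.log_beta = nn.Parameter(torch.zeros(1))  # learned during training

    def forward(self, x):
        beta = self.log_beta.exp()
        return nn.functional.softplus(beta * x) / beta

# The activation's parameter is optimized jointly with the layer weights.
net = nn.Sequential(nn.Linear(8, 32), AdaptiveSoftplus(), nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)     # includes log_beta
```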
arXiv Detail & Related papers (2022-06-30T04:48:14Z)
- Comparisons among different stochastic selection of activation layers for convolutional neural networks for healthcare [77.99636165307996]
We classify biomedical images using ensembles of neural networks.
We select our activations among the following ones: ReLU, leaky ReLU, Parametric ReLU, ELU, Adaptive Piecewise Linear Unit, S-Shaped ReLU, Swish, Mish, Mexican Linear Unit, Parametric Deformable Linear Unit, Soft Root Sign.
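The stochastic-selection idea can be sketched as follows: each ensemble member draws its activation layers at random from a pool, and predictions are averaged. The architecture and the pool subset below are our own illustration, not the paper's setup (nn.SiLU is PyTorch's Swish).

```python
import random
import torch
import torch.nn as nn

# A subset of the paper's activation pool.
ACTIVATIONS = [nn.ReLU, nn.LeakyReLU, nn.ELU, nn.SiLU, nn.Mish]

def make_cnn(n_classes=2):
    """Small CNN whose activation layers are drawn independently at random."""
    act = lambda: random.choice(ACTIVATIONS)()
    return nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), act(),
        nn.Conv2d(16, 32, 3, padding=1), act(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, n_classes),
    )

ensemble = [make_cnn() for _ in range(5)]   # members differ in their activations
x = torch.randn(4, 3, 64, 64)               # stand-in for biomedical images
with torch.no_grad():
    probs = torch.stack([m(x).softmax(dim=-1) for m in ensemble]).mean(dim=0)
```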
arXiv Detail & Related papers (2020-11-24T01:53:39Z)
- Towards Efficient Processing and Learning with Spikes: New Approaches for Multi-Spike Learning [59.249322621035056]
We propose two new multi-spike learning rules which demonstrate better performance than other baselines on various tasks.
In the feature detection task, we re-examine the ability of unsupervised STDP and present its limitations.
Our proposed learning rules can reliably solve the task over a wide range of conditions without specific constraints being applied.
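For reference, the classic pairwise STDP window that the feature-detection experiment re-examines (textbook form with illustrative constants; the paper's proposed multi-spike rules are not reproduced here):

```python
import numpy as np

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pairwise STDP weight update. dt = t_post - t_pre in ms: potentiate
    when the presynaptic spike precedes the postsynaptic one, else depress."""
    dt = np.asarray(dt, dtype=float)
    return np.where(dt >= 0, a_plus * np.exp(-dt / tau),
                    -a_minus * np.exp(dt / tau))

print(stdp_dw([-30.0, -5.0, 5.0, 30.0]))  # depression for dt < 0, potentiation for dt > 0
```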
arXiv Detail & Related papers (2020-05-02T06:41:20Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and is not responsible for any consequences arising from its use.