Double-well Net for Image Segmentation
- URL: http://arxiv.org/abs/2401.00456v2
- Date: Sun, 28 Jul 2024 08:40:34 GMT
- Title: Double-well Net for Image Segmentation
- Authors: Hao Liu, Jun Liu, Raymond H. Chan, Xue-Cheng Tai
- Abstract summary: We introduce two novel deep neural network models for image segmentation known as Double-well Nets.
Drawing inspiration from the Potts model, our models leverage neural networks to represent a region force functional.
We demonstrate the performance of Double-well Nets, showcasing their superior accuracy and robustness compared to state-of-the-art neural networks.
- Score: 10.424879461404581
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this study, our goal is to integrate classical mathematical models with deep neural networks by introducing two novel deep neural network models for image segmentation, known as Double-well Nets. Drawing inspiration from the Potts model, our models leverage neural networks to represent a region force functional. The widely recognized Potts model is approximated using a double-well potential and then solved by an operator-splitting method, which turns out to be an extension of the well-known MBO (Merriman-Bence-Osher) scheme. Subsequently, we replace the region force functional in the Potts model with a UNet-type network, which is data-driven and designed to capture multiscale features of images, and we introduce control variables to enhance effectiveness. The resulting algorithm is a neural network activated by a function that minimizes the double-well potential. What sets the proposed Double-well Nets apart from many existing deep learning methods for image segmentation is their strong mathematical foundation: they are derived from network approximation theory and employ the MBO scheme to approximately solve the Potts model. By incorporating mathematical principles, Double-well Nets bridge the MBO scheme and neural networks, offering an alternative perspective for designing networks with a mathematical background. Through comprehensive experiments, we demonstrate the performance of Double-well Nets, showcasing their superior accuracy and robustness compared to state-of-the-art neural networks. Overall, our work combines the strengths of classical variational models and deep neural networks, leveraging mathematical foundations to improve segmentation performance.
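A hedged sketch of the construction may help. In a common two-phase relaxation of the Potts model (notation assumed here, not necessarily the paper's exact formulation), the double-well potential W(u) = u^2(1-u)^2 penalizes values of the indicator function u away from the wells {0, 1}, and a region force f couples the segmentation to the image:

```latex
% Assumed two-phase relaxed Potts energy; f is the region force,
% which Double-well Nets replace with a UNet-type network.
\min_{u} \; \int_\Omega |\nabla u| \, dx
  \;+\; \lambda \int_\Omega f \, u \, dx
  \;+\; \frac{1}{\epsilon} \int_\Omega u^2 (1 - u)^2 \, dx
```

Operator splitting then alternates a region-force step with a pointwise step that decreases the potential. A minimal PyTorch sketch of one unrolled iteration follows; DoubleWellBlock, region_force, tau, and the explicit gradient descent on W are illustrative assumptions, not the paper's exact scheme:

```python
import torch
import torch.nn as nn

class DoubleWellBlock(nn.Module):
    """One hypothetical unrolled iteration of a Double-well-Net-style scheme."""

    def __init__(self, region_force: nn.Module, tau: float = 0.1, n_inner: int = 5):
        super().__init__()
        self.region_force = region_force  # UNet-type network; maps (u, image) channels to 1 channel
        self.tau = tau                    # splitting time step (assumed)
        self.n_inner = n_inner            # pointwise descent steps on the potential

    def forward(self, u: torch.Tensor, image: torch.Tensor) -> torch.Tensor:
        # Step 1: data-driven region-force step (the learned functional).
        u = u - self.tau * self.region_force(torch.cat([u, image], dim=1))
        # Step 2: pointwise descent on W(u) = u^2 (1 - u)^2, whose derivative
        # is W'(u) = 2 u (1 - u)(1 - 2u); this drives u toward the binary
        # wells {0, 1}, a smooth stand-in for the MBO thresholding step.
        for _ in range(self.n_inner):
            u = u - self.tau * 2.0 * u * (1.0 - u) * (1.0 - 2.0 * u)
        return u.clamp(0.0, 1.0)
```

Stacking several such blocks and training region_force end to end is the rough analogue of the unrolled network the abstract describes.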
Related papers
- Towards Scalable and Versatile Weight Space Learning [51.78426981947659]
This paper introduces the SANE approach to weight-space learning.
Our method extends the idea of hyper-representations towards sequential processing of subsets of neural network weights.
arXiv Detail & Related papers (2024-06-14T13:12:07Z)
- Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
arXiv Detail & Related papers (2024-03-18T18:01:01Z)
- Manipulating Feature Visualizations with Gradient Slingshots [54.31109240020007]
We introduce a novel method for manipulating Feature Visualization (FV) without significantly impacting the model's decision-making process.
We evaluate the effectiveness of our method on several neural network models and demonstrate its capabilities to hide the functionality of arbitrarily chosen neurons.
arXiv Detail & Related papers (2024-01-11T18:57:17Z)
- Connections between Operator-splitting Methods and Deep Neural Networks with Applications in Image Segmentation [7.668812831777923]
How to make connections between deep neural networks and mathematical algorithms is still under development.
We show an algorithmic explanation for deep neural networks, especially in their connections with operator splitting.
We propose two networks inspired by operator-splitting methods solving the Potts model.
arXiv Detail & Related papers (2023-07-18T08:06:14Z)
- Exploring the Approximation Capabilities of Multiplicative Neural Networks for Smooth Functions [9.936974568429173]
We consider two classes of target functions: generalized bandlimited functions and Sobolev-Type balls.
Our results demonstrate that multiplicative neural networks can approximate these functions with significantly fewer layers and neurons.
These findings suggest that multiplicative gates can outperform standard feed-forward layers and have potential for improving neural network design.
arXiv Detail & Related papers (2023-01-11T17:57:33Z)
- Deep Image Clustering with Contrastive Learning and Multi-scale Graph Convolutional Networks [58.868899595936476]
This paper presents a new deep clustering approach termed image clustering with contrastive learning and multi-scale graph convolutional networks (IcicleGCN).
Experiments on multiple image datasets demonstrate the superior clustering performance of IcicleGCN over the state-of-the-art.
arXiv Detail & Related papers (2022-07-14T19:16:56Z)
- Adaptive Convolutional Dictionary Network for CT Metal Artifact Reduction [62.691996239590125]
We propose an adaptive convolutional dictionary network (ACDNet) for metal artifact reduction.
Our ACDNet can automatically learn the prior for artifact-free CT images via training data and adaptively adjust the representation kernels for each input CT image.
Our method inherits the clear interpretability of model-based methods and maintains the powerful representation ability of learning-based methods.
arXiv Detail & Related papers (2022-05-16T06:49:36Z)
- Segmenting two-dimensional structures with strided tensor networks [1.952097552284465]
We propose a novel formulation of tensor networks for supervised image segmentation.
The proposed model is end-to-end trainable using backpropagation.
The evaluation shows that the strided tensor network yields competitive performance compared to CNN-based models.
arXiv Detail & Related papers (2021-02-13T11:06:34Z)
- On Resource-Efficient Bayesian Network Classifiers and Deep Neural Networks [14.540226579203207]
We present two methods to reduce the complexity of Bayesian network (BN) classifiers.
First, we introduce quantization-aware training using the straight-through gradient estimator to quantize the parameters of BNs to few bits (a minimal sketch of this estimator appears after this list).
Second, we extend a recently proposed differentiable tree-augmented naive Bayes (TAN) structure learning approach by also considering the model size.
arXiv Detail & Related papers (2020-10-22T14:47:55Z)
- Tensor Networks for Medical Image Classification [0.456877715768796]
We focus on the class of Tensor Networks, which has been a workhorse for physicists in the last two decades to analyse quantum many-body systems.
We extend the Matrix Product State tensor networks to be useful in medical image analysis tasks.
We show that tensor networks are capable of attaining performance that is comparable to state-of-the-art deep learning methods.
arXiv Detail & Related papers (2020-04-21T15:02:58Z)
- Binarized Graph Neural Network [65.20589262811677]
We develop a binarized graph neural network to learn the binary representations of the nodes with binary network parameters.
Our proposed method can be seamlessly integrated into the existing GNN-based embedding approaches.
Experiments indicate that the proposed binarized graph neural network, namely BGN, is orders of magnitude more efficient in terms of both time and space.
arXiv Detail & Related papers (2020-04-19T09:43:14Z)
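The straight-through estimator referenced in the resource-efficient Bayesian network classifier entry above is a standard trick: quantize in the forward pass, but let gradients flow to the full-precision parameters as if the rounding were the identity. A minimal sketch follows; the uniform grid on [-1, 1] and the 4-bit width are assumptions for illustration, not the cited paper's scheme:

```python
import torch

class QuantizeSTE(torch.autograd.Function):
    """Few-bit uniform quantization with a straight-through gradient.

    Sketch only: the quantization grid and bit-width are assumed here,
    not taken from the cited paper.
    """

    @staticmethod
    def forward(ctx, w: torch.Tensor, n_bits: int = 4) -> torch.Tensor:
        # Map w, clipped to [-1, 1], onto 2^n_bits uniformly spaced levels.
        levels = 2 ** n_bits - 1
        w = w.clamp(-1.0, 1.0)
        return torch.round((w + 1.0) / 2.0 * levels) / levels * 2.0 - 1.0

    @staticmethod
    def backward(ctx, grad_output: torch.Tensor):
        # Straight-through: treat rounding as the identity so gradients
        # reach the underlying full-precision weights unchanged.
        return grad_output, None

# Usage: quantized weights in the forward pass, full-precision updates.
w_fp = torch.randn(8, requires_grad=True)
QuantizeSTE.apply(w_fp, 4).sum().backward()
print(w_fp.grad)  # all ones: the backward pass ignored the rounding
```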