Connections between Operator-splitting Methods and Deep Neural Networks with Applications in Image Segmentation
- URL: http://arxiv.org/abs/2307.09052v3
- Date: Tue, 17 Oct 2023 06:56:51 GMT
- Title: Connections between Operator-splitting Methods and Deep Neural Networks with Applications in Image Segmentation
- Authors: Hao Liu, Xue-Cheng Tai, Raymond Chan
- Abstract summary: How to connect deep neural networks with mathematical algorithms is still under development.
We give an algorithmic explanation for deep neural networks, especially regarding their connections with operator splitting.
We propose two networks, inspired by operator-splitting methods, that solve the Potts model.
- Score: 7.668812831777923
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks are a powerful tool for many tasks. Understanding why
they are so successful and providing a mathematical explanation is an important
problem that has been a popular research direction in recent years. In the
literature on the mathematical analysis of deep neural networks, much work is
dedicated to establishing representation theories. How to connect deep neural
networks with mathematical algorithms is still under development. In this paper,
we give an algorithmic explanation for deep neural networks, especially regarding
their connections with operator splitting. We show that, with certain splitting
strategies, operator-splitting methods have the same structure as networks.
Utilizing this connection and the Potts model for image segmentation, we propose
two networks inspired by operator-splitting methods. The two networks are
essentially two operator-splitting algorithms that solve the Potts model.
Numerical experiments demonstrate the effectiveness of the proposed networks.
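To make the structural claim concrete, here is a minimal illustrative sketch (not the paper's actual scheme). For a relaxed two-phase Potts model, minimizing over u in [0,1] the energy \(\int |\nabla u|\,dx + \lambda \int f u\,dx\), one splitting step separates into a linear convolution substep (for the regularizer) and a pointwise projection substep (for the data term), which is exactly the convolution-plus-activation structure of a network layer. The step size `tau`, weight `lam`, the diffusion stand-in for the regularizer, and the toy data below are all assumptions made for illustration.

```python
# Hedged sketch: one operator-splitting step for a relaxed two-phase
# Potts segmentation model, written so its conv-plus-activation
# structure is explicit. Parameters and the explicit discretization
# are illustrative assumptions, not the scheme from the paper.
import numpy as np
from scipy.ndimage import convolve

LAPLACIAN = np.array([[0.0,  1.0, 0.0],
                      [1.0, -4.0, 1.0],
                      [0.0,  1.0, 0.0]])

def splitting_step(u, f, tau=0.1, lam=1.0):
    """One split step on the relaxed Potts energy."""
    # Substep 1 (linear): an explicit diffusion step, i.e. a fixed
    # 3x3 convolution -- structurally a convolutional layer.
    u = u + tau * convolve(u, LAPLACIAN, mode="nearest")
    # Substep 2 (pointwise): a gradient step on the data term,
    # then projection onto [0, 1] -- structurally an activation
    # function applied entrywise.
    return np.clip(u - tau * lam * f, 0.0, 1.0)

# Usage: iterate the step on a toy region-force term f; thresholding
# the relaxed label u at 0.5 yields a two-phase segmentation.
rng = np.random.default_rng(0)
f = rng.standard_normal((64, 64))
u = np.full((64, 64), 0.5)
for _ in range(100):
    u = splitting_step(u, f)
segmentation = u > 0.5
```

Stacking such steps, with learnable rather than fixed convolution kernels, is in spirit how the paper's operator-splitting-inspired networks are obtained.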
Related papers
- Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
arXiv Detail & Related papers (2024-03-18T18:01:01Z) - Double-well Net for Image Segmentation [10.424879461404581]
We introduce two novel deep neural network models for image segmentation, called Double-well Nets.
Drawing inspiration from the Potts model, our models leverage neural networks to represent a region force functional.
We demonstrate the performance of Double-well Nets, showcasing their superior accuracy and robustness compared to state-of-the-art neural networks.
arXiv Detail & Related papers (2023-12-31T11:16:12Z) - Image segmentation with traveling waves in an exactly solvable recurrent neural network [71.74150501418039]
We show that a recurrent neural network can effectively divide an image into groups according to a scene's structural characteristics.
We present a precise description of the mechanism underlying object segmentation in this network.
We then demonstrate a simple algorithm for object segmentation that generalizes across inputs ranging from simple geometric objects in grayscale images to natural images.
arXiv Detail & Related papers (2023-11-28T16:46:44Z) - PottsMGNet: A Mathematical Explanation of Encoder-Decoder Based Neural Networks [7.668812831777923]
We study the encoder-decoder-based network architecture from the algorithmic perspective.
We use the two-phase Potts model for image segmentation as an example for our explanations.
We show that the resulting discrete PottsMGNet is equivalent to an encoder-decoder-based network.
arXiv Detail & Related papers (2023-07-18T07:48:48Z) - A neural anisotropic view of underspecification in deep learning [60.119023683371736]
We show that the way neural networks handle the underspecification of problems is highly dependent on the data representation.
Our results highlight that understanding the architectural inductive bias in deep learning is fundamental to addressing the fairness, robustness, and generalization of these systems.
arXiv Detail & Related papers (2021-04-29T14:31:09Z) - The Connection Between Approximation, Depth Separation and Learnability in Neural Networks [70.55686685872008]
We study the connection between learnability and approximation capacity.
We show that learnability with deep networks of a target function depends on the ability of simpler classes to approximate the target.
arXiv Detail & Related papers (2021-01-31T11:32:30Z) - Feature Sharing Cooperative Network for Semantic Segmentation [10.305130700118399]
We propose a semantic segmentation method using cooperative learning.
By sharing feature maps, each of the two networks can obtain information that a single network cannot.
The proposed method achieves better segmentation accuracy than a conventional single network and an ensemble of networks.
arXiv Detail & Related papers (2021-01-20T00:22:00Z) - NAS-Navigator: Visual Steering for Explainable One-Shot Deep Neural Network Synthesis [53.106414896248246]
We present a framework that allows analysts to effectively build the solution sub-graph space and guide the network search by injecting their domain knowledge.
Applying this technique in an iterative manner allows analysts to converge to the best performing neural network architecture for a given application.
arXiv Detail & Related papers (2020-09-28T01:48:45Z) - The Representation Theory of Neural Networks [7.724617675868718]
We show that neural networks can be represented via the mathematical theory of quiver representations.
We show that network quivers gently adapt to common neural network concepts.
We also provide a quiver representation model to understand how a neural network creates representations from the data.
arXiv Detail & Related papers (2020-07-23T19:02:14Z) - Deep Representation Learning For Multimodal Brain Networks [9.567489601729328]
We propose a novel end-to-end deep graph representation learning framework (Deep Multimodal Brain Networks, DMBN) to fuse multimodal brain networks.
The higher-order network mappings from brain structural networks to functional networks are learned in the node domain.
The experimental results show the superiority of the proposed method over other state-of-the-art deep brain network models.
arXiv Detail & Related papers (2020-07-19T20:32:05Z) - ESPN: Extremely Sparse Pruned Networks [50.436905934791035]
We show that a simple iterative mask discovery method can achieve state-of-the-art compression of very deep networks.
Our algorithm represents a hybrid approach between single-shot network pruning methods and Lottery-Ticket-type approaches.
arXiv Detail & Related papers (2020-06-28T23:09:27Z)