Discrete Morphological Neural Networks
- URL: http://arxiv.org/abs/2309.00588v2
- Date: Fri, 9 Feb 2024 01:37:49 GMT
- Title: Discrete Morphological Neural Networks
- Authors: Diego Marcondes and Junior Barrera
- Abstract summary: We propose the Discrete Morphological Neural Networks (DMNN) for binary image analysis to represent W-operators.
As a proof-of-concept, we apply the DMNN to recognize the boundary of digits with noise.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A classical approach to designing binary image operators is Mathematical
Morphology (MM). We propose the Discrete Morphological Neural Networks (DMNN)
for binary image analysis to represent W-operators and estimate them via
machine learning. A DMNN architecture, which is represented by a Morphological
Computational Graph, is designed as in the classical heuristic design of
morphological operators, in which the designer should combine a set of MM
operators and Boolean operations based on prior information and theoretical
knowledge. Then, once the architecture is fixed, instead of adjusting its
parameters (i.e., structural elements or maximal intervals) by hand, we propose
a lattice descent algorithm (LDA) to train these parameters based on a sample
of input and output images under the usual machine learning approach. We also
propose a stochastic version of the LDA that is more efficient, scalable, and
can achieve small errors in practical problems. The class represented by a DMNN
can be quite general or specialized according to expected properties of the
target operator, i.e., prior information, and the semantics expressed by the
algebraic properties of classes of operators is a distinguishing feature
relative to other methods. The main contribution of this paper is the merger of the two
main paradigms for designing morphological operators: classical heuristic
design and automatic design via machine learning. As a proof-of-concept, we
apply the DMNN to recognize the boundary of digits with noise, and we discuss
many topics for future research.
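To make the construction above concrete, the following is a minimal, illustrative Python sketch (not the authors' implementation): a single-node morphological computational graph whose W-operator extracts the internal boundary X \ erosion(X, SE), with the 3x3 structural element SE trained by a greedy descent on the Boolean lattice of its entries. The window size, the choice of boundary operator, the use of scipy.ndimage for the morphology, and the one-bit-flip descent rule are assumptions made for illustration; the paper's LDA and architectures are more general.

```python
# Illustrative sketch only (assumptions noted above), not the paper's code:
# a one-node "morphological computational graph" computing the internal
# boundary x \ erosion(x, SE), with SE trained by greedy lattice descent.
import numpy as np
from scipy.ndimage import binary_erosion

def boundary_operator(x, se):
    """Internal boundary: pixels of x removed by erosion with SE."""
    return x & ~binary_erosion(x, structure=se)

def empirical_error(se, samples):
    """Mean pixel-wise error of the operator over (input, target) pairs."""
    return np.mean([np.mean(boundary_operator(x, se) != y) for x, y in samples])

def lattice_descent(samples, n_epochs=10, rng=np.random.default_rng(0)):
    """Greedy descent on the Boolean lattice of 3x3 structural elements:
    flip one entry at a time and keep the flip if the empirical error drops."""
    se = np.ones((3, 3), dtype=bool)           # start at the top of the lattice
    best = empirical_error(se, samples)
    for _ in range(n_epochs):
        for idx in rng.permutation(se.size):    # visit entries in random order
            candidate = se.copy()
            candidate.flat[idx] = ~candidate.flat[idx]
            err = empirical_error(candidate, samples)
            if err < best:                      # strict descent step
                se, best = candidate, err
    return se, best

if __name__ == "__main__":
    # Toy data: noisy squares as inputs, boundaries of the clean squares as targets.
    rng = np.random.default_rng(1)
    samples = []
    for _ in range(8):
        clean = np.zeros((16, 16), dtype=bool)
        clean[4:12, 4:12] = True
        target = clean & ~binary_erosion(clean, structure=np.ones((3, 3), bool))
        noisy = clean ^ (rng.random(clean.shape) < 0.02)   # flip ~2% of pixels
        samples.append((noisy, target))
    se, err = lattice_descent(samples)
    print("learned structural element:\n", se.astype(int), "\nempirical error:", err)
```

In the paper, a DMNN architecture may compose several such operators with Boolean operations, and the stochastic version of the LDA evaluates only a sample of lattice neighbours at each step, which is what makes it more efficient and scalable.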
Related papers
- Enhancing lattice kinetic schemes for fluid dynamics with Lattice-Equivariant Neural Networks [79.16635054977068]
We present a new class of equivariant neural networks, dubbed Lattice-Equivariant Neural Networks (LENNs)
Our approach develops within a recently introduced framework aimed at learning neural network-based surrogate models of Lattice Boltzmann collision operators.
Our work opens towards practical utilization of machine learning-augmented Lattice Boltzmann CFD in real-world simulations.
arXiv Detail & Related papers (2024-05-22T17:23:15Z) - An Algorithm to Train Unrestricted Sequential Discrete Morphological Neural Networks [0.0]
We propose an algorithm to learn unrestricted sequential DMNN, whose architecture is given by the composition of general W-operators.
We illustrate the algorithm in a practical example.
arXiv Detail & Related papers (2023-10-06T20:55:05Z) - The Parametric Complexity of Operator Learning [6.800286371280922]
This paper aims to prove that for general classes of operators which are characterized only by their $C^r$- or Lipschitz-regularity, operator learning suffers from a "curse of parametric complexity".
The second contribution of the paper is to prove that this general curse can be overcome for solution operators defined by the Hamilton-Jacobi equation.
A novel neural operator architecture is introduced, termed HJ-Net, which explicitly takes into account characteristic information of the underlying Hamiltonian system.
arXiv Detail & Related papers (2023-06-28T05:02:03Z) - FAENet: Frame Averaging Equivariant GNN for Materials Modeling [123.19473575281357]
We introduce a flexible framework relying on stochastic frame averaging (SFA) to make any model E(3)-equivariant or invariant through data transformations.
We prove the validity of our method theoretically and empirically demonstrate its superior accuracy and computational scalability in materials modeling.
arXiv Detail & Related papers (2023-04-28T21:48:31Z) - Experimental Observations of the Topology of Convolutional Neural Network Activations [2.4235626091331737]
Topological data analysis provides compact, noise-robust representations of complex structures.
Deep neural networks (DNNs) learn millions of parameters associated with a series of transformations defined by the model architecture.
In this paper, we apply cutting-edge techniques from TDA with the goal of gaining insight into the interpretability of convolutional neural networks used for image classification.
arXiv Detail & Related papers (2022-12-01T02:05:44Z) - A simple probabilistic neural network for machine understanding [0.0]
We discuss probabilistic neural networks with a fixed internal representation as models for machine understanding.
We derive the internal representation by requiring that it satisfies the principles of maximal relevance and of maximal ignorance about how different features are combined.
We argue that learning machines with this architecture enjoy a number of interesting properties, like the continuity of the representation with respect to changes in parameters and data.
arXiv Detail & Related papers (2022-10-24T13:00:15Z) - Universal approximation property of invertible neural networks [76.95927093274392]
Invertible neural networks (INNs) are neural network architectures with invertibility by design.
Thanks to their invertibility and the tractability of their Jacobians, INNs have various machine learning applications such as probabilistic modeling, generative modeling, and representation learning.
arXiv Detail & Related papers (2022-04-15T10:45:26Z) - Dynamic Inference with Neural Interpreters [72.90231306252007]
We present Neural Interpreters, an architecture that factorizes inference in a self-attention network as a system of modules.
Inputs to the model are routed through a sequence of functions in a way that is learned end-to-end.
We show that Neural Interpreters perform on par with the vision transformer using fewer parameters, while being transferable to a new task in a sample-efficient manner.
arXiv Detail & Related papers (2021-10-12T23:22:45Z) - Frame Averaging for Invariant and Equivariant Network Design [50.87023773850824]
We introduce Frame Averaging (FA), a framework for adapting known (backbone) architectures to become invariant or equivariant to new symmetry types.
We show that FA-based models have maximal expressive power in a broad setting.
We propose a new class of universal Graph Neural Networks (GNNs), universal Euclidean motion invariant point cloud networks, and Euclidean motion invariant Message Passing (MP) GNNs.
arXiv Detail & Related papers (2021-10-07T11:05:23Z) - Learning Deep Morphological Networks with Neural Architecture Search [19.731352645511052]
We propose a method based on meta-learning to incorporate morphological operators into Deep Neural Networks.
The learned architecture demonstrates how our novel morphological operations significantly increase DNN performance on various tasks.
arXiv Detail & Related papers (2021-06-14T19:19:48Z) - Stability of Algebraic Neural Networks to Small Perturbations [179.55535781816343]
Algebraic neural networks (AlgNNs) are composed of a cascade of layers, each one associated with an algebraic signal model.
We show how any architecture that uses a formal notion of convolution can be stable beyond particular choices of the shift operator.
arXiv Detail & Related papers (2020-10-22T09:10:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.