Going beyond p-convolutions to learn grayscale morphological operators
- URL: http://arxiv.org/abs/2102.10038v1
- Date: Fri, 19 Feb 2021 17:22:16 GMT
- Title: Going beyond p-convolutions to learn grayscale morphological operators
- Authors: Alexandre Kirszenberg, Guillaume Tochon, Elodie Puybareau and Jesus Angulo
- Abstract summary: We present two new morphological layers based on the same principle as the p-convolutional layer.
- Score: 64.38361575778237
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Integrating mathematical morphology operations within deep neural networks
has been subject to increasing attention lately. However, replacing standard
convolution layers with erosions or dilations is particularly challenging
because the min and max operations are not differentiable. Relying on the
asymptotic behavior of the counter-harmonic mean, p-convolutional layers were
proposed as a possible workaround to this issue since they can perform
pseudo-dilation or pseudo-erosion operations (depending on the value of their
inner parameter p), and very promising results were reported. In this work, we
present two new morphological layers based on the same principle as the
p-convolutional layer while circumventing its principal drawbacks, and
demonstrate their potential for integration within deep convolutional neural
network architectures.
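As a rough illustration of the counter-harmonic mean (CHM) mechanism behind p-convolutions, the sketch below implements PConv(f; w, p) = (f^{p+1} * w) / (f^p * w) as a PyTorch layer: large positive p approximates a dilation, large negative p an erosion, and p = 0 recovers an ordinary (normalized) convolution. The class name, depthwise filter layout, and stabilizing epsilon are illustrative assumptions, not the authors' reference implementation.

```python
import torch
import torch.nn as nn

class PConv2d(nn.Module):
    """Sketch of a p-convolution layer based on the counter-harmonic mean:
        PConv(f; w, p) = (f^{p+1} * w) / (f^p * w).
    Pseudo-dilation for large p > 0, pseudo-erosion for large p < 0,
    plain convolution for p = 0. Illustrative, not the paper's code.
    """
    def __init__(self, channels, kernel_size=3, p_init=1.0, eps=1e-6):
        super().__init__()
        # One depthwise filter per channel, playing the role of a structuring
        # element; for a morphological reading the weights would typically be
        # kept non-negative (not enforced in this sketch).
        self.conv = nn.Conv2d(channels, channels, kernel_size,
                              padding=kernel_size // 2,
                              groups=channels, bias=False)
        # The morphological behaviour is controlled by the learnable scalar p.
        self.p = nn.Parameter(torch.tensor(p_init))
        self.eps = eps

    def forward(self, x):
        # Inputs are assumed positive; clamp to keep the powers well defined.
        x = x.clamp(min=self.eps)
        num = self.conv(x.pow(self.p + 1))  # f^{p+1} * w
        den = self.conv(x.pow(self.p))      # f^{p}   * w
        return num / (den + self.eps)

# Usage: a large |p| pushes the layer toward dilation (p > 0) or erosion (p < 0).
layer = PConv2d(channels=1, kernel_size=3, p_init=5.0)
y = layer(torch.rand(1, 1, 32, 32))
```

Because p enters the forward pass smoothly, it can be trained by gradient descent alongside the filter weights, which is how this construction sidesteps the non-differentiability of min and max.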
Related papers
- Topological derivative approach for deep neural network architecture adaptation [0.6144680854063939]
This work presents a novel algorithm for progressively adapting neural network architecture along the depth.
We show that the optimality condition for the shape functional leads to an eigenvalue problem for deep neural architecture adaptation.
Our approach thus determines the most sensitive location along the depth where a new layer needs to be inserted.
arXiv Detail & Related papers (2025-02-08T23:01:07Z)
- Compositional Generalization Across Distributional Shifts with Sparse Tree Operations [77.5742801509364]
We introduce a unified neurosymbolic architecture called the Differentiable Tree Machine.
We significantly increase the model's efficiency through the use of sparse vector representations of symbolic structures.
We enable its application beyond the restricted set of tree2tree problems to the more general class of seq2seq problems.
arXiv Detail & Related papers (2024-12-18T17:20:19Z)
- Deep-Unrolling Multidimensional Harmonic Retrieval Algorithms on Neuromorphic Hardware [78.17783007774295]
This paper explores the potential of conversion-based neuromorphic algorithms for highly accurate and energy-efficient single-snapshot multidimensional harmonic retrieval.
A novel method for converting the complex-valued convolutional layers and activations into spiking neural networks (SNNs) is developed.
The converted SNNs achieve almost five-fold power efficiency at moderate performance loss compared to the original CNNs.
arXiv Detail & Related papers (2024-12-05T09:41:33Z)
- A topological description of loss surfaces based on Betti Numbers [8.539445673580252]
We provide a topological measure to evaluate loss complexity in the case of multilayer neural networks.
We find that certain variations in the loss function or model architecture, such as adding an $\ell$ regularization term or skip connections in a feedforward network, do not affect the loss complexity in specific cases.
arXiv Detail & Related papers (2024-01-08T11:20:04Z)
- Globally Optimal Training of Neural Networks with Threshold Activation Functions [63.03759813952481]
We study weight decay regularized training problems of deep neural networks with threshold activations.
We derive a simplified convex optimization formulation when the dataset can be shattered at a certain layer of the network.
arXiv Detail & Related papers (2023-03-06T18:59:13Z)
- Topological Data Analysis of Neural Network Layer Representations [0.0]
Topological features of a simple feedforward neural network's layer representations of a modified torus with a Klein bottle-like twist were computed.
The resulting noise hampered the ability of persistent homology to compute these features.
arXiv Detail & Related papers (2022-07-01T00:51:19Z)
- Non-asymptotic Excess Risk Bounds for Classification with Deep Convolutional Neural Networks [6.051520664893158]
We consider the problem of binary classification with a class of general deep convolutional neural networks.
We define the prefactors of the risk bounds in terms of the input data dimension and other model parameters.
We show that the classification methods with CNNs can circumvent the curse of dimensionality.
arXiv Detail & Related papers (2021-05-01T15:55:04Z)
- Topological obstructions in neural networks learning [67.8848058842671]
We study global properties of the loss gradient function flow.
We use topological data analysis of the loss function and its Morse complex to relate local behavior along gradient trajectories with global properties of the loss surface.
arXiv Detail & Related papers (2020-12-31T18:53:25Z)
- Beyond Dropout: Feature Map Distortion to Regularize Deep Neural Networks [107.77595511218429]
In this paper, we investigate the empirical Rademacher complexity related to intermediate layers of deep neural networks.
We propose a feature distortion method (Disout) for addressing the aforementioned problem.
The superiority of the proposed feature map distortion for producing deep neural networks with higher testing performance is analyzed and demonstrated.
arXiv Detail & Related papers (2020-02-23T13:59:13Z)
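For intuition on the feature-map distortion idea in the last entry (Disout), here is a minimal sketch that perturbs randomly selected activations with scaled noise instead of zeroing them as dropout does; the element-wise mask, Gaussian noise model, and alpha scaling are assumptions for illustration, not the paper's exact algorithm.

```python
import torch
import torch.nn as nn

class FeatureDistortion(nn.Module):
    """Minimal sketch of dropout-style feature map distortion in the spirit
    of Disout: randomly chosen activations are distorted rather than zeroed.
    Mask granularity, noise model and alpha are illustrative assumptions.
    """
    def __init__(self, drop_prob=0.1, alpha=1.0):
        super().__init__()
        self.drop_prob = drop_prob
        self.alpha = alpha  # distortion intensity (assumed hyperparameter)

    def forward(self, x):
        if not self.training or self.drop_prob == 0.0:
            return x  # identity at inference time, like dropout
        # Select activations to perturb, as a dropout mask would.
        mask = (torch.rand_like(x) < self.drop_prob).float()
        # Noise scaled by the feature map's own statistics, instead of zeroing.
        noise = self.alpha * x.std() * torch.randn_like(x)
        return x + mask * noise
```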
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.