Going beyond p-convolutions to learn grayscale morphological operators
- URL: http://arxiv.org/abs/2102.10038v1
- Date: Fri, 19 Feb 2021 17:22:16 GMT
- Title: Going beyond p-convolutions to learn grayscale morphological operators
- Authors: Alexandre Kirszenberg, Guillaume Tochon, Elodie Puybareau and Jesus
Angulo
- Abstract summary: We present two new morphological layers based on the same principle as the p-convolutional layer.
- Score: 64.38361575778237
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Integrating mathematical morphology operations within deep neural networks
has been subject to increasing attention lately. However, replacing standard
convolution layers with erosions or dilations is particularly challenging
because the min and max operations are not differentiable. Relying on the
asymptotic behavior of the counter-harmonic mean, p-convolutional layers were
proposed as a possible workaround to this issue since they can perform
pseudo-dilation or pseudo-erosion operations (depending on the value of their
inner parameter p), and very promising results were reported. In this work, we
present two new morphological layers based on the same principle as the
p-convolutional layer while circumventing its principal drawbacks, and
demonstrate their potential interest in further implementations within deep
convolutional neural network architectures.
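For context on the mechanism the abstract relies on, below is a minimal, illustrative sketch of a counter-harmonic-mean (p-convolution) layer in PyTorch. It is not the authors' reference implementation: the class name PConv2d, the softplus constraint keeping kernel weights positive, and the eps stabilizer are assumptions of this sketch.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PConv2d(nn.Module):
    """Illustrative pseudo-morphological layer based on the counter-harmonic mean:
    PConv_p(f, w) = (f^(p+1) * w) / (f^p * w), where * is an ordinary convolution
    and w is a non-negative kernel. Large positive p approximates a grayscale
    dilation, large negative p an erosion, and p = 0 recovers a standard convolution."""

    def __init__(self, kernel_size=3, p_init=0.0, eps=1e-6):
        super().__init__()
        # Learnable kernel (acting as a soft structuring element) and learnable p.
        self.weight = nn.Parameter(torch.ones(1, 1, kernel_size, kernel_size))
        self.p = nn.Parameter(torch.tensor(float(p_init)))
        self.pad = kernel_size // 2
        self.eps = eps

    def forward(self, f):
        # f: positive grayscale image batch of shape (B, 1, H, W).
        f = f.clamp_min(self.eps)            # avoid 0^p and division-by-zero issues
        w = F.softplus(self.weight)          # keep kernel weights positive
        num = F.conv2d(f ** (self.p + 1), w, padding=self.pad)
        den = F.conv2d(f ** self.p, w, padding=self.pad)
        return num / (den + self.eps)

# Example: a large positive p behaves like a pseudo-dilation.
img = torch.rand(1, 1, 32, 32)
layer = PConv2d(kernel_size=3, p_init=5.0)
out = layer(img)
print(out.shape)  # torch.Size([1, 1, 32, 32])
```

Intuitively, for large positive p the weighted counter-harmonic mean is dominated by the largest pixel values under the kernel support, hence the pseudo-dilation behavior; large negative p emphasizes the smallest values and yields a pseudo-erosion.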
Related papers
- A topological description of loss surfaces based on Betti Numbers [8.539445673580252]
We provide a topological measure to evaluate loss complexity in the case of multilayer neural networks.
We find that certain variations in the loss function or model architecture, such as adding an $\ell$ regularization term or skip connections in a feedforward network, do not affect loss in specific cases.
arXiv Detail & Related papers (2024-01-08T11:20:04Z)
- Addressing caveats of neural persistence with deep graph persistence [54.424983583720675]
We find that the variance of network weights and spatial concentration of large weights are the main factors that impact neural persistence.
We propose an extension of the filtration underlying neural persistence to the whole neural network instead of single layers.
This yields our deep graph persistence measure, which implicitly incorporates persistent paths through the network and alleviates variance-related issues.
arXiv Detail & Related papers (2023-07-20T13:34:11Z)
- Globally Optimal Training of Neural Networks with Threshold Activation Functions [63.03759813952481]
We study weight decay regularized training problems of deep neural networks with threshold activations.
We derive a simplified convex optimization formulation when the dataset can be shattered at a certain layer of the network.
arXiv Detail & Related papers (2023-03-06T18:59:13Z)
- Topological Data Analysis of Neural Network Layer Representations [0.0]
The topological features of a simple feedforward neural network's layer representations of a modified torus with a Klein bottle-like twist were computed.
The resulting noise hampered the ability of persistent homology to compute these features.
arXiv Detail & Related papers (2022-07-01T00:51:19Z)
- Non-asymptotic Excess Risk Bounds for Classification with Deep Convolutional Neural Networks [6.051520664893158]
We consider the problem of binary classification with a class of general deep convolutional neural networks.
We define the prefactors of the risk bounds in terms of the input data dimension and other model parameters.
We show that classification methods based on CNNs can circumvent the curse of dimensionality.
arXiv Detail & Related papers (2021-05-01T15:55:04Z)
- Topological obstructions in neural networks learning [67.8848058842671]
We study global properties of the loss gradient function flow.
We use topological data analysis of the loss function and its Morse complex to relate local behavior along gradient trajectories with global properties of the loss surface.
arXiv Detail & Related papers (2020-12-31T18:53:25Z)
- Relaxing the Constraints on Predictive Coding Models [62.997667081978825]
Predictive coding is an influential theory of cortical function which posits that the principal computation the brain performs is the minimization of prediction errors.
Standard implementations of the algorithm still involve potentially neurally implausible features such as identical forward and backward weights, backward nonlinear derivatives, and 1-1 error unit connectivity.
In this paper, we show that these features are not integral to the algorithm and can be removed either directly or through learning additional sets of parameters with Hebbian update rules without noticeable harm to learning performance.
arXiv Detail & Related papers (2020-10-02T15:21:37Z)
- Scalable Partial Explainability in Neural Networks via Flexible Activation Functions [13.71739091287644]
High-dimensional features and decisions given by deep neural networks (NN) require new algorithms and methods to expose their mechanisms.
Current state-of-the-art NN interpretation methods focus more on the direct relationship between NN outputs and inputs than on the NN structure and operations themselves.
In this paper, we achieve a partially explainable learning model by symbolically explaining the role of activation functions (AF) under a scalable topology.
arXiv Detail & Related papers (2020-06-10T20:30:15Z)
- Beyond Dropout: Feature Map Distortion to Regularize Deep Neural Networks [107.77595511218429]
In this paper, we investigate the empirical Rademacher complexity related to intermediate layers of deep neural networks.
We propose a feature distortion method (Disout) for addressing the aforementioned problem.
The superiority of the proposed feature map distortion for producing deep neural networks with higher testing performance is analyzed and demonstrated.
arXiv Detail & Related papers (2020-02-23T13:59:13Z)
This list is automatically generated from the titles and abstracts of the papers on this site.