Learning Deep Morphological Networks with Neural Architecture Search
- URL: http://arxiv.org/abs/2106.07714v1
- Date: Mon, 14 Jun 2021 19:19:48 GMT
- Title: Learning Deep Morphological Networks with Neural Architecture Search
- Authors: Yufei Hu, Nacim Belkhir, Jesus Angulo, Angela Yao, Gianni Franchi
- Abstract summary: We propose a method based on meta-learning to incorporate morphological operators into Deep Neural Networks.
The learned architecture demonstrates how our novel morphological operations significantly increase DNN performance on various tasks.
- Score: 19.731352645511052
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep Neural Networks (DNNs) are generated by sequentially performing linear
and non-linear processes. Using a combination of linear and non-linear
procedures is critical for generating a sufficiently deep feature space. The
majority of non-linear operators are derived from activation functions or
pooling functions. Mathematical morphology is a branch of mathematics that
provides non-linear operators for a variety of image processing problems. We
investigate the utility of integrating these operations in an end-to-end deep
learning framework in this paper. DNNs are designed to learn a faithful
representation of the data for a particular task. Morphological operators give topological
descriptors that convey salient information about the shapes of objects
depicted in images. We propose a method based on meta-learning to incorporate
morphological operators into DNNs. The learned architecture demonstrates how
our novel morphological operations significantly increase DNN performance on
various tasks, including image classification and edge detection.
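The morphological primitives the abstract refers to can be made concrete. A minimal NumPy sketch of grayscale dilation and erosion with a (possibly non-flat) structuring element follows; the function names and shapes are illustrative, not the paper's implementation, and the paper's actual contribution is learning such operators end-to-end via architecture search rather than applying them with fixed structuring elements.

```python
import numpy as np

def dilation2d(image, se):
    """Grayscale dilation: sliding-window max of (image + structuring element)."""
    kh, kw = se.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(image, ((ph, ph), (pw, pw)), constant_values=-np.inf)
    out = np.empty_like(image, dtype=float)
    H, W = image.shape
    for i in range(H):
        for j in range(W):
            out[i, j] = np.max(padded[i:i + kh, j:j + kw] + se)
    return out

def erosion2d(image, se):
    """Grayscale erosion: sliding-window min of (image - structuring element)."""
    kh, kw = se.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(image, ((ph, ph), (pw, pw)), constant_values=np.inf)
    out = np.empty_like(image, dtype=float)
    H, W = image.shape
    for i in range(H):
        for j in range(W):
            out[i, j] = np.min(padded[i:i + kh, j:j + kw] - se)
    return out
```

With a flat (all-zero) structuring element, erosion never exceeds the input and dilation never falls below it; in a learnable layer the entries of `se` would be trainable parameters.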
Related papers
- Coding schemes in neural networks learning classification tasks [52.22978725954347]
We investigate fully-connected, wide neural networks learning classification tasks.
We show that the networks acquire strong, data-dependent features.
Surprisingly, the nature of the internal representations depends crucially on the neuronal nonlinearity.
arXiv Detail & Related papers (2024-06-24T14:50:05Z) - Deep Learning as Ricci Flow [38.27936710747996]
Deep neural networks (DNNs) are powerful tools for approximating the distribution of complex data.
We show that the transformations performed by DNNs during classification tasks have parallels to those expected under Hamilton's Ricci flow.
Our findings motivate the use of tools from differential and discrete geometry to the problem of explainability in deep learning.
arXiv Detail & Related papers (2024-04-22T15:12:47Z) - Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
arXiv Detail & Related papers (2024-03-18T18:01:01Z) - Mechanistic Neural Networks for Scientific Machine Learning [58.99592521721158]
We present Mechanistic Neural Networks, a neural network design for machine learning applications in the sciences.
It incorporates a new Mechanistic Block in standard architectures to explicitly learn governing differential equations as representations.
Central to our approach is a novel Relaxed Linear Programming solver (NeuRLP) inspired by a technique that reduces solving linear ODEs to solving linear programs.
arXiv Detail & Related papers (2024-02-20T15:23:24Z) - An Algorithm to Train Unrestricted Sequential Discrete Morphological
Neural Networks [0.0]
We propose an algorithm to learn unrestricted sequential DMNN, whose architecture is given by the composition of general W-operators.
We illustrate the algorithm in a practical example.
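A W-operator, as composed in sequential DMNNs, is a translation-invariant, locally defined operator: each output pixel is a Boolean function of the window around it. A hypothetical sketch (not the paper's algorithm, which *learns* these tables) represents the Boolean function as a lookup table and shows how operators compose sequentially:

```python
import numpy as np
from itertools import product

def apply_w_operator(image, window_offsets, table):
    """Apply a binary W-operator: each output pixel is a Boolean function
    of the window around it, given as a lookup table pattern -> {0, 1}."""
    H, W = image.shape
    pad = max(max(abs(di), abs(dj)) for di, dj in window_offsets)
    padded = np.pad(image, pad)  # zero padding outside the frame
    out = np.zeros_like(image)
    for i in range(H):
        for j in range(W):
            pattern = tuple(int(padded[i + pad + di, j + pad + dj])
                            for di, dj in window_offsets)
            out[i, j] = table.get(pattern, 0)
    return out

# A cross-shaped window; erosion is the W-operator whose table fires only on
# the all-ones pattern, dilation fires on any pattern containing a one.
cross = [(0, 0), (-1, 0), (1, 0), (0, -1), (0, 1)]
erode = {p: 1 for p in product((0, 1), repeat=5) if all(p)}
dilate = {p: 1 for p in product((0, 1), repeat=5) if any(p)}
```

Composing `erode` then `dilate` yields a morphological opening; a sequential DMNN generalizes this by chaining arbitrary learned W-operators.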
arXiv Detail & Related papers (2023-10-06T20:55:05Z) - When Deep Learning Meets Polyhedral Theory: A Survey [6.899761345257773]
In the past decade, deep learning became the prevalent methodology for predictive modeling thanks to the remarkable accuracy of deep neural networks.
Meanwhile, the structure of neural networks converged back to simpler representations based on piecewise constant and piecewise linear functions.
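The piecewise-linear view is easy to check numerically: a ReLU network on scalar inputs is affine between the points where hidden pre-activations change sign. The toy setup below (random weights, illustrative only, not from the survey) locates two adjacent breakpoints and verifies that second differences of the output vanish between them:

```python
import numpy as np

# A tiny one-hidden-layer ReLU network on scalar inputs with fixed random
# weights; as a function of x it is piecewise linear.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 1)), rng.normal(size=8)
W2, b2 = rng.normal(size=(1, 8)), rng.normal(size=1)

def relu_net(x):
    h = np.maximum(W1 @ np.array([x]) + b1, 0.0)
    return float(W2 @ h + b2)

# Each hidden unit switches where its pre-activation crosses zero; between
# consecutive switch points the activation pattern is fixed, so the network
# restricted to that interval is exactly affine.
breaks = sorted(float(-b1[i] / W1[i, 0]) for i in range(8))
a, b = breaks[0], breaks[1]
xs = np.linspace(a + 1e-6, b - 1e-6, 5)
ys = np.array([relu_net(x) for x in xs])
second_diff = np.diff(ys, 2)  # vanishes on an affine region
```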
arXiv Detail & Related papers (2023-04-29T11:46:53Z) - Experimental Observations of the Topology of Convolutional Neural Network Activations [2.4235626091331737]
Topological data analysis (TDA) provides compact, noise-robust representations of complex structures.
Deep neural networks (DNNs) learn millions of parameters associated with a series of transformations defined by the model architecture.
In this paper, we apply cutting edge techniques from TDA with the goal of gaining insight into the interpretability of convolutional neural networks used for image classification.
arXiv Detail & Related papers (2022-12-01T02:05:44Z) - Deep Architecture Connectivity Matters for Its Convergence: A Fine-Grained Analysis [94.64007376939735]
We theoretically characterize the impact of connectivity patterns on the convergence of deep neural networks (DNNs) under gradient descent training.
We show that by a simple filtration on "unpromising" connectivity patterns, we can trim down the number of models to evaluate.
arXiv Detail & Related papers (2022-05-11T17:43:54Z) - Gone Fishing: Neural Active Learning with Fisher Embeddings [55.08537975896764]
There is an increasing need for active learning algorithms that are compatible with deep neural networks.
This article introduces BAIT, a practical, tractable, and high-performing active learning algorithm for neural networks.
arXiv Detail & Related papers (2021-06-17T17:26:31Z) - E(n) Equivariant Graph Neural Networks [86.75170631724548]
This paper introduces a new model to learn graph neural networks equivariant to rotations, translations, reflections and permutations, called E(n)-Equivariant Graph Neural Networks (EGNNs).
In contrast with existing methods, our work does not require computationally expensive higher-order representations in intermediate layers while it still achieves competitive or better performance.
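The EGNN idea can be sketched compactly: messages depend on coordinates only through invariant squared distances, and coordinates are updated along relative directions, so the layer commutes with rotations and translations. Below is a minimal NumPy sketch in that spirit, with single linear maps standing in for the paper's edge/coordinate/node MLPs (all names and sizes are illustrative, not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(1)
d_h, d_m = 4, 4
We = rng.normal(size=(d_m, 2 * d_h + 1)) * 0.1  # edge message map
Wx = rng.normal(size=(1, d_m)) * 0.1            # scalar coordinate weight
Wh = rng.normal(size=(d_h, d_h + d_m)) * 0.1    # node feature update

def egnn_layer(h, x):
    n = h.shape[0]
    new_x = x.copy()
    agg = np.zeros((n, d_m))
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            # Messages see coordinates only via the squared distance,
            # which is invariant to rotations, reflections, translations.
            d2 = np.sum((x[i] - x[j]) ** 2)
            m = np.tanh(We @ np.concatenate([h[i], h[j], [d2]]))
            # Coordinates move along relative directions: equivariant update.
            new_x[i] += (x[i] - x[j]) * (Wx @ m)[0] / (n - 1)
            agg[i] += m
    new_h = np.stack([np.tanh(Wh @ np.concatenate([h[i], agg[i]]))
                      for i in range(n)])
    return new_h, new_x
```

Translating or rotating the input coordinates leaves the feature update unchanged and transforms the coordinate update the same way, which can be verified numerically.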
arXiv Detail & Related papers (2021-02-19T10:25:33Z) - The geometry of integration in text classification RNNs [20.76659136484842]
We study recurrent networks trained on a battery of both natural and synthetic text classification tasks.
We find the dynamics of these trained RNNs to be both interpretable and low-dimensional.
Our observations span multiple architectures and datasets, reflecting a common mechanism RNNs employ to perform text classification.
arXiv Detail & Related papers (2020-10-28T17:58:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.