Logarithmic Morphological Neural Nets robust to lighting variations
- URL: http://arxiv.org/abs/2204.09319v1
- Date: Wed, 20 Apr 2022 08:54:49 GMT
- Title: Logarithmic Morphological Neural Nets robust to lighting variations
- Authors: Guillaume Noyel (LHC), Emile Barbier-Renard (LHC), Michel Jourlin (LHC), Thierry Fournel (LHC)
- Abstract summary: We introduce a morphological neural network which possesses such a robustness to lighting variations.
It is based on the recent framework of Logarithmic Mathematical Morphology.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Morphological neural networks make it possible to learn the weights
of a structuring function given the desired output image. However, those
networks are not intrinsically robust to lighting variations in images with an
optical cause, such as a change of light intensity. In this paper, we introduce
a morphological neural network which possesses such robustness to lighting
variations. It is based on the recent framework of Logarithmic Mathematical
Morphology (LMM), i.e. Mathematical Morphology defined with the Logarithmic
Image Processing (LIP) model. This model has a LIP additive law which simulates
a variation of the light intensity in images. We specifically learn the
structuring function of an LMM operator robust to those variations, namely the
map of LIP-additive Asplund distances. Results on images show that our neural
network satisfies the required property.
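For context, the LIP additive law mentioned above has a simple closed form. The block below states it, together with a plausible form of the LMM dilation; this is a sketch in my own notation, assuming the standard LIP convention of grey levels in [0, M) (M = 256 for 8-bit images), not a verbatim excerpt from the paper.

```latex
% LIP addition of grey-level functions f and g (grey levels in [0, M)):
\[
  f \mathbin{\triangle\!\!+} g \;=\; f + g - \frac{f\,g}{M}
\]
% LIP-adding a constant models a uniform variation of the light intensity,
% which is why operators built on this law can be robust to such changes.
% A plausible form of the LMM dilation replaces the usual '+' between image
% and structuring function b by LIP addition:
\[
  (f \oplus_{\scriptscriptstyle \triangle} b)(x)
  \;=\; \sup_{h} \bigl\{\, f(x - h) \mathbin{\triangle\!\!+} b(h) \,\bigr\}
\]
```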
Related papers
- Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
arXiv Detail & Related papers (2024-03-18T18:01:01Z)
- An Algorithm to Train Unrestricted Sequential Discrete Morphological Neural Networks [0.0]
We propose an algorithm to learn unrestricted sequential DMNN, whose architecture is given by the composition of general W-operators.
We illustrate the algorithm in a practical example.
arXiv Detail & Related papers (2023-10-06T20:55:05Z)
- Logarithmic Mathematical Morphology: theory and applications [0.0]
In Mathematical Morphology for grey-level functions, the structuring function is added to the image with the usual additive law.
A new framework is defined with an additive law for which the amplitude of the structuring function varies according to the image amplitude.
The new framework is named Logarithmic Mathematical Morphology (LMM) and allows the definition of operators which are robust to such lighting variations; a minimal numerical sketch is given after this list.
arXiv Detail & Related papers (2023-09-05T07:45:35Z)
- Permutation Equivariant Neural Functionals [92.0667671999604]
This work studies the design of neural networks that can process the weights or gradients of other neural networks.
We focus on the permutation symmetries that arise in the weights of deep feedforward networks because hidden layer neurons have no inherent order.
In our experiments, we find that permutation equivariant neural functionals are effective on a diverse set of tasks.
arXiv Detail & Related papers (2023-02-27T18:52:38Z)
- Optical Neural Ordinary Differential Equations [44.97261923694945]
We propose the optical neural ordinary differential equations (ON-ODE) architecture that parameterizes the continuous dynamics of hidden layers with optical ODE solvers.
The ON-ODE comprises photonic neural networks (PNNs) followed by a photonic integrator and an optical feedback loop, which can be configured to represent residual neural networks (ResNets) and recurrent neural networks with effectively reduced chip area occupancy.
arXiv Detail & Related papers (2022-09-26T04:04:02Z)
- All-optical graph representation learning using integrated diffractive photonic computing units [51.15389025760809]
Photonic neural networks perform brain-inspired computations using photons instead of electrons.
We propose an all-optical graph representation learning architecture, termed diffractive graph neural network (DGNN).
We demonstrate the use of DGNN extracted features for node and graph-level classification tasks with benchmark databases and achieve superior performance.
arXiv Detail & Related papers (2022-04-23T02:29:48Z)
- Gone Fishing: Neural Active Learning with Fisher Embeddings [55.08537975896764]
There is an increasing need for active learning algorithms that are compatible with deep neural networks.
This article introduces BAIT, a practical, tractable, and high-performing active learning algorithm for neural networks.
arXiv Detail & Related papers (2021-06-17T17:26:31Z)
- Modeling the Nonsmoothness of Modern Neural Networks [35.93486244163653]
We quantify the nonsmoothness using a feature named the sum of the magnitude of peaks (SMP).
We envision that the nonsmoothness feature can potentially be used as a forensic tool for regression-based applications of neural networks.
arXiv Detail & Related papers (2021-03-26T20:55:19Z)
- ResNet-LDDMM: Advancing the LDDMM Framework Using Deep Residual Networks [86.37110868126548]
In this work, we make use of deep residual neural networks to solve the non-stationary ODE (flow equation) based on Euler's discretization scheme.
We illustrate these ideas on diverse registration problems of 3D shapes under complex topology-preserving transformations.
arXiv Detail & Related papers (2021-02-16T04:07:13Z)
- Multi-Task Learning for Multi-Dimensional Regression: Application to Luminescence Sensing [0.0]
A new approach to non-linear regression is to use neural networks, particularly feed-forward architectures with a sufficient number of hidden layers and an appropriate number of output neurons.
We propose multi-task learning (MTL) architectures. These are characterized by multiple branches of task-specific layers, which have as input the output of a common set of layers.
To demonstrate the power of this approach for multi-dimensional regression, the method is applied to luminescence sensing.
arXiv Detail & Related papers (2020-07-27T21:23:51Z)
- Multipole Graph Neural Operator for Parametric Partial Differential Equations [57.90284928158383]
One of the main challenges in using deep learning-based methods for simulating physical systems is formulating physics-based data in the desired structure for neural networks.
We propose a novel multi-level graph neural network framework that captures interaction at all ranges with only linear complexity.
Experiments confirm our multi-graph network learns discretization-invariant solution operators to PDEs and can be evaluated in linear time.
arXiv Detail & Related papers (2020-06-16T21:56:22Z)
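As noted in the Logarithmic Mathematical Morphology entry above, here is a minimal numerical sketch of the robustness property. It assumes the standard LIP addition law and uses a toy 1-D grey-level dilation in which the structuring function is combined with the image by LIP addition; names like `lip_add` and `log_dilation` are illustrative, not the authors' code.

```python
import numpy as np

M = 256.0  # grey-scale upper bound of the LIP model (8-bit images)

def lip_add(f, g):
    """LIP addition: f (+) g = f + g - f*g/M. LIP-adding a constant
    models a uniform change of light intensity (standard LIP convention)."""
    return f + g - f * g / M

def log_dilation(f, b):
    """Toy 1-D grey-level dilation where the structuring function b is
    combined with the image by LIP addition instead of the usual '+'
    (a sketch of an LMM dilation; edge padding for brevity)."""
    n, k = len(f), len(b)
    fp = np.pad(f, k // 2, mode='edge')
    return np.array([max(lip_add(fp[i + j], b[j]) for j in range(k))
                     for i in range(n)])

rng = np.random.default_rng(0)
f = rng.uniform(0, 200, size=32)      # toy 1-D "image"
b = np.array([10.0, 20.0, 10.0])      # toy structuring function
c = 60.0                              # simulated lighting change

lhs = log_dilation(lip_add(f, c), b)  # dilate the re-lit image
rhs = lip_add(log_dilation(f, b), c)  # re-light the dilated image
print(np.allclose(lhs, rhs))          # True: the LMM dilation commutes
                                      # with LIP-additive lighting changes
```

Because LIP addition of a constant is an increasing bijection on [0, M) that commutes with the supremum, the two computations agree up to floating-point error, which is the kind of invariance to lighting variations that the paper exploits.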