Logarithmic Morphological Neural Nets robust to lighting variations
- URL: http://arxiv.org/abs/2204.09319v1
- Date: Wed, 20 Apr 2022 08:54:49 GMT
- Title: Logarithmic Morphological Neural Nets robust to lighting variations
- Authors: Guillaume Noyel (LHC), Emile Barbier-Renard (LHC), Michel Jourlin (LHC), Thierry Fournel (LHC)
- Abstract summary: We introduce a morphological neural network that is robust to lighting variations.
It is based on the recent framework of Logarithmic Mathematical Morphology.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Morphological neural networks make it possible to learn the weights of a
structuring function given the desired output image. However, such networks are
not intrinsically robust to lighting variations that have an optical cause, such
as a change of light intensity. In this paper, we introduce a morphological
neural network which is robust to these lighting variations. It is based on the
recent framework of Logarithmic Mathematical Morphology (LMM), i.e. Mathematical
Morphology defined with the Logarithmic Image Processing (LIP) model. This model
has a LIP additive law which simulates a variation of the light intensity in
images. In particular, we learn the structuring function of an LMM operator that
is robust to those variations, namely the map of LIP-additive Asplund distances.
Experiments on images show that our neural network satisfies the required
property.
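For readers unfamiliar with the LIP model, the sketch below illustrates the two
ingredients the abstract relies on: the LIP additive law f (+) g = f + g - fg/M
(with M the maximal grey value), in which LIP-adding a constant simulates a
change of light intensity, and a brute-force map of LIP-additive Asplund
distances, which is invariant under such a change. This is a minimal numerical
sketch following the standard LIP definitions, not the paper's implementation;
the value M = 256, the 1-D toy signals, and all function names are illustrative
assumptions.

```python
import numpy as np

M = 256.0  # maximal grey value of the LIP model (8-bit convention, illustrative)

def lip_add(f, g):
    """LIP addition: f (+) g = f + g - f*g/M. LIP-adding a positive constant
    darkens the image the way a drop in light intensity would (in the LIP
    convention, 0 is white and values near M are dark)."""
    return f + g - f * g / M

def lip_sub(f, g):
    """LIP difference: f (-) g = M*(f - g)/(M - g), the inverse of (+)."""
    return M * (f - g) / (M - g)

def asplund_map(f, b):
    """Map of LIP-additive Asplund distances between a 1-D image f and a
    probe b: at each position, c1 = sup(f (-) b) and c2 = inf(f (-) b) over
    the probe support, and the distance is c1 (-) c2 (one common formulation
    from the LIP literature; brute-force scan, hypothetical helper)."""
    k = len(b) // 2
    out = np.empty(len(f) - 2 * k)
    for i in range(len(out)):
        diff = lip_sub(f[i:i + 2 * k + 1], b)
        out[i] = lip_sub(diff.max(), diff.min())
    return out

f = np.array([30.0, 60.0, 90.0, 120.0, 90.0, 60.0])
b = np.array([20.0, 40.0, 20.0])
darker = lip_add(f, 50.0)  # same scene under weaker lighting
assert np.allclose(lip_sub(darker, 50.0), f)                   # (-) undoes (+)
assert np.allclose(asplund_map(darker, b), asplund_map(f, b))  # invariance
```

The final assertion is the robustness property the paper builds on: a
LIP-additive lighting change shifts both extrema c1 and c2 by the same LIP
constant, so their LIP difference, the Asplund distance, is unchanged.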
Related papers
- Training Hybrid Neural Networks with Multimode Optical Nonlinearities Using Digital Twins [2.8479179029634984]
We use ultrashort pulse propagation in multimode fibers to perform large-scale nonlinear transformations.
Training the hybrid architecture is achieved through a neural model that differentiably approximates the optical system.
Our experimental results achieve state-of-the-art image classification accuracies and simulation fidelity.
arXiv Detail & Related papers (2025-01-14T10:35:18Z) - Deep-Unrolling Multidimensional Harmonic Retrieval Algorithms on Neuromorphic Hardware [78.17783007774295]
This paper explores the potential of conversion-based neuromorphic algorithms for highly accurate and energy-efficient single-snapshot multidimensional harmonic retrieval.
A novel method for converting the complex-valued convolutional layers and activations into spiking neural networks (SNNs) is developed.
The converted SNNs achieve almost five-fold power efficiency at moderate performance loss compared to the original CNNs.
arXiv Detail & Related papers (2024-12-05T09:41:33Z) - Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
arXiv Detail & Related papers (2024-03-18T18:01:01Z) - An Algorithm to Train Unrestricted Sequential Discrete Morphological
Neural Networks [0.0]
We propose an algorithm to learn unrestricted sequential Discrete Morphological Neural Networks (DMNNs), whose architecture is given by the composition of general W-operators.
We illustrate the algorithm in a practical example.
arXiv Detail & Related papers (2023-10-06T20:55:05Z) - Logarithmic Mathematical Morphology: theory and applications [0.0]
In Mathematical Morphology for grey-level functions, the structuring function is added to the image with the usual additive law.
A new framework is defined with an additive law for which the amplitude of the structuring function varies according to the image amplitude.
The new framework is named Logarithmic Mathematical Morphology (LMM) and allows the definition of operators that are robust to lighting variations (a sketch of such an operator follows this list).
arXiv Detail & Related papers (2023-09-05T07:45:35Z) - Efficient and Flexible Neural Network Training through Layer-wise Feedback Propagation [49.44309457870649]
We present Layer-wise Feedback Propagation (LFP), a novel training principle for neural network-like predictors.
LFP decomposes a reward to individual neurons based on their respective contributions to solving a given task.
Our method then implements a greedy approach reinforcing helpful parts of the network and weakening harmful ones.
arXiv Detail & Related papers (2023-08-23T10:48:28Z) - Permutation Equivariant Neural Functionals [92.0667671999604]
This work studies the design of neural networks that can process the weights or gradients of other neural networks.
We focus on the permutation symmetries that arise in the weights of deep feedforward networks because hidden layer neurons have no inherent order.
In our experiments, we find that permutation equivariant neural functionals are effective on a diverse set of tasks.
arXiv Detail & Related papers (2023-02-27T18:52:38Z) - Optical Neural Ordinary Differential Equations [44.97261923694945]
We propose the optical neural ordinary differential equations (ON-ODE) architecture that parameterizes the continuous dynamics of hidden layers with optical ODE solvers.
The ON-ODE comprises photonic neural networks (PNNs) followed by a photonic integrator and an optical feedback loop, which can be configured to represent residual neural networks (ResNets) and recurrent neural networks with effectively reduced chip area occupancy.
arXiv Detail & Related papers (2022-09-26T04:04:02Z) - All-optical graph representation learning using integrated diffractive
photonic computing units [51.15389025760809]
Photonic neural networks perform brain-inspired computations using photons instead of electrons.
We propose an all-optical graph representation learning architecture, termed diffractive graph neural network (DGNN).
We demonstrate the use of DGNN extracted features for node and graph-level classification tasks with benchmark databases and achieve superior performance.
arXiv Detail & Related papers (2022-04-23T02:29:48Z) - Modeling the Nonsmoothness of Modern Neural Networks [35.93486244163653]
We quantify the nonsmoothness using a feature named the sum of the magnitude of peaks (SMP).
We envision that the nonsmoothness feature can potentially be used as a forensic tool for regression-based applications of neural networks.
arXiv Detail & Related papers (2021-03-26T20:55:19Z) - Multipole Graph Neural Operator for Parametric Partial Differential
Equations [57.90284928158383]
One of the main challenges in using deep learning-based methods for simulating physical systems is formulating physics-based data.
We propose a novel multi-level graph neural network framework that captures interaction at all ranges with only linear complexity.
Experiments confirm our multi-graph network learns discretization-invariant solution operators to PDEs and can be evaluated in linear time.
arXiv Detail & Related papers (2020-06-16T21:56:22Z)
This list was automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.