Linear Dilation-Erosion Perceptron Trained Using a Convex-Concave
Procedure
- URL: http://arxiv.org/abs/2011.06512v1
- Date: Wed, 11 Nov 2020 18:37:07 GMT
- Title: Linear Dilation-Erosion Perceptron Trained Using a Convex-Concave
Procedure
- Authors: Angelica Lourenço Oliveira and Marcos Eduardo Valle
- Abstract summary: We present the \textit{linear dilation-erosion perceptron} ($\ell$-DEP), which is given by applying linear transformations before computing a dilation and an erosion.
We compare the performance of the $ell$-DEP model with other machine learning techniques using several classification problems.
- Score: 1.3706331473063877
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Mathematical morphology (MM) is a theory of non-linear operators used for the
processing and analysis of images. Morphological neural networks (MNNs) are
neural networks whose neurons compute morphological operators. Dilations and
erosions are the elementary operators of MM. From an algebraic point of view, a
dilation and an erosion are operators that commute respectively with the
supremum and infimum operations. In this paper, we present the \textit{linear
dilation-erosion perceptron} ($\ell$-DEP), which is given by applying linear
transformations before computing a dilation and an erosion. The decision
function of the $\ell$-DEP model is defined by adding a dilation and an
erosion. Furthermore, training an $\ell$-DEP can be formulated as a
convex-concave optimization problem. We compare the performance of the
$\ell$-DEP model with other machine learning techniques using several
classification problems. The computational experiments support the potential
application of the proposed $\ell$-DEP model for binary classification tasks.
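To make the decision function concrete, here is a minimal NumPy sketch under the notation of the abstract: two linear transformations are applied to the input, followed by a dilation (a componentwise maximum, convex in the parameters) and an erosion (a componentwise minimum, concave in the parameters), whose sum gives the decision value. The parameter shapes and the toy data below are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def ell_dep_decision(x, W, a, M, b):
    """Hedged sketch of an l-DEP decision function.

    Two linear transformations (W, a) and (M, b) are applied to the
    input, then a dilation (componentwise maximum) and an erosion
    (componentwise minimum) are computed and added.  The max term is
    convex and the min term is concave in the parameters, so the
    decision value is a difference of convex functions.
    """
    dilation = np.max(W @ x + a)   # convex piece: max of affine maps
    erosion = np.min(M @ x + b)    # concave piece: min of affine maps
    return dilation + erosion

# Toy usage: binary classification by the sign of the decision value.
rng = np.random.default_rng(0)
W = rng.standard_normal((3, 2)); a = rng.standard_normal(3)
M = rng.standard_normal((3, 2)); b = rng.standard_normal(3)
x = np.array([0.5, -1.0])
print(1 if ell_dep_decision(x, W, a, M, b) >= 0 else -1)
```

Because the decision value is a difference of convex functions of the parameters, a convex-concave procedure can train the model by repeatedly linearizing the concave (erosion) term around the current iterate and solving the resulting convex subproblem.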
Related papers
- DimOL: Dimensional Awareness as A New 'Dimension' in Operator Learning [63.5925701087252]
We introduce DimOL (Dimension-aware Operator Learning), drawing insights from dimensional analysis.
To implement DimOL, we propose the ProdLayer, which can be seamlessly integrated into FNO-based and Transformer-based PDE solvers.
Empirically, DimOL models achieve up to 48% performance gain within the PDE datasets.
arXiv Detail & Related papers (2024-10-08T10:48:50Z) - Linearization Turns Neural Operators into Function-Valued Gaussian Processes [23.85470417458593]
We introduce a new framework for approximate Bayesian uncertainty quantification in neural operators.
Our approach can be interpreted as a probabilistic analogue of the concept of currying from functional programming.
We showcase the efficacy of our approach through applications to different types of partial differential equations.
arXiv Detail & Related papers (2024-06-07T16:43:54Z) - Stable Nonconvex-Nonconcave Training via Linear Interpolation [51.668052890249726]
This paper presents a theoretical analysis of linear interpolation as a principled method for stabilizing (large-scale) neural network training.
We argue that instabilities in the optimization process are often caused by the nonmonotonicity of the loss landscape and show how linear interpolation can help by leveraging the theory of nonexpansive operators.
arXiv Detail & Related papers (2023-10-20T12:45:12Z) - MgNO: Efficient Parameterization of Linear Operators via Multigrid [4.096453902709292]
We introduce MgNO, utilizing multigrid structures to parameterize linear operators between neurons.
MgNO exhibits superior ease of training compared to other CNN-based models.
arXiv Detail & Related papers (2023-10-16T13:01:35Z) - Neural Operator: Learning Maps Between Function Spaces [75.93843876663128]
We propose a generalization of neural networks to learn operators, termed neural operators, that map between infinite dimensional function spaces.
We prove a universal approximation theorem for our proposed neural operator, showing that it can approximate any given nonlinear continuous operator.
An important application for neural operators is learning surrogate maps for the solution operators of partial differential equations.
arXiv Detail & Related papers (2021-08-19T03:56:49Z) - A semigroup method for high dimensional elliptic PDEs and eigenvalue
problems based on neural networks [1.52292571922932]
We propose a semigroup computation method for solving high-dimensional elliptic partial differential equations (PDEs) and the associated eigenvalue problems based on neural networks.
For the PDE problems, we reformulate the original equations as variational problems with the help of semigroup operators and then solve the variational problems with neural network (NN) parameterization.
For eigenvalue problems, a primal-dual method is proposed, resolving the constraint with a scalar dual variable.
arXiv Detail & Related papers (2021-05-07T19:49:06Z) - Provably Efficient Neural Estimation of Structural Equation Model: An
Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these neural networks using gradient descent.
For the first time we provide a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
arXiv Detail & Related papers (2020-07-02T17:55:47Z) - Multipole Graph Neural Operator for Parametric Partial Differential
Equations [57.90284928158383]
One of the main challenges in using deep learning-based methods for simulating physical systems is formulating physics-based data in a structure suitable for neural networks.
We propose a novel multi-level graph neural network framework that captures interactions at all ranges with only linear complexity.
Experiments confirm our multi-graph network learns discretization-invariant solution operators to PDEs and can be evaluated in linear time.
arXiv Detail & Related papers (2020-06-16T21:56:22Z) - Deep neural networks for inverse problems with pseudodifferential
operators: an application to limited-angle tomography [0.4110409960377149]
We propose a novel convolutional neural network (CNN) designed for learning pseudodifferential operators ($\Psi$DOs) in the context of linear inverse problems.
We show that, under rather general assumptions on the forward operator, the unfolded iterations of ISTA can be interpreted as the successive layers of a CNN.
In particular, we prove that, in the case of LA-CT, the operations of upscaling, downscaling and convolution can be exactly determined by combining the convolutional nature of the limited-angle X-ray transform and basic properties defining a wavelet system.
arXiv Detail & Related papers (2020-06-02T14:03:41Z) - Stochastic Flows and Geometric Optimization on the Orthogonal Group [52.50121190744979]
We present a new class of geometrically-driven optimization algorithms on the orthogonal group $O(d)$.
We show that our methods can be applied in various fields of machine learning including deep, convolutional and recurrent neural networks, reinforcement learning, normalizing flows and metric learning.
arXiv Detail & Related papers (2020-03-30T15:37:50Z) - Reduced Dilation-Erosion Perceptron for Binary Classification [1.3706331473063877]
The dilation-erosion perceptron (DEP) is a neural network obtained by a convex combination of a dilation and an erosion.
This paper introduces the reduced dilation-erosion perceptron (r-DEP) classifier; a minimal sketch of the underlying DEP decision function follows this list.
arXiv Detail & Related papers (2020-03-04T19:50:35Z)
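For contrast with the $\ell$-DEP above, the following is a hedged sketch of the classical DEP decision function referenced in the last entry: a convex combination of an additive dilation and an additive erosion of the input. The weight vectors a and b and the mixing parameter lam are illustrative names, not taken from the paper.

```python
import numpy as np

def dep_decision(x, a, b, lam):
    """Hedged sketch of a dilation-erosion perceptron (DEP).

    The output mixes an additive dilation (which commutes with the
    supremum) and an additive erosion (which commutes with the
    infimum); lam in [0, 1] balances the two morphological operators.
    """
    dilation = np.max(x + a)  # morphological dilation of x by a
    erosion = np.min(x + b)   # morphological erosion of x by b
    return lam * dilation + (1.0 - lam) * erosion
```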
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.