Reduced Dilation-Erosion Perceptron for Binary Classification
- URL: http://arxiv.org/abs/2003.02306v2
- Date: Tue, 14 Apr 2020 18:07:40 GMT
- Title: Reduced Dilation-Erosion Perceptron for Binary Classification
- Authors: Marcos Eduardo Valle
- Abstract summary: Dilation-erosion perceptron (DEP) is a neural network obtained by a convex combination of a dilation and an erosion.
This paper introduces the reduced dilation-erosion (r-DEP) classifier.
- Score: 1.3706331473063877
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Dilation and erosion are two elementary operations from mathematical
morphology, a non-linear lattice computing methodology widely used for image
processing and analysis. The dilation-erosion perceptron (DEP) is a
morphological neural network obtained by a convex combination of a dilation and
an erosion followed by the application of a hard-limiter function for binary
classification tasks. A DEP classifier can be trained using a convex-concave
procedure along with the minimization of the hinge loss function. As a lattice
computing model, the DEP classifier assumes the feature and class spaces are
partially ordered sets. In many practical situations, however, there is no
natural ordering for the feature patterns. Using concepts from multi-valued
mathematical morphology, this paper introduces the reduced dilation-erosion
(r-DEP) classifier. An r-DEP classifier is obtained by endowing the feature
space with an appropriate reduced ordering. Such a reduced ordering can be
determined in two ways: one based on an ensemble of support vector
classifiers (SVCs) with different kernels, and the other based on bagging of
similar SVCs trained on different samples of the training set. On several
binary classification datasets from the OpenML repository, the ensemble and
bagging r-DEP classifiers yielded, on average, higher balanced accuracy scores
than the linear, polynomial, and radial basis function (RBF) SVCs, as well as
their ensemble and a bagging of RBF SVCs.
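The DEP decision rule described above can be sketched concretely. The snippet below is a minimal illustration, not the paper's implementation: it assumes the additive (max-plus) dilation and (min-plus) erosion commonly used in morphological neural networks, and the parameter names `a`, `b`, and the mixing weight `lam` are illustrative. In an r-DEP classifier, the input `x` would first be mapped through the reduced ordering (e.g., the decision values of an SVC ensemble) before this rule is applied.

```python
import numpy as np

def dep_predict(x, a, b, lam=0.5):
    """Sketch of a DEP classifier: convex combination of a dilation
    and an erosion followed by a hard-limiter, returning +1 or -1.
    Assumes max-plus dilation and min-plus erosion with additive
    structuring elements a and b (illustrative parameterization)."""
    dilation = np.max(x + a)   # max-plus dilation: max_j (x_j + a_j)
    erosion = np.min(x + b)    # min-plus erosion:  min_j (x_j + b_j)
    score = lam * dilation + (1.0 - lam) * erosion  # convex combination
    return 1 if score >= 0 else -1                  # hard-limiter

# Toy example with hand-picked parameters (not trained values).
x = np.array([0.2, -0.5, 1.0])
a = np.array([0.1, 0.3, -0.2])
b = np.array([-0.4, 0.2, 0.0])
print(dep_predict(x, a, b, lam=0.7))  # prints 1
```

In practice the parameters `a`, `b`, and `lam` would be fit by the convex-concave procedure minimizing the hinge loss, as the abstract describes; the hand-picked values here only illustrate the forward pass.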
Related papers
- Sparse Tensor PCA via Tensor Decomposition for Unsupervised Feature Selection [8.391109286933856]
We develop two Sparse Principal Component Analysis (STPCA) models that utilize the projection directions in the factor matrices to perform unsupervised feature selection.
For both models, we prove the optimal solution of each subproblem falls onto the Hermitian Positive Semidefinite Cone (HPSD).
According to the experimental results, the two proposed methods are suitable for handling different data tensor scenarios and outperform the state-of-the-art UFS methods.
arXiv Detail & Related papers (2024-07-24T04:04:56Z) - Learning Compact Features via In-Training Representation Alignment [19.273120635948363]
In each epoch, the true gradient of the loss function is estimated using a mini-batch sampled from the training set.
We propose In-Training Representation Alignment (ITRA) that explicitly aligns feature distributions of two different mini-batches with a matching loss.
We also provide a rigorous analysis of the desirable effects of the matching loss on feature representation learning.
arXiv Detail & Related papers (2022-11-23T22:23:22Z) - Scaling Structured Inference with Randomization [64.18063627155128]
We propose a family of randomized dynamic programming (RDP) algorithms for scaling structured models to tens of thousands of latent states.
Our method is widely applicable to classical DP-based inference.
It is also compatible with automatic differentiation and so can be integrated seamlessly with neural networks.
arXiv Detail & Related papers (2021-12-07T11:26:41Z) - Unfolding Projection-free SDP Relaxation of Binary Graph Classifier via
GDPA Linearization [59.87663954467815]
Algorithm unfolding creates an interpretable and parsimonious neural network architecture by implementing each iteration of a model-based algorithm as a neural layer.
In this paper, leveraging a recent linear algebraic theorem called Gershgorin disc perfect alignment (GDPA), we unroll a projection-free algorithm for the semi-definite programming relaxation (SDR) of a binary graph classifier.
Experimental results show that our unrolled network outperformed pure model-based graph classifiers, and achieved comparable performance to pure data-driven networks but using far fewer parameters.
arXiv Detail & Related papers (2021-09-10T07:01:15Z) - Cogradient Descent for Dependable Learning [64.02052988844301]
We propose a dependable learning based on Cogradient Descent (CoGD) algorithm to address the bilinear optimization problem.
CoGD is introduced to solve bilinear problems when one variable has a sparsity constraint.
It can also be used to decompose the association of features and weights, which further generalizes our method to better train convolutional neural networks (CNNs).
arXiv Detail & Related papers (2021-06-20T04:28:20Z) - Spatial-spectral Hyperspectral Image Classification via Multiple Random
Anchor Graphs Ensemble Learning [88.60285937702304]
This paper proposes a novel spatial-spectral HSI classification method via multiple random anchor graphs ensemble learning (RAGE).
Firstly, the local binary pattern is adopted to extract the more descriptive features on each selected band, which preserves local structures and subtle changes of a region.
Secondly, the adaptive neighbors assignment is introduced in the construction of anchor graph, to reduce the computational complexity.
arXiv Detail & Related papers (2021-03-25T09:31:41Z) - Linear Dilation-Erosion Perceptron Trained Using a Convex-Concave
Procedure [1.3706331473063877]
We present the linear dilation-erosion perceptron ($\ell$-DEP), which is given by applying linear transformations before computing a dilation and an erosion.
We compare the performance of the $\ell$-DEP model with other machine learning techniques on several classification problems.
arXiv Detail & Related papers (2020-11-11T18:37:07Z) - Binary Classification as a Phase Separation Process [0.0]
We propose a new binary classification model called Phase Separation Binary (PSBC).
It consists of a discretization of a nonlinear reaction-diffusion equation coupled with an ordinary differential equation.
PSBC's equations can be seen as a dynamical system whose coefficients are trainable weights, with a similar architecture to that of a Recurrent Neural Network.
arXiv Detail & Related papers (2020-09-05T05:47:05Z) - Provably Efficient Neural Estimation of Structural Equation Model: An
Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these networks using gradient descent.
For the first time we provide a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
arXiv Detail & Related papers (2020-07-02T17:55:47Z) - Differentiable Segmentation of Sequences [2.1485350418225244]
We build on advances in learning continuous warping functions and propose a novel family of warping functions based on the two-sided power (TSP) distribution.
Our formulation includes the important class of segmented generalized linear models as a special case.
We use our approach to model the spread of COVID-19 with Poisson regression, apply it on a change point detection task, and learn classification models with concept drift.
arXiv Detail & Related papers (2020-06-23T15:51:48Z) - Multipole Graph Neural Operator for Parametric Partial Differential
Equations [57.90284928158383]
One of the main challenges in using deep learning-based methods for simulating physical systems is formulating physics-based data.
We propose a novel multi-level graph neural network framework that captures interaction at all ranges with only linear complexity.
Experiments confirm our multi-graph network learns discretization-invariant solution operators to PDEs and can be evaluated in linear time.
arXiv Detail & Related papers (2020-06-16T21:56:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.