Multilayer Multiset Neuronal Networks -- MMNNs
- URL: http://arxiv.org/abs/2308.14541v1
- Date: Mon, 28 Aug 2023 12:55:13 GMT
- Title: Multilayer Multiset Neuronal Networks -- MMNNs
- Authors: Alexandre Benatti, Luciano da Fontoura Costa
- Abstract summary: The present work describes multilayer multiset neuronal networks incorporating two or more layers of coincidence similarity neurons.
The work also explores the utilization of counter-prototype points, which are assigned to the image regions to be avoided.
- Score: 55.2480439325792
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The coincidence similarity index, based on a combination of the Jaccard and
overlap similarity indices, has noticeable properties in comparing and
classifying data, including enhanced selectivity and sensitivity, intrinsic
normalization, and robustness to data perturbations and outliers. These
features allow multiset neurons, which are based on the coincidence similarity
operation, to perform effective pattern recognition applications, including the
challenging task of image segmentation. A few prototype points have been used
in previous related approaches to represent each pattern to be identified, each
of them being associated with respective multiset neurons. The segmentation of
the regions can then proceed by taking into account the outputs of these
neurons. The present work describes multilayer multiset neuronal networks
incorporating two or more layers of coincidence similarity neurons. In
addition, as a means to improve performance, this work also explores the
utilization of counter-prototype points, which are assigned to the image
regions to be avoided. This approach is shown to allow effective segmentation
of complex regions despite considering only one prototype and one
counter-prototype point. As reported here, the balanced accuracy landscapes to
be optimized in order to identify the weight of the neurons in subsequent
layers have been found to be relatively smooth, while typically involving more
than one attraction basin. The use of a simple gradient-based optimization
methodology has been demonstrated to effectively train the considered neural
networks across several architectures, at least for the given data type and
parameter configuration.
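The abstract describes the coincidence similarity index as a combination of the Jaccard and overlap similarity indices, and multiset neurons as units that respond with the coincidence similarity between an input and a stored prototype (or counter-prototype). A minimal sketch of this idea for non-negative feature vectors is given below; the product form (Jaccard times overlap/interiority) and the illustrative prototype values are assumptions made for the example, not taken from the paper itself:

```python
import numpy as np

def coincidence_similarity(x, y):
    """Coincidence similarity between two non-negative feature vectors.

    Sketch: the multiset Jaccard index is combined with the
    overlap (interiority) index by taking their product,
    yielding a value in [0, 1] that is 1 only for identical vectors.
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    mins = np.minimum(x, y).sum()
    jaccard = mins / np.maximum(x, y).sum()   # multiset Jaccard index
    overlap = mins / min(x.sum(), y.sum())    # overlap/interiority index
    return jaccard * overlap

# Illustrative multiset "neurons": one prototype for the region of
# interest and one counter-prototype for the region to be avoided.
prototype = np.array([0.9, 0.1, 0.4])
counter_prototype = np.array([0.1, 0.8, 0.2])

def neuron_pair_output(pixel):
    # Segmentation decision from a single prototype/counter-prototype
    # pair: accept the pixel if its coincidence similarity to the
    # prototype exceeds that to the counter-prototype.
    return (coincidence_similarity(pixel, prototype)
            > coincidence_similarity(pixel, counter_prototype))
```

In this sketch, a pixel feature vector close to the prototype is accepted, while one close to the counter-prototype is rejected; a multilayer network would then combine such neuron outputs with trainable weights.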
Related papers
- Heterogenous Memory Augmented Neural Networks [84.29338268789684]
We introduce a novel heterogeneous memory augmentation approach for neural networks.
By introducing learnable memory tokens with an attention mechanism, we can effectively boost performance without a large computational overhead.
We show our approach on various image and graph-based tasks under both in-distribution (ID) and out-of-distribution (OOD) conditions.
arXiv Detail & Related papers (2023-10-17T01:05:28Z) - Memorization with neural nets: going beyond the worst case [5.662924503089369]
In practice, deep neural networks are often able to easily interpolate their training data.
For real-world data, however, one intuitively expects the presence of a benign structure, so that interpolation already occurs at a smaller network size than suggested by memorization capacity.
We introduce a simple randomized algorithm that, given a fixed finite dataset with two classes, with high probability constructs an interpolating three-layer neural network in polynomial time.
arXiv Detail & Related papers (2023-09-30T10:06:05Z) - Two Approaches to Supervised Image Segmentation [55.616364225463066]
The present work develops comparative experiments between deep-learning and multiset-neuron approaches.
The deep learning approach confirmed its potential for performing image segmentation.
The alternative multiset methodology allowed for enhanced accuracy while requiring modest computational resources.
arXiv Detail & Related papers (2023-07-19T16:42:52Z) - WLD-Reg: A Data-dependent Within-layer Diversity Regularizer [98.78384185493624]
Neural networks are composed of multiple layers arranged in a hierarchical structure jointly trained with a gradient-based optimization.
We propose to complement this traditional 'between-layer' feedback with additional 'within-layer' feedback to encourage the diversity of the activations within the same layer.
We present an extensive empirical study confirming that the proposed approach enhances the performance of several state-of-the-art neural network models in multiple tasks.
arXiv Detail & Related papers (2023-01-03T20:57:22Z) - Sparse Interaction Additive Networks via Feature Interaction Detection and Sparse Selection [10.191597755296163]
We develop a tractable selection algorithm to efficiently identify the necessary feature combinations.
Our proposed Sparse Interaction Additive Networks (SIAN) construct a bridge from simple and interpretable models to fully connected neural networks.
arXiv Detail & Related papers (2022-09-19T19:57:17Z) - SRPN: similarity-based region proposal networks for nuclei and cells detection in histology images [13.544784143012624]
We propose similarity-based region proposal networks (SRPN) for nuclei and cell detection in histology images.
A customized convolution layer, termed the embedding layer, is designed for network building.
We test the proposed approach on tasks of multi-organ nuclei detection and signet ring cells detection in histological images.
arXiv Detail & Related papers (2021-06-25T10:56:54Z) - Random Features for the Neural Tangent Kernel [57.132634274795066]
We propose an efficient feature-map construction for the Neural Tangent Kernel (NTK) of a fully connected ReLU network.
We show that the dimension of the resulting features is much smaller than that of other baseline feature-map constructions achieving comparable error bounds, both in theory and in practice.
arXiv Detail & Related papers (2021-04-03T09:08:12Z) - Deep Representational Similarity Learning for analyzing neural signatures in task-based fMRI dataset [81.02949933048332]
This paper develops Deep Representational Similarity Learning (DRSL), a deep extension of Representational Similarity Analysis (RSA).
DRSL is appropriate for analyzing similarities between various cognitive tasks in fMRI datasets with a large number of subjects.
arXiv Detail & Related papers (2020-09-28T18:30:14Z) - VINNAS: Variational Inference-based Neural Network Architecture Search [2.685668802278155]
We present a differentiable variational inference-based NAS method for searching sparse convolutional neural networks.
Our method finds diverse network cells, while showing state-of-the-art accuracy with up to almost 2 times fewer non-zero parameters.
arXiv Detail & Related papers (2020-07-12T21:47:35Z) - Similarity of Neural Networks with Gradients [8.804507286438781]
We propose to leverage both feature vectors and gradient vectors in designing the representation of a neural network.
We show that the proposed approach provides a state-of-the-art method for computing similarity of neural networks.
arXiv Detail & Related papers (2020-03-25T17:04:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.