Confusion-based rank similarity filters for computationally-efficient
machine learning on high dimensional data
- URL: http://arxiv.org/abs/2109.13610v1
- Date: Tue, 28 Sep 2021 10:53:38 GMT
- Title: Confusion-based rank similarity filters for computationally-efficient
machine learning on high dimensional data
- Authors: Katharine A. Shapcott and Alex D. Bird
- Abstract summary: We introduce a novel type of computationally efficient artificial neural network (ANN) called the rank similarity filter (RSF).
RSFs can be used to transform and classify nonlinearly separable datasets with many data points and dimensions.
Open-source code for RST, RSC and RSPC was written in Python using the popular scikit-learn framework to make it easily accessible.
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: We introduce a novel type of computationally efficient artificial neural
network (ANN) called the rank similarity filter (RSF). RSFs can be used to both
transform and classify nonlinearly separable datasets with many data points and
dimensions. The weights of an RSF are set using the rank order of features in a
data point, or optionally the 'confusion'-adjusted ranks between features
(determined from their distributions in the dataset). The activation strength
of a filter determines its similarity to other points in the dataset, a measure
related to cosine similarity. The activation of many RSFs maps samples into a
new nonlinear space suitable for linear classification (the rank similarity
transform (RST)). We additionally used this method to create the nonlinear rank
similarity classifier (RSC), which is a fast and accurate multiclass
classifier, and the nonlinear rank similarity probabilistic classifier (RSPC),
which is an extension to the multilabel case. We evaluated the classifiers on
multiple datasets and RSC was competitive with existing classifiers but with
superior computational efficiency. Open-source code for RST, RSC and RSPC was
written in Python using the popular scikit-learn framework to make it easily
accessible. In future extensions the algorithm can be ported to specialised
hardware suited to parallelising an ANN (GPUs) or to implementing a spiking
neural network (neuromorphic computing), with corresponding performance gains. This
makes RSF a promising solution to the problem of efficient analysis of
nonlinearly separable data.
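To make the mechanism described above concrete, here is a minimal sketch of the rank similarity transform idea: filters seeded from training points, rank-order weights, a cosine-like activation, and a linear readout on the transformed samples. The function names, the prototype-sampling step and all hyperparameters are illustrative assumptions, not the authors' released API.

```python
# Hedged sketch of the rank similarity transform (RST) idea: weights are the
# rank order of a filter point's features, and activation is a normalised dot
# product of rank vectors (a cosine-like similarity between rank orders).
import numpy as np
from scipy.stats import rankdata
from sklearn.linear_model import LogisticRegression

def rank_similarity_transform(X, filters):
    """Map samples into the activation space of rank similarity filters."""
    X_ranks = np.apply_along_axis(rankdata, 1, X)        # rank order per sample
    W = np.apply_along_axis(rankdata, 1, filters)        # rank-order weights
    A = X_ranks @ W.T                                    # raw activations
    A /= (np.linalg.norm(X_ranks, axis=1, keepdims=True)
          * np.linalg.norm(W, axis=1))                   # cosine-like scaling
    return A

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 50))
y = (np.sin(X[:, 0]) + X[:, 1] ** 2 > 1).astype(int)     # nonlinear labels

filters = X[rng.choice(len(X), size=64, replace=False)]  # prototype points
Z = rank_similarity_transform(X, filters)                # nonlinear mapping
clf = LogisticRegression(max_iter=1000).fit(Z, y)        # linear readout
print("train accuracy:", clf.score(Z, y))
```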
Related papers
- Efficient Similarity-based Passive Filter Pruning for Compressing CNNs [23.661189257759535]
Convolutional neural networks (CNNs) have shown great success in various applications, but their computational complexity and memory footprint are a bottleneck for deployment on resource-constrained devices.
Recent efforts to reduce the computational cost and memory overhead of CNNs involve similarity-based passive filter pruning methods.
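A hedged sketch of the general idea behind similarity-based passive filter pruning, assuming a simple greedy rule (drop any filter nearly collinear with one already kept); the paper's actual criterion may differ.

```python
# Generic similarity-based filter pruning: filters whose weights are highly
# similar to an already-kept filter are assumed redundant and removed.
import numpy as np

def prune_similar_filters(filters, threshold=0.95):
    """filters: (n_filters, k*k*c_in) flattened conv kernels."""
    F = filters / np.linalg.norm(filters, axis=1, keepdims=True)
    keep = []
    for i in range(len(F)):
        # Drop filter i if it is a near-duplicate of any kept filter.
        if all(abs(F[i] @ F[j]) < threshold for j in keep):
            keep.append(i)
    return np.array(keep)

rng = np.random.default_rng(1)
base = rng.normal(size=(8, 27))                    # 8 distinct 3x3x3 kernels
noisy_copies = base + 0.01 * rng.normal(size=base.shape)
filters = np.vstack([base, noisy_copies])          # 16 filters, half redundant
print("kept filter indices:", prune_similar_filters(filters))
```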
arXiv Detail & Related papers (2022-10-27T09:57:47Z) - Large-Margin Representation Learning for Texture Classification [67.94823375350433]
This paper presents a novel approach combining convolutional layers (CLs) and large-margin metric learning for training supervised models on small datasets for texture classification.
The experimental results on texture and histopathologic image datasets have shown that the proposed approach achieves competitive accuracy with lower computational cost and faster convergence when compared to equivalent CNNs.
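One plausible reading of the large-margin component is a triplet hinge loss on embeddings; the sketch below is that generic metric-learning loss, not the paper's exact formulation.

```python
# Triplet hinge loss: pull same-texture embeddings together and push
# different-texture embeddings at least `margin` apart.
import numpy as np

def triplet_margin_loss(anchor, positive, negative, margin=1.0):
    d_pos = np.sum((anchor - positive) ** 2, axis=1)
    d_neg = np.sum((anchor - negative) ** 2, axis=1)
    return np.maximum(0.0, d_pos - d_neg + margin).mean()

rng = np.random.default_rng(2)
a, p = rng.normal(size=(32, 16)), rng.normal(size=(32, 16))
n = rng.normal(loc=3.0, size=(32, 16))             # far-away negatives
print("loss:", triplet_margin_loss(a, p, n))       # near zero: margin satisfied
```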
arXiv Detail & Related papers (2022-06-17T04:07:45Z) - Rubik's Cube Operator: A Plug And Play Permutation Module for Better
Arranging High Dimensional Industrial Data in Deep Convolutional Processes [6.467208324670583]
Convolutional neural networks (CNNs) have been widely applied to process industrial data.
Unlike images, the information in industrial data is not necessarily spatially ordered.
We propose a Rubik's Cube Operator (RCO) to adaptively permute the data organization of the industrial data.
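A loose sketch of the plug-and-play permutation idea: reorder rows and columns of a data block by (here frozen) learnable scores before convolution. The real RCO is more elaborate; only the "learn a data arrangement" notion is shown.

```python
# Hedged permutation-module sketch: rows/columns of a 2D industrial-data
# block are reordered by score vectors that would be learned in practice.
import numpy as np

def permute_block(X, row_scores, col_scores):
    """Reorder rows/columns of X by descending learned scores."""
    return X[np.argsort(-row_scores)][:, np.argsort(-col_scores)]

rng = np.random.default_rng(3)
X = rng.normal(size=(6, 5))                 # unordered sensor-by-time block
row_scores = rng.normal(size=6)             # placeholder for learned scores
col_scores = rng.normal(size=5)
X_arranged = permute_block(X, row_scores, col_scores)
print(X_arranged.shape)                     # same data, CNN-friendlier layout
```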
arXiv Detail & Related papers (2022-03-24T08:13:56Z) - Do We Really Need a Learnable Classifier at the End of Deep Neural
Network? [118.18554882199676]
We study the potential of training a neural network for classification with the classifier weights randomly initialized as an equiangular tight frame (ETF) and fixed during training.
Our experimental results show that our method is able to achieve similar performances on image classification for balanced datasets.
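The fixed-classifier construction can be illustrated with the standard simplex-ETF formula: K equal-norm class vectors at equal, maximal pairwise angles, generated once and frozen. The sketch below assumes numpy and a random orthonormal basis.

```python
# Simplex equiangular tight frame (ETF) as a fixed classifier head:
# W = sqrt(K/(K-1)) * U (I_K - (1/K) 1 1^T), with U a d x K orthonormal basis.
import numpy as np

def simplex_etf(d, K, seed=0):
    rng = np.random.default_rng(seed)
    U, _ = np.linalg.qr(rng.normal(size=(d, K)))    # d x K, orthonormal cols
    M = np.sqrt(K / (K - 1)) * (np.eye(K) - np.ones((K, K)) / K)
    return U @ M                                    # d x K frozen classifier

W = simplex_etf(d=64, K=10)
G = W.T @ W                                         # Gram matrix of class vectors
print(np.allclose(np.diag(G), G[0, 0]))             # equal-norm class vectors
off_diag = G[~np.eye(10, dtype=bool)]
print(np.allclose(off_diag, G[0, 1]))               # equal pairwise angles
```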
arXiv Detail & Related papers (2022-03-17T04:34:28Z) - Probability-driven scoring functions in combining linear classifiers [0.913755431537592]
This research aims to build a new fusion method dedicated to ensembles of linear classifiers.
The proposed fusion method is compared with the reference method using multiple benchmark datasets taken from the KEEL repository.
The experimental study shows that, under certain conditions, some improvement may be obtained.
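As a baseline shape of such a fusion, the sketch below averages class-probability estimates from scikit-learn linear classifiers; the paper's probability-driven scoring functions are more refined than this plain soft vote.

```python
# Soft-vote fusion of linear base classifiers via class-probability estimates.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression, SGDClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=600, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

base = [LogisticRegression(max_iter=1000),
        SGDClassifier(loss="log_loss", random_state=0)]
probas = [clf.fit(X_tr, y_tr).predict_proba(X_te) for clf in base]
fused = np.mean(probas, axis=0)                  # probability-driven fusion
print("ensemble accuracy:", (fused.argmax(axis=1) == y_te).mean())
```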
arXiv Detail & Related papers (2021-09-16T08:58:32Z) - Unfolding Projection-free SDP Relaxation of Binary Graph Classifier via
GDPA Linearization [59.87663954467815]
Algorithm unfolding creates an interpretable and parsimonious neural network architecture by implementing each iteration of a model-based algorithm as a neural layer.
In this paper, leveraging a recent linear algebraic theorem called Gershgorin disc perfect alignment (GDPA), we unroll a projection-free algorithm for the semi-definite programming relaxation (SDR) of a binary graph classifier.
Experimental results show that our unrolled network outperformed pure model-based graph classifiers, and achieved comparable performance to pure data-driven networks but using far fewer parameters.
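Algorithm unfolding in miniature, as a hedged illustration only: unrolled gradient descent on a least-squares objective, with one learnable step size per "layer". The GDPA-based projection-free SDR solver the paper unrolls is far more specialised and is not reproduced here.

```python
# Each iteration of a model-based solver becomes a neural "layer" with its
# own learnable parameter (here, a per-layer step size).
import numpy as np

def unrolled_solver(A, b, step_sizes):
    """Each entry of step_sizes parameterises one unrolled layer."""
    x = np.zeros(A.shape[1])
    for t in step_sizes:                     # one loop iteration == one layer
        x = x - t * A.T @ (A @ x - b)        # gradient step as a neural layer
    return x

rng = np.random.default_rng(4)
A = rng.normal(size=(30, 10))
b = A @ rng.normal(size=10)
steps = np.full(20, 0.01)                    # would be learned per layer
x_hat = unrolled_solver(A, b, steps)
print("residual:", np.linalg.norm(A @ x_hat - b))
```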
arXiv Detail & Related papers (2021-09-10T07:01:15Z) - Generalized Learning Vector Quantization for Classification in
Randomized Neural Networks and Hyperdimensional Computing [4.4886210896619945]
We propose a modified RVFL network that avoids computationally expensive matrix operations during training.
The proposed approach achieved state-of-the-art accuracy on a collection of datasets from the UCI Machine Learning Repository.
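A hedged sketch of the combination described: a fixed random RVFL feature expansion followed by LVQ-style prototype updates, so training involves no least-squares solve or matrix inversion. The update rule and prototype count below are illustrative.

```python
# RVFL expansion (fixed random hidden layer + direct links) classified by
# LVQ1-style prototype updates instead of an expensive matrix solve.
import numpy as np

rng = np.random.default_rng(5)
d, h, K = 10, 100, 3
W_rand = rng.normal(size=(d, h))                  # fixed random projection

def rvfl_features(X):
    H = np.tanh(X @ W_rand)                       # random nonlinear expansion
    return np.hstack([X, H])                      # direct links + hidden units

X = rng.normal(size=(300, d))
y = rng.integers(0, K, size=300)                  # toy labels for illustration
Z = rvfl_features(X)

protos = np.stack([Z[y == k].mean(axis=0) for k in range(K)])
for z, label in zip(Z, y):                        # LVQ1-style online updates
    winner = np.argmin(np.linalg.norm(protos - z, axis=1))
    sign = 1.0 if winner == label else -1.0       # attract correct, repel wrong
    protos[winner] += sign * 0.01 * (z - protos[winner])
```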
arXiv Detail & Related papers (2021-06-17T21:17:17Z) - No Fear of Heterogeneity: Classifier Calibration for Federated Learning
with Non-IID Data [78.69828864672978]
A central challenge in training classification models in the real-world federated system is learning with non-IID data.
We propose a novel and simple algorithm called Classifier Calibration with Virtual Representations (CCVR), which adjusts the classifier using virtual representations sampled from an approximated Gaussian mixture model.
Experimental results demonstrate that CCVR achieves state-of-the-art performance on popular federated learning benchmarks including CIFAR-10, CIFAR-100, and CINIC-10.
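The calibration step can be sketched as: fit per-class Gaussian statistics on feature vectors, sample "virtual" representations from them, and retrain only the classifier head. The federated aggregation of those statistics is omitted; names below are illustrative.

```python
# Classifier calibration with virtual representations, centralised sketch:
# per-class Gaussians fitted on features, then the head is refit on samples.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(6)
feats = np.vstack([rng.normal(0.0, 1.0, size=(200, 32)),
                   rng.normal(2.0, 1.0, size=(200, 32))])
labels = np.repeat([0, 1], 200)

virtual_X, virtual_y = [], []
for k in (0, 1):
    Fk = feats[labels == k]
    mu, cov = Fk.mean(axis=0), np.cov(Fk, rowvar=False)
    virtual_X.append(rng.multivariate_normal(mu, cov, size=500))
    virtual_y.append(np.full(500, k))

head = LogisticRegression(max_iter=1000)
head.fit(np.vstack(virtual_X), np.concatenate(virtual_y))   # calibrated head
print("accuracy on real features:", head.score(feats, labels))
```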
arXiv Detail & Related papers (2021-06-09T12:02:29Z) - Rank-R FNN: A Tensor-Based Learning Model for High-Order Data
Classification [69.26747803963907]
Rank-R Feedforward Neural Network (FNN) is a tensor-based nonlinear learning model that imposes Canonical/Polyadic decomposition on its parameters.
First, it handles inputs as multilinear arrays, bypassing the need for vectorization, and can thus fully exploit the structural information along every data dimension.
We establish the universal approximation and learnability properties of Rank-R FNN, and we validate its performance on real-world hyperspectral datasets.
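The rank-R parameterisation for matrix-shaped inputs can be sketched as a sum of R rank-one bilinear terms per class, so inputs are never vectorised; the shapes and linear readout below are illustrative assumptions.

```python
# Rank-R bilinear scores for a matrix input X: logits[k] = sum_r u_r^T X v_r,
# keeping R*(I+J) parameters per class instead of I*J for a flat weight.
import numpy as np

rng = np.random.default_rng(7)
I, J, R, K = 8, 12, 3, 4                        # input shape, rank, classes
U = rng.normal(size=(K, R, I))                  # mode-1 factors
V = rng.normal(size=(K, R, J))                  # mode-2 factors

def rank_r_logits(X):
    # logits[k] = sum_r  U[k, r] @ X @ V[k, r]
    return np.einsum('kri,ij,krj->k', U, X, V)

X = rng.normal(size=(I, J))                     # e.g. a hyperspectral patch
print("class scores:", rank_r_logits(X))
```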
arXiv Detail & Related papers (2021-04-11T16:37:32Z) - Classification and Feature Transformation with Fuzzy Cognitive Maps [0.3299672391663526]
Fuzzy Cognitive Maps (FCMs) are considered a soft computing technique combining elements of fuzzy logic and recurrent neural networks.
In this work we propose an FCM based classifier with a fully connected map structure.
Weights were learned with a gradient algorithm, with log-loss or cross-entropy used as the cost function.
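A sketch of the FCM forward pass described: concepts (input features plus class nodes) are iterated through a fully connected weight matrix with a sigmoid, and the class nodes are read out as scores. Gradient training of W on cross-entropy is left out; this shows inference only.

```python
# Fuzzy-cognitive-map classifier forward pass over a fully connected map.
import numpy as np

def fcm_forward(x, W, n_classes, steps=3):
    a = np.concatenate([x, np.zeros(n_classes)])     # feature + class concepts
    for _ in range(steps):                           # recurrent map dynamics
        a = 1.0 / (1.0 + np.exp(-(W @ a)))           # sigmoid activation
    return a[-n_classes:]                            # class-node activations

rng = np.random.default_rng(8)
n_features, n_classes = 6, 3
W = rng.normal(scale=0.5, size=(n_features + n_classes,) * 2)
print("class activations:",
      fcm_forward(rng.normal(size=n_features), W, n_classes))
```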
arXiv Detail & Related papers (2021-03-08T22:26:24Z) - OSLNet: Deep Small-Sample Classification with an Orthogonal Softmax
Layer [77.90012156266324]
This paper aims to find a subspace of neural networks that can facilitate a large decision margin.
We propose the Orthogonal Softmax Layer (OSL), which makes the weight vectors in the classification layer remain orthogonal during both the training and test processes.
Experimental results demonstrate that the proposed OSL has better performance than the methods used for comparison on four small-sample benchmark datasets.
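One simple way to keep classification-layer weight vectors orthogonal throughout training is to give each class a disjoint coordinate block, as sketched below; the paper's OSL is mask-based in a similar spirit, though the exact details differ.

```python
# Disjoint-support masking: any gradient update on the masked weights
# preserves orthogonality of the class weight vectors by construction.
import numpy as np

def osl_mask(d, K):
    """Boolean mask assigning each class a disjoint coordinate block."""
    mask = np.zeros((d, K), dtype=bool)
    for k, block in enumerate(np.array_split(np.arange(d), K)):
        mask[block, k] = True
    return mask

rng = np.random.default_rng(9)
d, K = 64, 4
W = rng.normal(size=(d, K)) * osl_mask(d, K)   # masked weights stay orthogonal
G = W.T @ W
print(np.allclose(G[~np.eye(K, dtype=bool)], 0.0))   # True: orthogonal classes
```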
arXiv Detail & Related papers (2020-04-20T02:41:01Z)