A Network-Based High-Level Data Classification Algorithm Using
Betweenness Centrality
- URL: http://arxiv.org/abs/2009.07971v1
- Date: Wed, 16 Sep 2020 23:14:13 GMT
- Title: A Network-Based High-Level Data Classification Algorithm Using
Betweenness Centrality
- Authors: Esteban Vilca, Liang Zhao
- Abstract summary: We propose a pure network-based high-level classification technique that uses the betweenness centrality measure.
We test this model on nine different real datasets and compare it with nine other traditional and well-known classification models.
- Score: 7.3810864598379755
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Data classification is a major machine learning paradigm, which has been
widely applied to solve a large number of real-world problems. Traditional data
classification techniques consider only physical features (e.g., distance,
similarity, or distribution) of the input data. For this reason, they are
called \textit{low-level} classification techniques. The human (and animal)
brain, on the other hand, performs both low- and high-order learning and
readily identifies patterns according to the semantic meaning of the input data. Data
classification that considers not only physical attributes but also the pattern
formation is referred to as \textit{high-level} classification. Several
high-level classification techniques have been developed, which make use of
complex networks to characterize data patterns and have obtained promising
results. In this paper, we propose a pure network-based high-level
classification technique that uses the betweenness centrality measure. We test
this model on nine different real datasets and compare it with nine other
traditional and well-known classification models. The results show
competitive classification performance.
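The core measure behind the proposed technique, betweenness centrality, counts how many shortest paths pass through each node. The sketch below is an illustrative pure-Python implementation using Brandes' algorithm, not the authors' classification pipeline; in a network-based classifier of this kind, a test instance would be tentatively inserted into each class's subnetwork and the perturbation of such measures used to decide its label.

```python
from collections import deque

def betweenness_centrality(adj):
    """Unweighted betweenness centrality via Brandes' algorithm.
    adj: dict mapping each node to a list of its neighbours (undirected graph)."""
    bc = {v: 0.0 for v in adj}
    for s in adj:
        stack = []                       # nodes in order of non-decreasing distance
        pred = {v: [] for v in adj}      # predecessors on shortest paths from s
        sigma = dict.fromkeys(adj, 0)    # number of shortest paths from s
        sigma[s] = 1
        dist = dict.fromkeys(adj, -1)
        dist[s] = 0
        queue = deque([s])
        while queue:                     # BFS shortest-path counting
            v = queue.popleft()
            stack.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    queue.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    pred[w].append(v)
        delta = dict.fromkeys(adj, 0.0)  # dependency accumulation, farthest first
        while stack:
            w = stack.pop()
            for v in pred[w]:
                delta[v] += (sigma[v] / sigma[w]) * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    for v in bc:                         # undirected graph: each pair counted twice
        bc[v] /= 2.0
    return bc

# Path graph a-b-c-d-e: the middle node lies on the most shortest paths.
path = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"],
        "d": ["c", "e"], "e": ["d"]}
print(betweenness_centrality(path))  # {'a': 0.0, 'b': 3.0, 'c': 4.0, 'd': 3.0, 'e': 0.0}
```

On the path graph, the central node `c` scores highest because all shortest paths between the two halves cross it; this sensitivity to global structure is what makes the measure useful for capturing pattern formation beyond local distances.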
Related papers
- Complex Networks for Pattern-Based Data Classification [1.0445957451908694]
We present two network-based classification techniques utilizing unique measures derived from the Minimum Spanning Tree and Single Source Shortest Path.
Compared with existing classic high-level and machine-learning classification techniques, our proposed approaches show promising numerical results.
arXiv Detail & Related papers (2025-02-25T18:36:02Z)
- The Art of Misclassification: Too Many Classes, Not Enough Points [0.46873264197900916]
We introduce a formal entropy-based measure of classifiability, which quantifies the inherent difficulty of a classification problem.
This measure captures the degree of class overlap and aligns with human intuition, serving as an upper bound on classification performance.
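The paper defines its measure formally; as a rough illustration of the underlying idea that class overlap can be quantified with entropy, the hypothetical proxy below scores the label mixture in a point's local neighbourhood (0 bits for a pure neighbourhood, higher when classes overlap). This is a sketch of the general principle, not the paper's measure.

```python
import math
from collections import Counter

def neighbourhood_label_entropy(labels):
    """Shannon entropy (in bits) of the class labels in a local neighbourhood.
    0 means the neighbourhood is pure; log2(k) means k classes fully overlap."""
    counts = Counter(labels)
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

print(neighbourhood_label_entropy(["cat"] * 5))         # 0.0 -> no overlap
print(neighbourhood_label_entropy(["cat", "dog"] * 3))  # 1.0 -> maximal 2-class overlap
```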
arXiv Detail & Related papers (2025-02-12T00:57:53Z)
- Hidden Classification Layers: Enhancing linear separability between classes in neural networks layers [0.0]
We investigate how a training approach affects deep network performance.
We propose a neural network architecture that induces an error function involving the outputs of all the network layers.
arXiv Detail & Related papers (2023-06-09T10:52:49Z)
- Generative Adversarial Classification Network with Application to Network Traffic Classification [25.93711502488151]
We propose a joint data imputation and data classification method, termed generative adversarial classification network (GACN).
For scenarios where some data samples are unlabeled, we propose an extension termed semi-supervised GACN (SS-GACN), which uses the partially labeled data to improve classification accuracy.
We conduct experiments with real-world network traffic data traces, which demonstrate that GACN and SS-GACN impute the data features most important for classification more accurately, and outperform existing methods in classification accuracy.
arXiv Detail & Related papers (2023-03-19T15:00:47Z)
- Inspecting class hierarchies in classification-based metric learning models [0.0]
We train a softmax classifier and three metric learning models with several training options on benchmark and real-world datasets.
We evaluate hierarchical inference by inspecting learned class representatives, and we assess hierarchy-informed performance, i.e., classification and metric learning performance, with respect to predefined hierarchical structures.
arXiv Detail & Related papers (2023-01-26T12:40:12Z)
- A semantic hierarchical graph neural network for text classification [1.439766998338892]
We propose a new hierarchical graph neural network (HieGNN) which extracts information at the word, sentence, and document levels, respectively.
On several benchmark datasets, it achieves better or comparable results relative to several baseline methods.
arXiv Detail & Related papers (2022-09-15T03:59:31Z)
- Do We Really Need a Learnable Classifier at the End of Deep Neural Network? [118.18554882199676]
We study the potential of learning a neural network for classification with the classifier randomly initialized as an equiangular tight frame (ETF) and fixed during training.
Our experimental results show that our method achieves comparable performance on image classification with balanced datasets.
arXiv Detail & Related papers (2022-03-17T04:34:28Z)
- CvS: Classification via Segmentation For Small Datasets [52.821178654631254]
This paper presents CvS, a cost-effective classifier for small datasets that derives the classification labels from predicting the segmentation maps.
We evaluate the effectiveness of our framework on diverse problems, showing that CvS achieves much higher classification accuracy than previous methods when given only a handful of examples.
arXiv Detail & Related papers (2021-10-29T18:41:15Z)
- No Fear of Heterogeneity: Classifier Calibration for Federated Learning with Non-IID Data [78.69828864672978]
A central challenge in training classification models in the real-world federated system is learning with non-IID data.
We propose a novel and simple algorithm called Classifier Calibration with Virtual Representations (CCVR), which adjusts the classifier using virtual representations sampled from an approximated Gaussian mixture model.
Experimental results demonstrate that CCVR achieves state-of-the-art performance on popular federated learning benchmarks including CIFAR-10, CIFAR-100, and CINIC-10.
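The calibration idea can be sketched minimally: fit a per-class Gaussian over feature vectors, then draw virtual representations to recalibrate the classifier instead of sharing raw data. The code below uses a diagonal Gaussian per class for simplicity and is an assumption-laden illustration of the principle, not the paper's implementation.

```python
import random
import statistics

def fit_class_gaussians(features_by_class):
    """Per-class diagonal Gaussian: (mean, stdev) for each feature dimension."""
    params = {}
    for label, vectors in features_by_class.items():
        dims = list(zip(*vectors))  # transpose: one tuple per feature dimension
        params[label] = [(statistics.mean(d), statistics.pstdev(d)) for d in dims]
    return params

def sample_virtual_representations(params, label, n, rng=random):
    """Draw n virtual feature vectors for a class; only the Gaussian
    parameters (never the raw client data) are needed, which is what
    makes this attractive in a federated setting."""
    return [[rng.gauss(m, s) for m, s in params[label]] for _ in range(n)]

# Toy 2-D features for two classes; a classifier head would be re-fit on samples.
feats = {"cat": [[1.0, 2.0], [1.2, 1.8], [0.8, 2.2]],
         "dog": [[-1.0, 0.0], [-1.1, 0.2], [-0.9, -0.2]]}
params = fit_class_gaussians(feats)
virtual = sample_virtual_representations(params, "cat", 4)
```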
arXiv Detail & Related papers (2021-06-09T12:02:29Z)
- Theoretical Insights Into Multiclass Classification: A High-dimensional Asymptotic View [82.80085730891126]
We provide the first modern, precise analysis of linear multiclass classification.
Our analysis reveals that the classification accuracy is highly distribution-dependent.
The insights gained may pave the way for a precise understanding of other classification algorithms.
arXiv Detail & Related papers (2020-11-16T05:17:29Z)
- PK-GCN: Prior Knowledge Assisted Image Classification using Graph Convolution Networks [3.4129083593356433]
Similarity between classes can influence the performance of classification.
We propose a method that incorporates class similarity knowledge into convolutional neural network models.
Experimental results show that our model can improve classification accuracy, especially when the amount of available data is small.
arXiv Detail & Related papers (2020-09-24T18:31:35Z)
- ReMarNet: Conjoint Relation and Margin Learning for Small-Sample Image Classification [49.87503122462432]
We introduce a novel neural network termed Relation-and-Margin learning Network (ReMarNet).
Our method assembles two networks with different backbones to learn features that perform well under both classification mechanisms.
Experiments on four image datasets demonstrate that our approach is effective in learning discriminative features from a small set of labeled samples.
arXiv Detail & Related papers (2020-06-27T13:50:20Z)
- A Systematic Evaluation: Fine-Grained CNN vs. Traditional CNN Classifiers [54.996358399108566]
We investigate the performance of landmark general-purpose CNN classifiers, which achieved top results on large-scale classification datasets, and compare them against state-of-the-art fine-grained classifiers.
We present an extensive evaluation on six datasets to determine whether fine-grained classifiers can elevate the baseline.
arXiv Detail & Related papers (2020-03-24T23:49:14Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.