PK-GCN: Prior Knowledge Assisted Image Classification using Graph
Convolution Networks
- URL: http://arxiv.org/abs/2009.11892v1
- Date: Thu, 24 Sep 2020 18:31:35 GMT
- Title: PK-GCN: Prior Knowledge Assisted Image Classification using Graph
Convolution Networks
- Authors: Xueli Xiao, Chunyan Ji, Thosini Bamunu Mudiyanselage, Yi Pan
- Abstract summary: Similarity between classes can influence the performance of classification.
We propose a method that incorporates class similarity knowledge into convolutional neural networks models.
Experimental results show that our model can improve classification accuracy, especially when the amount of available data is small.
- Score: 3.4129083593356433
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning has gained great success in various classification tasks.
Typically, deep learning models learn underlying features directly from data,
and no underlying relationship between classes is included. Similarity between
classes can influence the performance of classification. In this article, we
propose a method that incorporates class similarity knowledge into
convolutional neural networks models using a graph convolution layer. We
evaluate our method on two benchmark image datasets: MNIST and CIFAR10, and
analyze the results on different data and model sizes. Experimental results
show that our model can improve classification accuracy, especially when the
amount of available data is small.
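The abstract describes injecting prior class-similarity knowledge into a CNN through a graph convolution layer. A minimal PyTorch sketch of that idea follows; the class names (ClassSimilarityGCN, PKCNN), the residual fusion of the refined scores, and the row-normalization are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ClassSimilarityGCN(nn.Module):
    """Propagate per-class scores over a class-similarity graph,
    loosely following H' = sigma(A_hat H W)."""
    def __init__(self, num_classes, similarity):
        super().__init__()
        A = similarity + torch.eye(num_classes)                     # prior similarity plus self-loops
        self.register_buffer("A_hat", A / A.sum(1, keepdim=True))   # row-normalized adjacency
        self.lin = nn.Linear(num_classes, num_classes, bias=False)

    def forward(self, logits):                       # logits: (batch, num_classes)
        mixed = logits @ self.A_hat.t()               # each class score borrows from similar classes
        return F.relu(self.lin(mixed))

class PKCNN(nn.Module):
    """Any CNN backbone whose logits are refined by the class-similarity layer (a sketch,
    not the paper's exact model)."""
    def __init__(self, backbone, num_classes, similarity):
        super().__init__()
        self.backbone = backbone                      # CNN returning (batch, num_classes) logits
        self.gcn = ClassSimilarityGCN(num_classes, similarity)

    def forward(self, x):
        logits = self.backbone(x)
        return logits + self.gcn(logits)              # residual fusion of prior knowledge
```

Here `similarity` is a (num_classes x num_classes) prior-knowledge matrix, e.g. encoding that some MNIST digits or CIFAR10 categories are more easily confused with each other than with the rest.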
Related papers
- Subgraph Clustering and Atom Learning for Improved Image Classification [4.499833362998488]
We present the Graph Sub-Graph Network (GSN), a novel hybrid image classification model merging the strengths of Convolutional Neural Networks (CNNs) for feature extraction and Graph Neural Networks (GNNs) for structural modeling.
GSN employs k-means clustering to group graph nodes into clusters, facilitating the creation of subgraphs.
These subgraphs are then utilized to learn representative atoms for dictionary learning, enabling the identification of sparse, class-distinguishable features.
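The key step above is partitioning graph nodes with k-means to obtain cluster-induced subgraphs. A rough sketch of that step, with hypothetical node features, an arbitrary cluster count, and scikit-learn's KMeans standing in for whatever clustering the authors use:

```python
import numpy as np
from sklearn.cluster import KMeans

def build_subgraphs(node_features, adjacency, n_clusters=4):
    """Group nodes by k-means on their features and return one subgraph
    (node indices + induced adjacency) per cluster. Illustrative only."""
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(node_features)
    subgraphs = []
    for c in range(n_clusters):
        idx = np.where(labels == c)[0]
        sub_adj = adjacency[np.ix_(idx, idx)]   # adjacency restricted to the cluster
        subgraphs.append((idx, sub_adj))
    return subgraphs

# toy usage: 12 nodes with 8-dim features and a random symmetric adjacency
rng = np.random.default_rng(0)
feats = rng.normal(size=(12, 8))
adj = (rng.random((12, 12)) > 0.7).astype(float)
adj = np.maximum(adj, adj.T)
for idx, sub in build_subgraphs(feats, adj):
    print(idx, sub.shape)
```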
arXiv Detail & Related papers (2024-07-20T06:32:00Z) - Feature Activation Map: Visual Explanation of Deep Learning Models for
Image Classification [17.373054348176932]
In this work, a post-hoc interpretation tool named feature activation map (FAM) is proposed.
FAM can interpret deep learning models that do not use a fully connected (FC) layer as the classifier.
Experiments conducted on ten deep learning models for few-shot image classification, contrastive learning image classification and image retrieval tasks demonstrate the effectiveness of the proposed FAM algorithm.
arXiv Detail & Related papers (2023-07-11T05:33:46Z) - Provably Learning Diverse Features in Multi-View Data with Midpoint Mixup [14.37428912254029]
Mixup is a data augmentation technique that relies on training using random convex combinations of data points and their labels.
We focus on classification problems in which each class may have multiple associated features (or views) that can be used to predict the class correctly.
Our main theoretical results demonstrate that, for a non-trivial class of data distributions with two features per class, training a 2-layer convolutional network using empirical risk minimization can lead to learning only one feature for almost all classes while training with a specific instantiation of Mixup succeeds in learning both features for every class.
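Mixup itself is easy to write down; the sketch below shows a generic training step with the mixing coefficient fixed at 0.5, the "midpoint" case the paper analyzes (standard Mixup would instead draw the coefficient from a Beta distribution). It is a generic illustration, not the paper's theoretical setup.

```python
import torch
import torch.nn.functional as F

def mixup_step(model, x, y, lam=0.5):
    """One training loss with (midpoint) Mixup: mix random pairs of inputs
    and mix their cross-entropy losses with the same coefficient."""
    perm = torch.randperm(x.size(0))
    x_mixed = lam * x + (1 - lam) * x[perm]       # convex combination of inputs
    logits = model(x_mixed)
    # equivalent to training on the correspondingly mixed (soft) labels
    return lam * F.cross_entropy(logits, y) + (1 - lam) * F.cross_entropy(logits, y[perm])
```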
arXiv Detail & Related papers (2022-10-24T18:11:37Z) - Do We Really Need a Learnable Classifier at the End of Deep Neural
Network? [118.18554882199676]
We study the potential of learning a neural network for classification with the classifier randomly initialized as an equiangular tight frame (ETF) and fixed during training.
Our experimental results show that our method is able to achieve similar performance on image classification for balanced datasets.
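The fixed classifier referred to above is a simplex ETF. A sketch of the usual construction, sqrt(K/(K-1)) * U (I - 11^T/K) with U a random partial orthogonal matrix, frozen as the final linear layer; the dimensions below are arbitrary examples.

```python
import torch
import torch.nn as nn

def simplex_etf(feat_dim, num_classes):
    """Simplex ETF classifier weights: sqrt(K/(K-1)) * U (I - 11^T/K),
    where U has orthonormal columns (requires feat_dim >= num_classes)."""
    rand = torch.randn(feat_dim, num_classes)
    u, _ = torch.linalg.qr(rand)                     # orthonormal columns, (feat_dim, num_classes)
    k = num_classes
    center = torch.eye(k) - torch.ones(k, k) / k
    return ((k / (k - 1)) ** 0.5) * (u @ center)     # (feat_dim, num_classes)

# use it as a frozen linear classifier on top of any feature extractor
feat_dim, num_classes = 512, 10
classifier = nn.Linear(feat_dim, num_classes, bias=False)
classifier.weight.data = simplex_etf(feat_dim, num_classes).t()
classifier.weight.requires_grad_(False)              # fixed during training
```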
arXiv Detail & Related papers (2022-03-17T04:34:28Z) - Multi network InfoMax: A pre-training method involving graph
convolutional networks [0.0]
This paper presents a pre-training method involving graph convolutional/neural networks (GCNs/GNNs).
The learned high-level graph latent representations help increase performance for downstream graph classification tasks.
We apply our method to a neuroimaging dataset for classifying subjects into healthy control (HC) and schizophrenia (SZ) groups.
arXiv Detail & Related papers (2021-11-01T21:53:20Z) - CvS: Classification via Segmentation For Small Datasets [52.821178654631254]
This paper presents CvS, a cost-effective classifier for small datasets that derives the classification labels from predicting the segmentation maps.
We evaluate the effectiveness of our framework on diverse problems showing that CvS is able to achieve much higher classification results compared to previous methods when given only a handful of examples.
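One simple way to reduce a predicted segmentation map to a classification label, roughly in the spirit of CvS, is a majority vote over foreground pixels. The decision rule below (background assumed at index 0) is an assumption for illustration, not necessarily the paper's exact reduction.

```python
import torch

def label_from_segmentation(seg_logits):
    """seg_logits: (batch, num_classes, H, W) output of a segmentation network.
    Returns one image-level label per example by majority vote over
    predicted foreground pixels (class 0 treated as background)."""
    pred = seg_logits.argmax(dim=1)                  # (batch, H, W) per-pixel class
    labels = []
    for p in pred:
        counts = torch.bincount(p.flatten(), minlength=seg_logits.size(1))
        counts[0] = 0                                # ignore background votes
        labels.append(int(counts.argmax()))
    return torch.tensor(labels)
```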
arXiv Detail & Related papers (2021-10-29T18:41:15Z) - Calibrating Class Activation Maps for Long-Tailed Visual Recognition [60.77124328049557]
We present two effective modifications of CNNs to improve network learning from a long-tailed distribution.
First, we present a Class Activation Map Calibration (CAMC) module to improve the learning and prediction of network classifiers.
Second, we investigate the use of normalized classifiers for representation learning in long-tailed problems.
arXiv Detail & Related papers (2021-08-29T05:45:03Z) - A Comparison of Deep Learning Classification Methods on Small-scale
Image Data set: from Convolutional Neural Networks to Visual Transformers [18.58928427116305]
This article explains the application and characteristics of convolutional neural networks and visual transformers.
A series of experiments are carried out on the small datasets by using various models.
The recommended deep learning model is given according to the model application environment.
arXiv Detail & Related papers (2021-07-16T04:13:10Z) - ECKPN: Explicit Class Knowledge Propagation Network for Transductive
Few-shot Learning [53.09923823663554]
Class-level knowledge can be easily learned by humans from just a handful of samples.
We propose an Explicit Class Knowledge Propagation Network (ECKPN) to address this problem.
We conduct extensive experiments on four few-shot classification benchmarks, and the experimental results show that the proposed ECKPN significantly outperforms the state-of-the-art methods.
arXiv Detail & Related papers (2021-06-16T02:29:43Z) - No Fear of Heterogeneity: Classifier Calibration for Federated Learning
with Non-IID Data [78.69828864672978]
A central challenge in training classification models in the real-world federated system is learning with non-IID data.
We propose a novel and simple algorithm called Classifier Calibration with Virtual Representations (CCVR), which adjusts the classifier using virtual representations sampled from an approximated Gaussian mixture model.
Experimental results demonstrate that CCVR achieves state-of-the-art performance on popular federated learning benchmarks including CIFAR-10, CIFAR-100, and CINIC-10.
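CCVR's calibration step can be sketched as fine-tuning only the classifier head on virtual features sampled from per-class Gaussian statistics; the diagonal covariances and the hyperparameters below are simplifying assumptions, not the paper's exact procedure.

```python
import torch
import torch.nn.functional as F

def calibrate_classifier(classifier, class_means, class_stds,
                         samples_per_class=100, epochs=10, lr=1e-2):
    """Fine-tune only the classifier head on virtual features drawn from
    per-class Gaussians (diagonal covariance assumed for simplicity)."""
    num_classes = class_means.size(0)
    opt = torch.optim.SGD(classifier.parameters(), lr=lr)
    labels = torch.arange(num_classes).repeat_interleave(samples_per_class)
    for _ in range(epochs):
        means = class_means.repeat_interleave(samples_per_class, dim=0)
        stds = class_stds.repeat_interleave(samples_per_class, dim=0)
        feats = means + stds * torch.randn_like(means)   # virtual representations
        loss = F.cross_entropy(classifier(feats), labels)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return classifier
```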
arXiv Detail & Related papers (2021-06-09T12:02:29Z) - ReMarNet: Conjoint Relation and Margin Learning for Small-Sample Image
Classification [49.87503122462432]
We introduce a novel neural network termed Relation-and-Margin learning Network (ReMarNet).
Our method assembles two networks of different backbones so as to learn the features that can perform excellently in both of the aforementioned two classification mechanisms.
Experiments on four image datasets demonstrate that our approach is effective in learning discriminative features from a small set of labeled samples.
arXiv Detail & Related papers (2020-06-27T13:50:20Z)