Data Augmentation and Clustering for Vehicle Make/Model Classification
- URL: http://arxiv.org/abs/2009.06679v1
- Date: Mon, 14 Sep 2020 18:24:31 GMT
- Title: Data Augmentation and Clustering for Vehicle Make/Model Classification
- Authors: Mohamed Nafzi, Michael Brauckmann, Tobias Glasmachers
- Abstract summary: We present a way to exploit a training data set of vehicles released in different years and captured from different perspectives.
We also present the efficacy of clustering in enhancing make/model classification.
A deeper convolutional neural network based on the ResNet architecture was designed and trained for vehicle make/model classification.
- Score: 0.0
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Vehicle shape information is very important in Intelligent Traffic Systems
(ITS). In this paper we present a way to exploit a training data set of
vehicles released in different years and captured from different perspectives.
We also present the efficacy of clustering in enhancing make/model
classification. Both steps led to improved classification results and greater
robustness. A deeper convolutional neural network based on the ResNet
architecture was designed and trained for vehicle make/model classification.
The unequal class distribution of the training data induces an a priori
probability. Eliminating it, by removing the bias and hard-normalizing the
centroids in the classification layer, improves the classification results. An
application was developed and used to manually test vehicle re-identification
on video data based on make/model and color classification. This work was
partially funded under a grant.
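The bias removal and hard normalization of the centroids in the classification layer can be illustrated with a short PyTorch sketch. This is a minimal reconstruction from the abstract, not the authors' code: the module name, the embedding size, and the use of k-means for the clustering step are assumptions.

```python
# Minimal sketch, assuming a backbone that outputs an embedding vector.
# Names and sizes are placeholders, not the authors' released implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F
from sklearn.cluster import KMeans


class NormalizedClassifier(nn.Module):
    """Classification layer with no bias term and hard-normalized centroids.

    Dropping the bias and forcing every class centroid onto the unit sphere
    removes the a priori probability induced by the unequal class
    distribution of the training data.
    """

    def __init__(self, embedding_dim: int, num_classes: int):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(num_classes, embedding_dim) * 0.01)

    def forward(self, embeddings: torch.Tensor) -> torch.Tensor:
        centroids = F.normalize(self.weight, dim=1)   # hard normalization
        return embeddings @ centroids.t()             # logits, no bias added


def split_class_into_clusters(embeddings, n_clusters: int = 3):
    """Illustration only: the abstract does not name the clustering algorithm.
    K-means over the embeddings of one make/model class could separate
    perspectives or release years into sub-classes for training."""
    return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(embeddings)


if __name__ == "__main__":
    head = NormalizedClassifier(embedding_dim=512, num_classes=4)  # toy sizes
    feats = torch.randn(8, 512)
    print(head(feats).shape)                          # torch.Size([8, 4])
    print(split_class_into_clusters(feats.numpy()))   # sub-cluster ids
```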
Related papers
- Latent Enhancing AutoEncoder for Occluded Image Classification [2.6217304977339473]
We introduce LEARN: Latent Enhancing feAture Reconstruction Network.
LEARN is an auto-encoder based network that can be incorporated into a classification model before its head.
On the OccludedPASCAL3D+ dataset, the proposed LEARN outperforms standard classification models.
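As a rough illustration of where such a latent-enhancing auto-encoder would sit, the PyTorch sketch below places a small feature-space auto-encoder between a backbone and the classification head; layer sizes and module names are assumptions, not the paper's architecture.

```python
# Illustrative sketch only: a feature-space auto-encoder inserted before the
# classification head. Dimensions and module names are assumptions.
import torch
import torch.nn as nn


class LatentEnhancer(nn.Module):
    """Auto-encoder operating on backbone features rather than raw images."""

    def __init__(self, feat_dim: int = 512, bottleneck: int = 128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(feat_dim, bottleneck), nn.ReLU())
        self.decoder = nn.Linear(bottleneck, feat_dim)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(feats))


class ClassifierWithEnhancer(nn.Module):
    """Backbone -> latent enhancer -> classification head."""

    def __init__(self, backbone: nn.Module, feat_dim: int, num_classes: int):
        super().__init__()
        self.backbone = backbone
        self.enhancer = LatentEnhancer(feat_dim)
        self.head = nn.Linear(feat_dim, num_classes)

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        feats = self.backbone(images)
        return self.head(self.enhancer(feats))


if __name__ == "__main__":
    # Toy backbone: flatten a 32x32 grayscale image to a 512-d feature (assumption).
    backbone = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 512), nn.ReLU())
    model = ClassifierWithEnhancer(backbone, feat_dim=512, num_classes=10)
    print(model(torch.randn(2, 1, 32, 32)).shape)  # torch.Size([2, 10])
```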
arXiv Detail & Related papers (2024-02-10T12:22:31Z)
- Do We Really Need a Learnable Classifier at the End of Deep Neural Network? [118.18554882199676]
We study the potential of learning a neural network for classification with the classifier randomly initialized as a simplex equiangular tight frame (ETF) and fixed during training.
Our experimental results show that this method achieves similar performance on image classification for balanced datasets.
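A minimal sketch of the fixed-classifier idea studied there: the last layer is set to a simplex ETF and never updated, so only the feature extractor is trained. The construction follows the standard simplex-ETF formula; names and dimensions are illustrative.

```python
# Sketch of a classification head fixed as a simplex ETF; not the paper's code.
import torch
import torch.nn as nn


def simplex_etf(num_classes: int, feat_dim: int) -> torch.Tensor:
    """Return (num_classes, feat_dim) simplex-ETF class vectors (feat_dim >= num_classes)."""
    assert feat_dim >= num_classes
    u, _ = torch.linalg.qr(torch.randn(feat_dim, num_classes))  # orthonormal columns
    k = num_classes
    m = (k / (k - 1)) ** 0.5 * u @ (torch.eye(k) - torch.ones(k, k) / k)
    return m.t()


class FixedETFHead(nn.Module):
    """Classifier whose weights are fixed for the whole training run."""

    def __init__(self, feat_dim: int, num_classes: int):
        super().__init__()
        # Registered as a buffer, so no gradients are ever computed for it.
        self.register_buffer("weight", simplex_etf(num_classes, feat_dim))

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        return feats @ self.weight.t()


if __name__ == "__main__":
    head = FixedETFHead(feat_dim=16, num_classes=10)
    print(head(torch.randn(4, 16)).shape)  # torch.Size([4, 10])
```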
arXiv Detail & Related papers (2022-03-17T04:34:28Z)
- Self-Supervised Class Incremental Learning [51.62542103481908]
Existing Class Incremental Learning (CIL) methods are based on a supervised classification framework sensitive to data labels.
When updated with new class data, they suffer from catastrophic forgetting: the model can no longer clearly distinguish old class data from the new.
In this paper, we explore the performance of Self-Supervised representation learning in Class Incremental Learning (SSCIL) for the first time.
arXiv Detail & Related papers (2021-11-18T06:58:19Z)
- Vehicle Behavior Prediction and Generalization Using Imbalanced Learning Techniques [1.3381749415517017]
This paper proposes an interaction-aware prediction model consisting of an LSTM autoencoder and an SVM classifier.
Evaluations show that the method enhances model performance, resulting in improved classification accuracy.
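The LSTM-autoencoder-plus-SVM pattern can be sketched as follows; the feature dimensions, hyper-parameters, and toy data are assumptions, and the reconstruction training loop is omitted.

```python
# Sketch: encode trajectory sequences with an LSTM auto-encoder, then train an
# SVM on the latent codes. All sizes and the toy data are placeholders.
import numpy as np
import torch
import torch.nn as nn
from sklearn.svm import SVC


class LSTMAutoencoder(nn.Module):
    def __init__(self, input_dim: int = 4, latent_dim: int = 32):
        super().__init__()
        self.encoder = nn.LSTM(input_dim, latent_dim, batch_first=True)
        self.decoder = nn.LSTM(latent_dim, input_dim, batch_first=True)

    def forward(self, seq: torch.Tensor):
        _, (h, _) = self.encoder(seq)                 # h: (1, batch, latent_dim)
        latent = h.squeeze(0)                         # (batch, latent_dim)
        # Repeat the latent code at every time step and decode it back.
        repeated = latent.unsqueeze(1).repeat(1, seq.size(1), 1)
        recon, _ = self.decoder(repeated)
        return recon, latent


if __name__ == "__main__":
    # Toy data: 64 trajectories, 20 time steps, 4 features; 2 behavior labels.
    seqs = torch.randn(64, 20, 4)
    labels = np.random.randint(0, 2, size=64)
    model = LSTMAutoencoder()
    with torch.no_grad():                             # reconstruction training omitted
        _, latents = model(seqs)
    clf = SVC(kernel="rbf").fit(latents.numpy(), labels)
    print(clf.predict(latents[:5].numpy()))
```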
arXiv Detail & Related papers (2021-09-22T11:21:20Z)
- Calibrating Class Activation Maps for Long-Tailed Visual Recognition [60.77124328049557]
We present two effective modifications of CNNs to improve network learning from long-tailed distribution.
First, we present a Class Activation Map Calibration (CAMC) module to improve the learning and prediction of network classifiers.
Second, we investigate the use of normalized classifiers for representation learning in long-tailed problems.
arXiv Detail & Related papers (2021-08-29T05:45:03Z)
- No Fear of Heterogeneity: Classifier Calibration for Federated Learning with Non-IID Data [78.69828864672978]
A central challenge in training classification models in the real-world federated system is learning with non-IID data.
We propose a novel and simple algorithm called Classifier Calibration with Virtual Representations (CCVR), which adjusts the classifier using virtual representations sampled from an approximated Gaussian mixture model.
Experimental results demonstrate that CCVR achieves state-of-the-art performance on popular federated learning benchmarks including CIFAR-10, CIFAR-100, and CINIC-10.
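A loose sketch of the calibration idea: approximate the class-conditional feature distributions with Gaussians, sample virtual representations from them, and re-fit only the classifier. The federated aggregation of per-client statistics in CCVR is omitted, and the logistic-regression head below is a stand-in, not the paper's setup.

```python
# Loose sketch: per-class Gaussian approximation of features, virtual samples,
# and classifier re-fitting. Toy features stand in for a real feature extractor.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy features: 3 classes with 64-d representations (illustration only).
feats = {c: rng.normal(loc=float(c), scale=1.0, size=(200, 64)) for c in range(3)}

# 1) Approximate each class with a Gaussian over its feature vectors.
stats = {c: (x.mean(axis=0), np.cov(x, rowvar=False)) for c, x in feats.items()}

# 2) Sample virtual representations from the fitted Gaussians.
virtual_x, virtual_y = [], []
for c, (mu, cov) in stats.items():
    virtual_x.append(rng.multivariate_normal(mu, cov, size=500))
    virtual_y.append(np.full(500, c))
virtual_x, virtual_y = np.concatenate(virtual_x), np.concatenate(virtual_y)

# 3) Calibrate (re-fit) only the classifier on the virtual representations.
classifier = LogisticRegression(max_iter=1000).fit(virtual_x, virtual_y)
print(classifier.score(virtual_x, virtual_y))
```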
arXiv Detail & Related papers (2021-06-09T12:02:29Z) - Learning and Evaluating Representations for Deep One-class
Classification [59.095144932794646]
We present a two-stage framework for deep one-class classification.
We first learn self-supervised representations from one-class data, and then build one-class classifiers on learned representations.
In experiments, we demonstrate state-of-the-art performance on visual domain one-class classification benchmarks.
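The two-stage recipe can be sketched as below: stage one produces representations of the one-class training data (the self-supervised encoder is mocked here with random features), and stage two fits a classical one-class model on them (a One-Class SVM is used as one possible choice, not necessarily the paper's).

```python
# Sketch of two-stage deep one-class classification; the encoder is mocked.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)

# Stage 1 (mocked): embeddings of the normal ("one-class") training images.
train_embeddings = rng.normal(size=(500, 128))

# Stage 2: one-class classifier built on the learned representation space.
detector = OneClassSVM(kernel="rbf", nu=0.1).fit(train_embeddings)

# At test time, score new embeddings; more negative scores indicate outliers.
test_embeddings = np.vstack([
    rng.normal(size=(5, 128)),              # in-distribution-like embeddings
    rng.normal(loc=6.0, size=(5, 128)),     # shifted, anomalous embeddings
])
print(detector.decision_function(test_embeddings))
```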
arXiv Detail & Related papers (2020-11-04T23:33:41Z) - PK-GCN: Prior Knowledge Assisted Image Classification using Graph
Convolution Networks [3.4129083593356433]
Similarity between classes can influence the performance of classification.
We propose a method that incorporates class similarity knowledge into convolutional neural network models.
Experimental results show that our model can improve classification accuracy, especially when the amount of available data is small.
arXiv Detail & Related papers (2020-09-24T18:31:35Z)
- Fine-Grained Visual Classification with Efficient End-to-end Localization [49.9887676289364]
We present an efficient localization module that can be fused with a classification network in an end-to-end setup.
We evaluate the new model on the three benchmark datasets CUB200-2011, Stanford Cars and FGVC-Aircraft.
arXiv Detail & Related papers (2020-05-11T14:07:06Z)