Food Classification with Convolutional Neural Networks and Multi-Class
Linear Discernment Analysis
- URL: http://arxiv.org/abs/2012.03170v3
- Date: Sun, 10 Dec 2023 08:17:57 GMT
- Title: Food Classification with Convolutional Neural Networks and Multi-Class
Linear Discernment Analysis
- Authors: Joshua Ball
- Abstract summary: Linear discriminant analysis (LDA) can be implemented in a multi-class classification method to increase separability of class features.
The paper argues both that CNN is superior to LDA for image classification and that LDA should nevertheless not be ruled out, particularly for binary cases.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Convolutional neural networks (CNNs) have been successful in representing the
fully-connected inferencing ability perceived in the human brain:
they take full advantage of the hierarchy-style patterns commonly seen in
complex data and develop more patterns using simple features. Countless
implementations of CNNs have shown how strong their ability is to learn these
complex patterns, particularly in the realm of image classification. However,
bringing a high-performance CNN to a so-called "state-of-the-art" level is
computationally costly. Even when using transfer learning, which utilizes the
very deep layers of models such as MobileNetV2, CNNs still take a great amount
of time and resources. Linear discriminant analysis (LDA), a
generalization of Fisher's linear discriminant, can be implemented in a
multi-class classification method to increase separability of class features
without needing a high-performance system to do so for image classification.
We therefore believe LDA also holds promise of performing well. In this
paper, we discuss our process of developing a robust CNN for food
classification as well as our effective implementation of multi-class LDA and
demonstrate (1) that the CNN is superior to LDA for image classification and
(2) why LDA should nonetheless not be counted out for image classification,
particularly for binary cases.
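As context for the approach described in the abstract, here is a minimal sketch of how a transfer-learned MobileNetV2 classifier and a multi-class LDA model might be set up side by side. It is illustrative only: the number of classes, image size, preprocessing, and hyperparameters are assumptions, not the configuration used in the paper, and the paper may apply LDA to different features.

```python
# Illustrative sketch only: a frozen MobileNetV2 backbone used two ways --
# (a) as the base of a small transfer-learned CNN classifier and
# (b) as a feature extractor whose pooled features are fed to multi-class LDA.
# Dataset, image size, and hyperparameters are placeholders, not the paper's setup.
import tensorflow as tf
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

NUM_CLASSES = 11  # hypothetical number of food categories

# ImageNet-pretrained backbone, frozen; global average pooling gives one
# 1280-dimensional feature vector per image.
backbone = tf.keras.applications.MobileNetV2(
    weights="imagenet", include_top=False, pooling="avg",
    input_shape=(224, 224, 3))
backbone.trainable = False

# (a) CNN route: lightweight softmax head on top of the frozen backbone.
inputs = tf.keras.Input(shape=(224, 224, 3))
x = tf.keras.applications.mobilenet_v2.preprocess_input(inputs)
x = backbone(x, training=False)
x = tf.keras.layers.Dropout(0.2)(x)
outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(x)
cnn = tf.keras.Model(inputs, outputs)
cnn.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
            metrics=["accuracy"])

# (b) LDA route: multi-class LDA fitted on the pooled backbone features.
# LDA projects onto at most NUM_CLASSES - 1 directions that maximize
# between-class relative to within-class scatter, and classifies there.
def pooled_features(images):
    images = tf.keras.applications.mobilenet_v2.preprocess_input(images)
    return backbone.predict(images, verbose=0)

# With x_train of shape (N, 224, 224, 3) and integer labels y_train:
# cnn.fit(x_train, y_train, epochs=5, batch_size=32)
# lda = LinearDiscriminantAnalysis().fit(pooled_features(x_train), y_train)
# print(lda.score(pooled_features(x_test), y_test))
```

Because the pooled features are low-dimensional, fitting the LDA model is cheap compared to end-to-end CNN training, which is the trade-off the abstract highlights.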
Related papers
- Domain-decomposed image classification algorithms using linear discriminant analysis and convolutional neural networks [0.0]
Two different domain decomposed CNN models are experimentally compared for different image classification problems.
The resulting models show improved classification accuracies compared to the corresponding, composed global CNN model.
A novel decomposed LDA strategy is proposed which also relies on a localization approach and which is combined with a small neural network model.
arXiv Detail & Related papers (2024-10-30T18:07:12Z)
- Model Parallel Training and Transfer Learning for Convolutional Neural Networks by Domain Decomposition [0.0]
Deep convolutional neural networks (CNNs) have been shown to be very successful in a wide range of image processing applications.
Due to their increasing number of model parameters and an increasing availability of large amounts of training data, parallelization strategies to efficiently train complex CNNs are necessary.
arXiv Detail & Related papers (2024-08-26T17:35:01Z)
- A Gradient Boosting Approach for Training Convolutional and Deep Neural Networks [0.0]
We introduce two procedures for training Convolutional Neural Networks (CNNs) and Deep Neural Networks based on Gradient Boosting (GB).
The presented models show superior performance in terms of classification accuracy with respect to standard CNN and Deep-NN with the same architectures.
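As a toy illustration of the gradient-boosting principle referenced here, the following sketch uses squared-error regression with small fully connected networks as weak learners; these are assumptions for illustration, not the paper's actual CNN/DNN training procedures.

```python
# Toy gradient boosting with small neural-network weak learners: each stage
# fits a tiny MLP to the residuals (negative gradient of squared error) of the
# current ensemble. Illustrative only; not the paper's CNN/DNN training method.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.1 * rng.normal(size=500)

learning_rate = 0.1
prediction = np.full_like(y, y.mean())   # stage-0 constant model
stages = []

for _ in range(20):
    residuals = y - prediction           # negative gradient of 0.5 * (y - f)^2
    weak = MLPRegressor(hidden_layer_sizes=(8,), max_iter=1000, random_state=0)
    weak.fit(X, residuals)               # small network fitted to the residuals
    prediction += learning_rate * weak.predict(X)
    stages.append(weak)

print("training MSE:", round(float(np.mean((y - prediction) ** 2)), 4))
```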
arXiv Detail & Related papers (2023-02-22T12:17:32Z)
- A Domain Decomposition-Based CNN-DNN Architecture for Model Parallel Training Applied to Image Recognition Problems [0.0]
A novel CNN-DNN architecture is proposed that naturally supports a model parallel training strategy.
The proposed approach can significantly accelerate the required training time compared to the global model.
Results show that the proposed approach can also help to improve the accuracy of the underlying classification problem.
arXiv Detail & Related papers (2023-02-13T18:06:59Z)
- Understanding CNN Fragility When Learning With Imbalanced Data [1.1444576186559485]
Convolutional neural networks (CNNs) have achieved impressive results on imbalanced image data, but they still have difficulty generalizing to minority classes.
We focus on their latent features to demystify CNN decisions on imbalanced data.
We show that important information regarding the ability of a neural network to generalize to minority classes resides in the class top-K CE and FE.
arXiv Detail & Related papers (2022-10-17T22:40:06Z)
- Visual Recognition with Deep Nearest Centroids [57.35144702563746]
We devise deep nearest centroids (DNC), a conceptually elegant yet surprisingly effective network for large-scale visual recognition.
Compared with parametric counterparts, DNC performs better on image classification (CIFAR-10, ImageNet) and greatly boosts pixel recognition (ADE20K, Cityscapes).
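To make the nearest-centroid idea concrete, the following is a minimal, illustrative sketch on generic feature vectors; it is not the DNC architecture or its training procedure.

```python
# Minimal nearest-centroid classification on deep features: each class is
# represented by the mean of its feature vectors, and a sample takes the label
# of the closest centroid. Illustrative only; not the DNC paper's method.
import numpy as np

def fit_centroids(features: np.ndarray, labels: np.ndarray) -> dict:
    """Mean feature vector per class label."""
    return {c: features[labels == c].mean(axis=0) for c in np.unique(labels)}

def predict(features: np.ndarray, centroids: dict) -> np.ndarray:
    classes = np.array(list(centroids))
    centers = np.stack([centroids[c] for c in classes])              # (C, D)
    dists = np.linalg.norm(features[:, None, :] - centers, axis=2)   # (N, C)
    return classes[dists.argmin(axis=1)]

# Tiny synthetic demo standing in for backbone embeddings.
rng = np.random.default_rng(0)
feats = np.concatenate([rng.normal(0, 1, (50, 16)), rng.normal(3, 1, (50, 16))])
labels = np.array([0] * 50 + [1] * 50)
centroids = fit_centroids(feats, labels)
print((predict(feats, centroids) == labels).mean())  # training accuracy
```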
arXiv Detail & Related papers (2022-09-15T15:47:31Z)
- Facilitated machine learning for image-based fruit quality assessment in developing countries [68.8204255655161]
Automated image classification is a common task for supervised machine learning in food science.
We propose an alternative method based on pre-trained vision transformers (ViTs).
It can be easily implemented with limited resources on a standard device.
arXiv Detail & Related papers (2022-07-10T19:52:20Z)
- Do We Really Need a Learnable Classifier at the End of Deep Neural Network? [118.18554882199676]
We study the potential of learning a neural network for classification with the classifier randomly initialized as an ETF and fixed during training.
Our experimental results show that our method is able to achieve similar performances on image classification for balanced datasets.
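For illustration, such a fixed classifier can be constructed as a randomly rotated simplex equiangular tight frame; the sketch below assumes the standard simplex-ETF construction and is not taken from that paper's code.

```python
# Illustrative construction of a simplex equiangular tight frame (ETF) used as
# a fixed (non-learnable) last-layer classifier; a sketch, not the paper's code.
import numpy as np

def simplex_etf(num_classes: int, feat_dim: int, seed: int = 0) -> np.ndarray:
    """Class prototypes (num_classes x feat_dim) whose pairwise cosine
    similarity is -1 / (num_classes - 1), i.e. a simplex ETF."""
    assert feat_dim >= num_classes
    rng = np.random.default_rng(seed)
    # Random orthonormal columns U (feat_dim x num_classes) via QR decomposition.
    u, _ = np.linalg.qr(rng.normal(size=(feat_dim, num_classes)))
    k = num_classes
    center = np.eye(k) - np.ones((k, k)) / k
    etf = np.sqrt(k / (k - 1)) * (u @ center)   # columns are unit-norm prototypes
    return etf.T                                 # one prototype per row

# Frozen classifier: logits for a feature batch F (N x feat_dim) are F @ W.T,
# with W never updated during training.
W = simplex_etf(num_classes=10, feat_dim=64)
cosines = W @ W.T
print(np.round(cosines[:3, :3], 3))  # ~1 on the diagonal, ~-0.111 off-diagonal
```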
arXiv Detail & Related papers (2022-03-17T04:34:28Z)
- Calibrating Class Activation Maps for Long-Tailed Visual Recognition [60.77124328049557]
We present two effective modifications of CNNs to improve network learning from long-tailed distribution.
First, we present a Class Activation Map Calibration (CAMC) module to improve the learning and prediction of network classifiers.
Second, we investigate the use of normalized classifiers for representation learning in long-tailed problems.
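A normalized (cosine) classifier of the kind mentioned here can be sketched as follows; the module name and scale value are illustrative assumptions, not the paper's implementation.

```python
# Illustrative normalized (cosine) classifier head: logits depend only on the
# direction of features and class weights, which reduces the bias toward head
# classes in long-tailed data. Sketch only; details differ from the paper.
import torch
import torch.nn.functional as F

class CosineClassifier(torch.nn.Module):
    def __init__(self, feat_dim: int, num_classes: int, scale: float = 16.0):
        super().__init__()
        self.weight = torch.nn.Parameter(torch.randn(num_classes, feat_dim))
        self.scale = scale  # temperature applied to the cosine similarities

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        # L2-normalize both features and class weights before the dot product.
        f = F.normalize(features, dim=1)
        w = F.normalize(self.weight, dim=1)
        return self.scale * f @ w.t()

logits = CosineClassifier(feat_dim=128, num_classes=10)(torch.randn(4, 128))
print(logits.shape)  # torch.Size([4, 10])
```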
arXiv Detail & Related papers (2021-08-29T05:45:03Z)
- Learning CNN filters from user-drawn image markers for coconut-tree image classification [78.42152902652215]
We present a method that needs a minimal set of user-selected images to train the CNN's feature extractor.
The method learns the filters of each convolutional layer from user-drawn markers in image regions that discriminate classes.
It does not rely on optimization based on backpropagation, and we demonstrate its advantages on the binary classification of coconut-tree aerial images.
arXiv Detail & Related papers (2020-08-08T15:50:23Z)
- Curriculum By Smoothing [52.08553521577014]
Convolutional Neural Networks (CNNs) have shown impressive performance in computer vision tasks such as image classification, detection, and segmentation.
We propose an elegant curriculum-based scheme that smooths the feature embeddings of a CNN using anti-aliasing or low-pass filters.
As the amount of information in the feature maps increases during training, the network is able to progressively learn better representations of the data.
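A minimal sketch of such a smoothing curriculum is given below, assuming a separable Gaussian low-pass filter whose width is annealed across epochs; the paper's exact kernels and schedule may differ.

```python
# Illustrative curriculum-by-smoothing: blur intermediate feature maps with a
# Gaussian low-pass filter and anneal the blur toward identity during training.
# Sketch only; kernel choice and schedule are assumptions, not the paper's.
import torch
import torch.nn.functional as F

def gaussian_kernel(sigma: float, size: int = 5) -> torch.Tensor:
    coords = torch.arange(size, dtype=torch.float32) - (size - 1) / 2
    g = torch.exp(-coords ** 2 / (2 * sigma ** 2))
    g = g / g.sum()
    return torch.outer(g, g)            # separable 2-D Gaussian, shape (size, size)

def smooth_feature_map(x: torch.Tensor, sigma: float) -> torch.Tensor:
    """Depthwise low-pass filtering of a (N, C, H, W) feature map."""
    if sigma <= 1e-3:                    # curriculum finished: no smoothing
        return x
    c = x.shape[1]
    k = gaussian_kernel(sigma).to(x.dtype).repeat(c, 1, 1, 1)  # (C, 1, 5, 5)
    return F.conv2d(x, k, padding=2, groups=c)

# During training, sigma would be decayed, e.g. sigma_t = sigma_0 * decay ** epoch,
# so early epochs see heavily smoothed features and later ones see sharper features.
features = torch.randn(2, 8, 32, 32)
print(smooth_feature_map(features, sigma=1.5).shape)  # torch.Size([2, 8, 32, 32])
```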
arXiv Detail & Related papers (2020-03-03T07:27:44Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences of its use.