Robust Pollen Imagery Classification with Generative Modeling and Mixup Training
- URL: http://arxiv.org/abs/2102.13143v1
- Date: Thu, 25 Feb 2021 19:39:24 GMT
- Title: Robust Pollen Imagery Classification with Generative Modeling and Mixup Training
- Authors: Jaideep Murkute
- Abstract summary: We present a robust deep learning framework that can generalize well for pollen grain aerobiological imagery classification.
We develop a convolutional neural network-based pollen grain classification approach and combine some of the best practices in deep learning for better generalization.
The proposed approach earned fourth place in the final rankings of the ICPR-2020 Pollen Grain Classification Challenge.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep learning approaches have shown great success in image classification tasks and can greatly aid the fast and reliable classification of pollen grain aerial imagery. However, oftentimes deep learning methods in the setting of natural images can suffer from generalization problems and yield poor performance on unseen test distributions. In this work, we present a robust deep learning framework that generalizes well for pollen grain aerobiological imagery classification. We develop a convolutional neural network-based pollen grain classification approach and combine some of the best practices in deep learning for better generalization. In addition to commonplace approaches such as data augmentation and weight regularization, we utilize implicit regularization methods such as manifold mixup to encourage smoother decision boundaries. We also make use of proven state-of-the-art architectural choices such as EfficientNet convolutional neural networks. Inspired by the success of generative modeling with variational autoencoders, we train models with a richer learning objective that allows the model to focus on the relevant parts of the image. Finally, we create an ensemble of neural networks to make the test-set predictions more robust. Our experiments show that these approaches improve generalization performance as measured by the weighted F1-score. The proposed approach earned fourth place in the final rankings of the ICPR-2020 Pollen Grain Classification Challenge, with a weighted F1-score of 0.972578, a macro-average F1-score of 0.950828, and a recognition accuracy of 0.972877.
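To make the mixup training and ensembling ideas above concrete, here is a minimal sketch assuming a PyTorch setup with scikit-learn available for metrics. The backbone, the layer at which hidden states are mixed, and all hyper-parameters are illustrative placeholders, not the authors' exact configuration (the paper uses EfficientNet backbones and additional VAE-style objectives not shown here).

```python
# Sketch: manifold mixup training step and ensemble evaluation with weighted F1.
# Illustrative only; architecture and hyper-parameters are placeholders.
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
from sklearn.metrics import f1_score


class SmallCNN(nn.Module):
    """Stand-in backbone; the paper uses EfficientNet variants."""

    def __init__(self, num_classes=4):
        super().__init__()
        self.block1 = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.block2 = nn.Sequential(nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1))
        self.head = nn.Linear(64, num_classes)

    def forward(self, x, mixup_lambda=None, permutation=None):
        h = self.block1(x)
        # Manifold mixup: interpolate hidden representations of paired examples.
        if mixup_lambda is not None:
            h = mixup_lambda * h + (1.0 - mixup_lambda) * h[permutation]
        h = self.block2(h).flatten(1)
        return self.head(h)


def manifold_mixup_step(model, images, labels, optimizer, alpha=0.2):
    """One training step with manifold mixup at an intermediate layer."""
    lam = float(np.random.beta(alpha, alpha))
    perm = torch.randperm(images.size(0))
    logits = model(images, mixup_lambda=lam, permutation=perm)
    # The same convex combination is applied to the losses of the two label sets.
    loss = lam * F.cross_entropy(logits, labels) + (1.0 - lam) * F.cross_entropy(logits, labels[perm])
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


def ensemble_weighted_f1(models, loader, device="cpu"):
    """Average softmax outputs of several models and report the weighted F1-score."""
    for m in models:
        m.eval()
    preds, targets = [], []
    with torch.no_grad():
        for images, labels in loader:
            probs = torch.stack([m(images.to(device)).softmax(dim=1) for m in models]).mean(0)
            preds.append(probs.argmax(dim=1).cpu())
            targets.append(labels)
    return f1_score(torch.cat(targets).numpy(), torch.cat(preds).numpy(), average="weighted")
```

Interpolating hidden activations rather than raw pixels is what distinguishes manifold mixup from standard input mixup; the ensemble helper simply averages softmax outputs across models before computing the weighted F1-score.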
Related papers
- FaFCNN: A General Disease Classification Framework Based on Feature Fusion Neural Networks [4.097623533226476]
We propose the Feature-aware Fusion Correlation Neural Network (FaFCNN), which introduces a feature-aware interaction module and a feature alignment module based on domain adversarial learning.
The experimental results show that training with augmented features obtained from a pre-trained gradient boosting decision tree yields larger performance gains than random-forest-based methods.
arXiv Detail & Related papers (2023-07-24T04:23:08Z)
- Forward-Forward Contrastive Learning [4.465144120325802]
We propose Forward Forward Contrastive Learning (FFCL) as a novel pretraining approach for medical image classification.
FFCL achieves superior performance (a 3.69% accuracy gain over an ImageNet-pretrained ResNet-18) compared with existing pretraining models on the pneumonia classification task.
arXiv Detail & Related papers (2023-05-04T15:29:06Z)
- Performance of GAN-based augmentation for deep learning COVID-19 image classification [57.1795052451257]
The biggest challenge in the application of deep learning to the medical domain is the availability of training data.
Data augmentation is a typical methodology used in machine learning when confronted with a limited data set.
In this work, a StyleGAN2-ADA generative adversarial network is trained on a limited COVID-19 chest X-ray image set.
arXiv Detail & Related papers (2023-04-18T15:39:58Z)
- Leveraging Angular Information Between Feature and Classifier for Long-tailed Learning: A Prediction Reformulation Approach [90.77858044524544]
We reformulate the recognition probabilities through included angles without re-balancing the classifier weights.
Inspired by the performance improvement of the predictive form reformulation, we explore the different properties of this angular prediction.
Our method obtains the best performance among peer methods on CIFAR10/100-LT and ImageNet-LT without pretraining.
arXiv Detail & Related papers (2022-12-03T07:52:48Z)
- Traditional Classification Neural Networks are Good Generators: They are Competitive with DDPMs and GANs [104.72108627191041]
We show that conventional neural network classifiers can generate high-quality images comparable to state-of-the-art generative models.
We propose a mask-based reconstruction module that makes the synthesis process semantic-gradient aware, so that plausible images can be synthesized.
We show that our method is also applicable to text-to-image generation when combined with image-text foundation models.
arXiv Detail & Related papers (2022-11-27T11:25:35Z)
- Decoupled Mixup for Generalized Visual Recognition [71.13734761715472]
We propose a novel "Decoupled-Mixup" method to train CNN models for visual recognition.
Our method decouples each image into discriminative and noise-prone regions, and then heterogeneously combines these regions to train CNN models.
Experiment results show the high generalization performance of our method on testing data that are composed of unseen contexts.
arXiv Detail & Related papers (2022-10-26T15:21:39Z)
- Adaptive Convolutional Dictionary Network for CT Metal Artifact Reduction [62.691996239590125]
We propose an adaptive convolutional dictionary network (ACDNet) for metal artifact reduction.
Our ACDNet can automatically learn the prior for artifact-free CT images via training data and adaptively adjust the representation kernels for each input CT image.
Our method inherits the clear interpretability of model-based methods and maintains the powerful representation ability of learning-based methods.
arXiv Detail & Related papers (2022-05-16T06:49:36Z)
- Application of Transfer Learning and Ensemble Learning in Image-level Classification for Breast Histopathology [9.037868656840736]
In Computer-Aided Diagnosis (CAD), traditional classification models mostly use a single network to extract features.
This paper proposes a deep ensemble model based on image-level labels for the binary classification of benign and malignant lesions.
Result: the ensemble model weighted by accuracy achieves an image-level binary classification accuracy of 98.90%.
arXiv Detail & Related papers (2022-04-18T13:31:53Z)
- Efficient and Robust Classification for Sparse Attacks [34.48667992227529]
We consider perturbations bounded by the $\ell_0$-norm, which have been shown to be effective attacks in the domains of image recognition, natural language processing, and malware detection.
We propose a novel defense method that consists of "truncation" and "adversarial training".
Motivated by the insights we obtain, we extend these components to neural network classifiers.
arXiv Detail & Related papers (2022-01-23T21:18:17Z)
- Pollen Grain Microscopic Image Classification Using an Ensemble of Fine-Tuned Deep Convolutional Neural Networks [2.824133171517646]
We present an ensemble approach for pollen grain microscopic image classification into four categories.
We develop a classification strategy that is based on fusion of four state-of-the-art fine-tuned convolutional neural networks.
We obtain an accuracy of 94.48% and a weighted F1-score of 94.54% on the ICPR 2020 Pollen Grain Classification Challenge training dataset.
arXiv Detail & Related papers (2020-11-15T01:25:46Z)
- Learning to Learn Parameterized Classification Networks for Scalable Input Images [76.44375136492827]
Convolutional Neural Networks (CNNs) do not exhibit predictable recognition behavior with respect to changes in input resolution.
We employ meta learners to generate convolutional weights of main networks for various input scales.
We further utilize knowledge distillation on the fly over model predictions based on different input resolutions.
arXiv Detail & Related papers (2020-07-13T04:27:25Z)