A Mosquito is Worth 16x16 Larvae: Evaluation of Deep Learning
Architectures for Mosquito Larvae Classification
- URL: http://arxiv.org/abs/2209.07718v1
- Date: Fri, 16 Sep 2022 04:49:50 GMT
- Title: A Mosquito is Worth 16x16 Larvae: Evaluation of Deep Learning
Architectures for Mosquito Larvae Classification
- Authors: Aswin Surya, David B. Peral, Austin VanLoon, Akhila Rajesh
- Abstract summary: This research introduces the application of the Vision Transformer (ViT) to improve image classification on Aedes and Culex larvae.
Two ViT models, ViT-Base and CvT-13, and two CNN models, ResNet-18 and ConvNeXT, were trained on mosquito larvae image data and compared to determine the most effective model to distinguish mosquito larvae as Aedes or Culex.
- Score: 0.04170934882758552
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Mosquito-borne diseases (MBDs), such as dengue virus, chikungunya virus, and
West Nile virus, cause over one million deaths globally every year. Because
many such diseases are spread by Aedes and Culex mosquitoes, tracking their
larvae becomes critical to mitigating the spread of MBDs. Even as citizen
science grows and obtains larger mosquito image datasets, the manual annotation
of mosquito images becomes ever more time-consuming and inefficient. Previous
research has used computer vision to identify mosquito species, and the
Convolutional Neural Network (CNN) has become the de facto standard for image
classification. However, these models typically require substantial
computational resources. This research introduces the application of the Vision
Transformer (ViT) in a comparative study to improve image classification on
Aedes and Culex larvae. Two ViT models, ViT-Base and CvT-13, and two CNN
models, ResNet-18 and ConvNeXT, were trained on mosquito larvae image data and
compared to determine the most effective model to distinguish mosquito larvae
as Aedes or Culex. Testing revealed that ConvNeXT obtained the greatest values
across all classification metrics, demonstrating its viability for mosquito
larvae classification. Based on these results, future research includes
creating a model specifically designed for mosquito larvae classification by
combining elements of CNN and transformer architecture.
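The comparison described in the abstract can be outlined with off-the-shelf pretrained backbones. The following is a minimal sketch, not the authors' released code: it assumes an ImageFolder-style larvae dataset (the paths larvae/train and larvae/test are hypothetical), loads ViT-Base, ResNet-18, and ConvNeXt from timm, fine-tunes each for the two-class Aedes/Culex task, and reports standard classification metrics. CvT-13 is not bundled with timm and would be loaded separately (e.g. from Hugging Face transformers) but evaluated the same way.

```python
# Hypothetical sketch: fine-tune several pretrained backbones on a two-class
# (Aedes vs. Culex) larvae dataset and compare standard classification metrics.
# Dataset paths and training settings are assumptions, not the paper's code.
import torch
import timm
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

device = "cuda" if torch.cuda.is_available() else "cpu"

tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
train_ds = datasets.ImageFolder("larvae/train", transform=tfm)  # assumed path
test_ds = datasets.ImageFolder("larvae/test", transform=tfm)    # assumed path
train_dl = DataLoader(train_ds, batch_size=32, shuffle=True)
test_dl = DataLoader(test_ds, batch_size=32)

# CvT-13 is not available in timm; it could be loaded via Hugging Face
# transformers and run through the same evaluation loop.
backbones = ["vit_base_patch16_224", "resnet18", "convnext_tiny"]

def finetune_and_eval(name, epochs=3):
    # Replace the classifier head with a 2-class head and fine-tune end to end.
    model = timm.create_model(name, pretrained=True, num_classes=2).to(device)
    opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for x, y in train_dl:
            x, y = x.to(device), y.to(device)
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    # Evaluate on the held-out test split.
    model.eval()
    preds, labels = [], []
    with torch.no_grad():
        for x, y in test_dl:
            preds += model(x.to(device)).argmax(1).cpu().tolist()
            labels += y.tolist()
    acc = accuracy_score(labels, preds)
    prec, rec, f1, _ = precision_recall_fscore_support(labels, preds, average="binary")
    return acc, prec, rec, f1

for name in backbones:
    print(name, finetune_and_eval(name))
```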
Related papers
- Swin-UMamba: Mamba-based UNet with ImageNet-based pretraining [85.08169822181685]
This paper introduces a novel Mamba-based model, Swin-UMamba, designed specifically for medical image segmentation tasks.
Swin-UMamba demonstrates superior performance by a large margin compared to CNNs, ViTs, and the latest Mamba-based models.
arXiv Detail & Related papers (2024-02-05T18:58:11Z)
- mAedesID: Android Application for Aedes Mosquito Species Identification using Convolutional Neural Network [0.0]
It is important to control dengue disease by reducing the spread of Aedes mosquito vectors.
Community awareness plays a crucial role in ensuring the success of Aedes control programmes and encourages communities to participate actively.
The mobile application mAedesID is developed to identify Aedes mosquito species using a deep learning Convolutional Neural Network (CNN) algorithm.
arXiv Detail & Related papers (2023-05-02T14:20:13Z)
- Autonomous Mosquito Habitat Detection Using Satellite Imagery and Convolutional Neural Networks for Disease Risk Mapping [0.0]
Mosquito vectors are known to transmit diseases that cause over one million deaths globally each year.
Modern approaches, such as drones, UAVs, and other aerial imaging technologies, are costly to implement and are most accurate only at finer spatial scales.
The proposed convolutional neural network (CNN) approach can be applied to disease risk mapping and can further guide preventative efforts on a more global scale.
arXiv Detail & Related papers (2022-03-09T00:54:59Z)
- A deep convolutional neural network for classification of Aedes albopictus mosquitoes [1.6758573326215689]
We introduce the application of two Deep Convolutional Neural Networks in a comparative study to automate the classification task.
We use the transfer learning principle to train two state-of-the-art architectures on the data provided by the Mosquito Alert project.
In addition, we applied explainable models based on the Grad-CAM algorithm to visualise the most discriminant regions of the classified images.
arXiv Detail & Related papers (2021-10-29T17:58:32Z)
- On the use of uncertainty in classifying Aedes Albopictus mosquitoes [1.6758573326215689]
Convolutional neural networks (CNNs) have been used by several studies to recognise mosquitoes in images.
This paper proposes using the Monte Carlo Dropout method to estimate uncertainty scores and rank the classified samples by them; a minimal sketch of this idea appears after this list.
arXiv Detail & Related papers (2021-10-29T16:58:25Z)
- High performing ensemble of convolutional neural networks for insect pest image detection [124.23179560022761]
Pest infestation is a major cause of crop damage and lost revenues worldwide.
We generate ensembles of CNNs based on different topologies.
Two new Adam algorithms for deep network optimization are proposed.
arXiv Detail & Related papers (2021-08-28T00:49:11Z)
- Categorical Relation-Preserving Contrastive Knowledge Distillation for Medical Image Classification [75.27973258196934]
We propose a novel Categorical Relation-preserving Contrastive Knowledge Distillation (CRCKD) algorithm, which takes the commonly used mean-teacher model as the supervisor.
With this regularization, the feature distribution of the student model shows higher intra-class similarity and inter-class variance.
With the contribution of the CCD and CRP, our CRCKD algorithm can distill the relational knowledge more comprehensively.
arXiv Detail & Related papers (2021-07-07T13:56:38Z)
- One-Shot Learning with Triplet Loss for Vegetation Classification Tasks [45.82374977939355]
The triplet loss function is one of the options that can significantly improve the accuracy of one-shot learning tasks.
Since 2015, many projects have used Siamese networks and this kind of loss for face recognition and object classification.
arXiv Detail & Related papers (2020-12-14T10:44:22Z)
- Fooling the primate brain with minimal, targeted image manipulation [67.78919304747498]
We propose an array of methods for creating minimal, targeted image perturbations that lead to changes in both neuronal activity and perception as reflected in behavior.
Our work shares the same goal as adversarial attacks, namely the manipulation of images with minimal, targeted noise that leads ANN models to misclassify them.
arXiv Detail & Related papers (2020-11-11T08:30:54Z)
- Classification of COVID-19 in CT Scans using Multi-Source Transfer Learning [91.3755431537592]
We propose the use of Multi-Source Transfer Learning to improve upon traditional Transfer Learning for the classification of COVID-19 from CT scans.
With our multi-source fine-tuning approach, our models outperformed baseline models fine-tuned with ImageNet.
Our best performing model was able to achieve an accuracy of 0.893 and a Recall score of 0.897, outperforming its baseline Recall score by 9.3%.
arXiv Detail & Related papers (2020-09-22T11:53:06Z)
- Automating the Surveillance of Mosquito Vectors from Trapped Specimens Using Computer Vision Techniques [2.9822608774312327]
Aedes aegypti and Anopheles stephensi mosquitoes (both of which are deadly vectors) are studied.
A CNN model based on Inception-ResNet V2 and Transfer Learning yielded an overall accuracy of 80% in classifying mosquitoes.
In particular, the accuracy of our model in classifying Aedes aegypti and Anopheles stephensi mosquitoes is amongst the highest.
arXiv Detail & Related papers (2020-05-25T15:58:27Z)
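As referenced in the uncertainty entry above, the following is a minimal sketch of the Monte Carlo Dropout idea, assuming a PyTorch classifier that contains Dropout layers; the pass count and the entropy-based ranking are illustrative choices, not that paper's exact implementation. Dropout stays active at inference, several stochastic forward passes are averaged, and samples are ranked by predictive entropy so the most uncertain ones can be reviewed first.

```python
# Hypothetical sketch of Monte Carlo Dropout for ranking classified samples by
# uncertainty: keep dropout sampling at test time, average several stochastic
# passes, and use predictive entropy as the uncertainty score.
import torch
import torch.nn.functional as F
from torch import nn

def enable_mc_dropout(model: nn.Module) -> None:
    """Put the model in eval mode but keep Dropout layers sampling."""
    model.eval()
    for m in model.modules():
        if isinstance(m, nn.Dropout):
            m.train()

@torch.no_grad()
def mc_dropout_uncertainty(model: nn.Module, x: torch.Tensor, passes: int = 20):
    enable_mc_dropout(model)
    # Stack class probabilities from several stochastic forward passes.
    probs = torch.stack([F.softmax(model(x), dim=1) for _ in range(passes)])
    mean_probs = probs.mean(dim=0)                        # (batch, classes)
    # Predictive entropy of the averaged distribution, higher = more uncertain.
    entropy = -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum(dim=1)
    return mean_probs.argmax(dim=1), entropy              # prediction + uncertainty

# Usage (illustrative): rank a batch of images from most to least uncertain so
# that ambiguous samples can be sent to a human annotator first.
# preds, unc = mc_dropout_uncertainty(model, images)
# review_order = unc.argsort(descending=True)
```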