Automating the Surveillance of Mosquito Vectors from Trapped Specimens Using Computer Vision Techniques
- URL: http://arxiv.org/abs/2005.12188v2
- Date: Tue, 21 Jul 2020 19:58:45 GMT
- Title: Automating the Surveillance of Mosquito Vectors from Trapped Specimens Using Computer Vision Techniques
- Authors: Mona Minakshi, Pratool Bharti, Willie B. McClinton III, Jamshidbek Mirzakhalov, Ryan M. Carney, Sriram Chellappan
- Abstract summary: Aedes aegypti and Anopheles stephensi mosquitoes (both of which are deadly vectors) are studied.
CNN model based on Inception-ResNet V2 and Transfer Learning yielded an overall accuracy of 80% in classifying mosquitoes.
In particular, the accuracy of our model in classifying Aedes aegypti and Anopheles stephensi mosquitoes is amongst the highest.
- Score: 2.9822608774312327
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Among all animals, mosquitoes are responsible for the most deaths worldwide.
Interestingly, not all types of mosquitoes spread diseases, but rather, a
select few alone are competent enough to do so. In the case of any disease
outbreak, an important first step is surveillance of vectors (i.e., those
mosquitoes capable of spreading diseases). To do this today, public health
workers lay several mosquito traps in the area of interest. Hundreds of
mosquitoes will get trapped. Naturally, among these hundreds, taxonomists have
to identify only the vectors to gauge their density. This process today is
manual, requires complex expertise/training, and is based on visual inspection
of each trapped specimen under a microscope. It is slow, stressful and
self-limiting. This paper presents an innovative solution to this problem. Our
technique assumes the presence of an embedded camera (similar to those in
smartphones) that can take pictures of trapped mosquitoes. The techniques
proposed here then process these images to automatically classify the
genus and species type. Our CNN model based on Inception-ResNet V2 and Transfer
Learning yielded an overall accuracy of 80% in classifying mosquitoes when
trained on 25,867 images of 250 trapped mosquito vector specimens captured via
many smart-phone cameras. In particular, the accuracy of our model in
classifying Aedes aegypti and Anopheles stephensi mosquitoes (both of which are
deadly vectors) is amongst the highest. We present important lessons learned
and practical impact of our techniques towards the end of the paper.
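As a rough sketch (not the authors' released code) of the transfer-learning setup the abstract describes — an Inception-ResNet V2 backbone with frozen features and a new classification head — in Keras. The class count, head layers, and hyperparameters here are assumptions; `weights=None` is used only to avoid a pretrained-weight download in this illustration:

```python
import tensorflow as tf

NUM_CLASSES = 9  # hypothetical; set to the number of genus/species labels in your data

# Backbone: in practice load weights="imagenet"; None here avoids a download
base = tf.keras.applications.InceptionResNetV2(
    include_top=False, weights=None, input_shape=(299, 299, 3))
base.trainable = False  # transfer learning: freeze the pretrained feature extractor

# New classification head on top of the frozen features
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

In practice one would load ImageNet weights and then fit the head on the trap images, optionally unfreezing the top of the backbone for fine-tuning.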
Related papers
- Insect Identification in the Wild: The AMI Dataset [35.41544843896443]
Insects represent half of all global biodiversity, yet many of the world's insects are disappearing.
Despite this crisis, data on insect diversity and abundance remain woefully inadequate.
We provide the first large-scale machine learning benchmarks for fine-grained insect recognition.
arXiv Detail & Related papers (2024-06-18T09:57:02Z)
- Automated detection of Zika and dengue in Aedes aegypti using neural spiking analysis [8.034395623865906]
Aedes aegypti mosquitoes are primary vectors for numerous medically important viruses.
No open-source neural spike classification method is currently available for mosquitoes.
We present an innovative artificial intelligence-based method to classify the neural spikes in uninfected, dengue-infected, and Zika-infected mosquitoes.
arXiv Detail & Related papers (2023-12-14T04:52:54Z)
- mAedesID: Android Application for Aedes Mosquito Species Identification using Convolutional Neural Network [0.0]
It is important to control dengue disease by reducing the spread of Aedes mosquito vectors.
Community awareness plays a crucial role in Aedes control programmes and encourages communities to participate actively.
Mobile application mAedesID is developed for identifying the Aedes mosquito species using a deep learning Convolutional Neural Network (CNN) algorithm.
arXiv Detail & Related papers (2023-05-02T14:20:13Z)
- A Mosquito is Worth 16x16 Larvae: Evaluation of Deep Learning Architectures for Mosquito Larvae Classification [0.04170934882758552]
This research introduces the application of the Vision Transformer (ViT) to improve image classification on Aedes and Culex larvae.
Two ViT models, ViT-Base and CvT-13, and two CNN models, ResNet-18 and ConvNeXT, were trained on mosquito larvae image data and compared to determine the most effective model to distinguish mosquito larvae as Aedes or Culex.
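The "16x16" in the title refers to ViT's patch embedding: the image is cut into fixed-size patches that are flattened into tokens. A minimal NumPy sketch, assuming square patches that evenly divide the image:

```python
import numpy as np

def image_to_patches(image, patch=16):
    """Split an (H, W, C) image into flattened (patch x patch x C) tokens,
    as in ViT's patch embedding (before the linear projection)."""
    H, W, C = image.shape
    assert H % patch == 0 and W % patch == 0
    rows, cols = H // patch, W // patch
    # (rows, patch, cols, patch, C) -> (rows, cols, patch, patch, C)
    patches = image.reshape(rows, patch, cols, patch, C).swapaxes(1, 2)
    return patches.reshape(rows * cols, patch * patch * C)
```

A 32x32 RGB image thus becomes 4 tokens of length 768; ViT then projects each token and feeds the sequence to a transformer encoder.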
arXiv Detail & Related papers (2022-09-16T04:49:50Z)
- A deep convolutional neural network for classification of Aedes albopictus mosquitoes [1.6758573326215689]
We introduce the application of two Deep Convolutional Neural Networks in a comparative study to automate the classification task.
We use the transfer learning principle to train two state-of-the-art architectures on the data provided by the Mosquito Alert project.
In addition, we applied explainable models based on the Grad-CAM algorithm to visualise the most discriminant regions of the classified images.
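Grad-CAM itself reduces to a gradient-weighted sum of the convolutional feature maps followed by a ReLU. A minimal sketch over precomputed activations and gradients (array shapes are assumptions, not the paper's code):

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """feature_maps: (H, W, C) activations of the last conv layer;
    gradients: (H, W, C) d(class score)/d(activation).
    Returns an (H, W) heatmap normalized to [0, 1]."""
    weights = gradients.mean(axis=(0, 1))          # one importance weight per channel
    cam = np.tensordot(feature_maps, weights, axes=([2], [0]))  # weighted channel sum
    cam = np.maximum(cam, 0)                       # ReLU: keep positive evidence only
    if cam.max() > 0:
        cam /= cam.max()
    return cam
```

The heatmap is then upsampled to the input resolution and overlaid on the image to show the discriminant regions.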
arXiv Detail & Related papers (2021-10-29T17:58:32Z)
- HumBugDB: A Large-scale Acoustic Mosquito Dataset [15.108701811353097]
This paper presents the first large-scale multi-species dataset of acoustic recordings of mosquitoes tracked continuously in free flight.
We present 20 hours of audio recordings that we have expertly labelled and tagged precisely in time.
18 hours of recordings contain annotations from 36 different species.
arXiv Detail & Related papers (2021-10-14T14:18:17Z)
- One-Shot Learning with Triplet Loss for Vegetation Classification Tasks [45.82374977939355]
The triplet loss function is one option that can significantly improve the accuracy of one-shot learning tasks.
Since 2015, many projects have used Siamese networks with this kind of loss for face recognition and object classification.
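The triplet loss pulls an anchor embedding toward a same-class positive and pushes it away from a different-class negative by at least a margin. A minimal NumPy sketch (the margin value is an arbitrary choice):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Batched triplet loss over embedding rows:
    max(||a - p||^2 - ||a - n||^2 + margin, 0)."""
    d_pos = np.sum((anchor - positive) ** 2, axis=-1)
    d_neg = np.sum((anchor - negative) ** 2, axis=-1)
    return np.maximum(d_pos - d_neg + margin, 0.0)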
arXiv Detail & Related papers (2020-12-14T10:44:22Z)
- Fooling the primate brain with minimal, targeted image manipulation [67.78919304747498]
We propose an array of methods for creating minimal, targeted image perturbations that lead to changes in both neuronal activity and perception as reflected in behavior.
Our work shares the same goal as adversarial attacks: manipulating images with minimal, targeted noise that leads ANN models to misclassify them.
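The flavor of minimal, targeted perturbation described can be sketched with an FGSM-style step (a simplification; the paper's actual methods target neuronal activity, and the epsilon budget here is an assumption):

```python
import numpy as np

def minimal_perturbation(image, grad, eps=0.01):
    """FGSM-style step: move each pixel by at most eps in the
    direction (sign of the gradient) that increases the target score."""
    adv = image + eps * np.sign(grad)
    return np.clip(adv, 0.0, 1.0)  # keep the result a valid image
```

The perturbation is bounded by eps per pixel, so the change stays nearly imperceptible while steering the model's (or, in the paper, the neurons') response.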
arXiv Detail & Related papers (2020-11-11T08:30:54Z)
- Cross-lingual Transfer Learning for COVID-19 Outbreak Alignment [90.12602012910465]
We train on Italy's early COVID-19 outbreak through Twitter and transfer to several other countries.
Our experiments show strong results with up to 0.85 Spearman correlation in cross-country predictions.
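The Spearman correlation reported above is simply the Pearson correlation of ranks. A minimal NumPy sketch (assumes no tied values, unlike library implementations):

```python
import numpy as np

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks.
    Minimal version without tie handling."""
    rx = np.argsort(np.argsort(x)).astype(float)  # rank of each element
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx @ ry) / np.sqrt((rx @ rx) * (ry @ ry)))
```

A value of 0.85, as in the paper, means the country-level time series agree strongly in rank order even if their absolute scales differ.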
arXiv Detail & Related papers (2020-06-05T02:04:25Z)
- Adversarial Fooling Beyond "Flipping the Label" [54.23547006072598]
CNNs show near-human or better-than-human performance in many critical tasks.
These attacks are potentially dangerous in real-life deployments.
We present a comprehensive analysis of several important adversarial attacks over a set of distinct CNN architectures.
arXiv Detail & Related papers (2020-04-27T13:21:03Z)
- MetaPoison: Practical General-purpose Clean-label Data Poisoning [58.13959698513719]
Data poisoning is an emerging threat in the context of neural networks.
We propose MetaPoison, a first-order method that approximates the bilevel problem via meta-learning and crafts poisons that fool neural networks.
We demonstrate for the first time successful data poisoning of models trained on the black-box Google Cloud AutoML API.
arXiv Detail & Related papers (2020-04-01T04:23:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.