Localization and Classification of Parasitic Eggs in Microscopic Images
Using an EfficientDet Detector
- URL: http://arxiv.org/abs/2208.01963v1
- Date: Wed, 3 Aug 2022 10:28:18 GMT
- Title: Localization and Classification of Parasitic Eggs in Microscopic Images
Using an EfficientDet Detector
- Authors: Nouar AlDahoul (1), Hezerul Abdul Karim (1), Shaira Limson Kee (2),
Myles Joshua Toledo Tan (2 and 3) ((1) Faculty of Engineering, Multimedia
University, Cyberjaya, Malaysia, (2) Department of Natural Sciences,
University of St. La Salle, Bacolod City, Philippines, (3) Department of
Chemical Engineering, University of St. La Salle, Bacolod City, Philippines)
- Abstract summary: We propose a multi-modal learning detector to localize parasitic eggs and categorize them into 11 categories.
Our results show robust performance with an accuracy of 92%, and an F1 score of 93%.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Intestinal parasitic infections (IPIs) caused by protozoan and helminth
parasites are among the most common infections in humans in low- and middle-income
countries (LMICs). They are regarded as a severe public health
concern, as they cause a wide array of potentially detrimental health
conditions. Researchers have been developing pattern recognition techniques for
the automatic identification of parasite eggs in microscopic images. Existing
solutions still need improvements to reduce diagnostic errors and generate
fast, efficient, and accurate results. Our paper addresses this and proposes a
multi-modal learning detector to localize parasitic eggs and categorize them
into 11 categories. The experiments were conducted on the novel
Chula-ParasiteEgg-11 dataset, which was used to train both an EfficientDet model
with an EfficientNet-v2 backbone and an EfficientNet-B7+SVM pipeline. The dataset
has 11,000 microscopic training images from 11 categories. Our results show robust
performance, with an accuracy of 92% and an F1 score of 93%. Additionally, the
IoU distribution illustrates the high localization capability of the detector.
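The localization capability mentioned above is evaluated through the distribution of intersection-over-union (IoU) scores between predicted and ground-truth boxes. A minimal sketch of the standard IoU computation for axis-aligned boxes follows; this is the generic metric, not the authors' evaluation code:

```python
def iou(box_a, box_b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    # Intersection rectangle: overlap of the two boxes.
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    # Union = sum of areas minus the double-counted intersection.
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

A detector's IoU distribution is obtained by matching each predicted box to its ground-truth box and histogramming these scores; a distribution concentrated near 1.0 indicates tight localization.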
Related papers
- Active Prompt Tuning Enables GPT-4o To Do Efficient Classification Of Microscopy Images [0.0]
Traditional deep learning-based methods for classifying cellular features in microscopy images require time- and labor-intensive processes for training models.
We previously proposed a solution that overcomes these challenges using OpenAI's GPT-4(V) model on a pilot dataset.
Results on the pilot dataset were equivalent in accuracy and with a substantial improvement in throughput efficiency compared to the baseline.
arXiv Detail & Related papers (2024-11-04T21:56:48Z) - Towards Robust Plant Disease Diagnosis with Hard-sample Re-mining
Strategy [6.844857856353672]
We propose a simple but effective training strategy called hard-sample re-mining (HSReM).
HSReM is designed to enhance the diagnostic performance of healthy data and simultaneously improve the performance of disease data.
Experiments show that our HSReM training strategy leads to a substantial improvement in the overall diagnostic performance on large-scale unseen data.
arXiv Detail & Related papers (2023-09-05T02:26:42Z) - Brain Imaging-to-Graph Generation using Adversarial Hierarchical Diffusion Models for MCI Causality Analysis [44.45598796591008]
Brain imaging-to-graph generation (BIGG) framework is proposed to map functional magnetic resonance imaging (fMRI) into effective connectivity for mild cognitive impairment analysis.
The hierarchical transformers in the generator are designed to estimate the noise at multiple scales.
Evaluations of the ADNI dataset demonstrate the feasibility and efficacy of the proposed model.
arXiv Detail & Related papers (2023-05-18T06:54:56Z) - Data-Efficient Vision Transformers for Multi-Label Disease
Classification on Chest Radiographs [55.78588835407174]
Vision Transformers (ViTs) have not been applied to this task despite their high classification performance on generic images.
ViTs do not rely on convolutions but on patch-based self-attention and in contrast to CNNs, no prior knowledge of local connectivity is present.
Our results show that while the performance between ViTs and CNNs is on par with a small benefit for ViTs, DeiTs outperform the former if a reasonably large data set is available for training.
arXiv Detail & Related papers (2022-08-17T09:07:45Z) - Detection of Parasitic Eggs from Microscopy Images and the emergence of
a new dataset [8.957918272018045]
Automatic detection of parasitic eggs in microscopy images has the potential to increase the efficiency of human experts.
We exploit successful architectures for detection, adapting them to tackle a different domain.
We demonstrate results produced by both a Generative Adversarial Network (GAN) and Faster-RCNN, for image enhancement and object detection.
arXiv Detail & Related papers (2022-03-06T11:44:35Z) - Performance, Successes and Limitations of Deep Learning Semantic
Segmentation of Multiple Defects in Transmission Electron Micrographs [9.237363938772479]
We perform semantic segmentation of defect types in electron microscopy images of irradiated FeCrAl alloys using a deep learning Mask Regional Convolutional Neural Network (Mask R-CNN) model.
We conduct an in-depth analysis of key model performance statistics, with a focus on quantities such as predicted distributions of defect shapes, defect sizes, and defect areal densities.
Overall, we find that the current model is a fast, effective tool for automatically characterizing and quantifying multiple defect types in microscopy images.
arXiv Detail & Related papers (2021-10-15T17:57:59Z) - Vision Transformers for femur fracture classification [59.99241204074268]
The Vision Transformer (ViT) was able to correctly predict 83% of the test images.
Good results were also obtained on sub-fracture classification with the largest and richest dataset available.
arXiv Detail & Related papers (2021-08-07T10:12:42Z) - Parasitic Egg Detection and Classification in Low-cost Microscopic
Images using Transfer Learning [1.6050172226234583]
We propose a CNN-based technique using transfer learning strategy to enhance the efficiency of automatic parasite classification in poor-quality microscopic images.
Our proposed framework outperforms the state-of-the-art object recognition methods.
Our system, combined with a final decision from an expert, may improve real faecal examinations performed with low-cost microscopes.
arXiv Detail & Related papers (2021-07-02T11:05:45Z) - Classification of COVID-19 in CT Scans using Multi-Source Transfer
Learning [91.3755431537592]
We propose the use of Multi-Source Transfer Learning to improve upon traditional Transfer Learning for the classification of COVID-19 from CT scans.
With our multi-source fine-tuning approach, our models outperformed baseline models fine-tuned with ImageNet.
Our best performing model was able to achieve an accuracy of 0.893 and a Recall score of 0.897, outperforming its baseline Recall score by 9.3%.
arXiv Detail & Related papers (2020-09-22T11:53:06Z) - Residual Attention U-Net for Automated Multi-Class Segmentation of
COVID-19 Chest CT Images [46.844349956057776]
Coronavirus disease 2019 (COVID-19) has been spreading rapidly around the world and has caused a significant impact on public health and the economy.
There is still a lack of studies on effectively quantifying the lung infection caused by COVID-19.
We propose a novel deep learning algorithm for automated segmentation of multiple COVID-19 infection regions.
arXiv Detail & Related papers (2020-04-12T16:24:59Z) - An interpretable classifier for high-resolution breast cancer screening
images utilizing weakly supervised localization [45.00998416720726]
We propose a framework to address the unique properties of medical images.
This model first uses a low-capacity, yet memory-efficient, network on the whole image to identify the most informative regions.
It then applies another higher-capacity network to collect details from chosen regions.
Finally, it employs a fusion module that aggregates global and local information to make a final prediction.
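The three stages described above (a cheap global pass to rank regions, a detailed local pass over the selected regions, and a fusion of the two predictions) can be sketched as follows. All function names, the saliency-map input, and the simple weighted averaging are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def select_top_regions(saliency_map, k=3, patch=32):
    """Pick the k most informative patch coordinates from a coarse saliency map
    produced by the low-capacity whole-image network (hypothetical interface)."""
    h, w = saliency_map.shape
    idx = np.argsort(saliency_map.ravel())[-k:]
    # Convert flat indices back to (row, col) pixel offsets at patch granularity.
    return [(int(i // w) * patch, int(i % w) * patch) for i in idx]

def fuse(global_logits, local_logits, alpha=0.5):
    """Aggregate the whole-image prediction with the mean of the per-region
    predictions from the higher-capacity network; alpha is an assumed weight."""
    return alpha * np.asarray(global_logits) + (1 - alpha) * np.mean(local_logits, axis=0)
```

The design choice is a compute/memory trade-off: the full-resolution image is never passed through the expensive network, only the few regions the cheap pass deems informative.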
arXiv Detail & Related papers (2020-02-13T15:28:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.