DeFungi: Direct Mycological Examination of Microscopic Fungi Images
- URL: http://arxiv.org/abs/2109.07322v1
- Date: Wed, 15 Sep 2021 14:25:28 GMT
- Title: DeFungi: Direct Mycological Examination of Microscopic Fungi Images
- Authors: Camilo Javier Pineda Sopo, Farshid Hajati, Soheila Gheisari
- Abstract summary: This paper presents experimental results classifying five fungi types using two different deep learning approaches and three different convolutional neural network models.
The best-performing model trained from scratch was Inception V3, reporting 73.2% accuracy, while the best transfer-learning model, VGG16, reached 85.04%.
The resulting dataset is published on Kaggle and GitHub to foster future research.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Traditionally, diagnosis and treatment of fungal infections in humans depend
heavily on face-to-face consultations or examinations made by specialized
laboratory scientists known as mycologists. In many cases, such as the recent
mucormycosis spread in the COVID-19 pandemic, an initial treatment can be
safely suggested to the patient during the earliest stage of the mycological
diagnostic process by performing a direct examination of biopsies or samples
through a microscope. Computer-aided diagnosis systems using deep learning
models have been trained and used for the late mycological diagnostic stages.
However, there are no reference literature works made for the early stages. A
mycological laboratory in Colombia donated the images used for the development
of this research work. They were manually labelled into five classes and
curated with the assistance of a subject matter expert. The images were later cropped
and patched with automated code routines to produce the final dataset. This
paper presents experimental results classifying five fungi types using two
different deep learning approaches and three different convolutional neural
network models, VGG16, Inception V3, and ResNet50. The first approach
benchmarks the classification performance for the models trained from scratch,
while the second approach benchmarks the classification performance using
pre-trained models based on the ImageNet dataset. Using k-fold cross-validation
testing on the 5-class dataset, the best-performing model trained from scratch
was Inception V3, reporting 73.2% accuracy. The best-performing model using
transfer learning was VGG16, reporting 85.04% accuracy. The statistics provided by
the two approaches create an initial point of reference to encourage future
research aimed at improving classification performance. Furthermore, the resulting
dataset is published on Kaggle and GitHub to foster future research.
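As a concrete illustration of the second approach described above, the following is a minimal, hedged sketch of fine-tuning an ImageNet-pretrained VGG16 backbone with a new 5-class head, evaluated by stratified k-fold cross-validation. The image size, classification head, optimiser settings, and training schedule are illustrative assumptions rather than the authors' exact configuration.

```python
# Hedged sketch: transfer learning with an ImageNet-pretrained VGG16 for a
# 5-class fungi dataset, evaluated with stratified k-fold cross-validation.
# Hyperparameters and the head architecture are assumptions, not the paper's.
import numpy as np
import tensorflow as tf
from sklearn.model_selection import StratifiedKFold

NUM_CLASSES = 5
IMG_SIZE = (224, 224)

def build_vgg16_transfer():
    """VGG16 convolutional base (frozen) plus a small trainable classification head."""
    base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                       input_shape=IMG_SIZE + (3,))
    base.trainable = False  # keep the ImageNet features fixed; train only the head
    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(256, activation="relu"),
        tf.keras.layers.Dropout(0.5),
        tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

def cross_validate(images, labels, k=5, epochs=10):
    """images: (N, 224, 224, 3) RGB array in [0, 255]; labels: (N,) ints in [0, 5)."""
    x = tf.keras.applications.vgg16.preprocess_input(np.array(images, dtype="float32"))
    y = np.asarray(labels)
    accs = []
    for train_idx, test_idx in StratifiedKFold(n_splits=k, shuffle=True,
                                               random_state=42).split(x, y):
        model = build_vgg16_transfer()
        model.fit(x[train_idx], y[train_idx], epochs=epochs, batch_size=32, verbose=0)
        _, acc = model.evaluate(x[test_idx], y[test_idx], verbose=0)
        accs.append(acc)
    return float(np.mean(accs))  # mean fold accuracy
```

The first approach (training from scratch) would amount to passing weights=None and setting base.trainable = True before compiling.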
Related papers
- Computational Pathology at Health System Scale -- Self-Supervised Foundation Models from Three Billion Images [30.618749295623363]
This project aims to train the largest academic foundation model and benchmark the most prominent self-supervised learning algorithms by pre-training.
We collected the largest pathology dataset to date, consisting of over 3 billion images from over 423 thousand microscopy slides.
Our results demonstrate that pre-training on pathology data is beneficial for downstream performance compared to pre-training on natural images.
arXiv Detail & Related papers (2023-10-10T21:40:19Z)
- LVM-Med: Learning Large-Scale Self-Supervised Vision Models for Medical Imaging via Second-order Graph Matching [59.01894976615714]
We introduce LVM-Med, the first family of deep networks trained on large-scale medical datasets.
We have collected approximately 1.3 million medical images from 55 publicly available datasets.
LVM-Med empirically outperforms a number of state-of-the-art supervised, self-supervised, and foundation models.
arXiv Detail & Related papers (2023-06-20T22:21:34Z)
- MedFMC: A Real-world Dataset and Benchmark For Foundation Model Adaptation in Medical Image Classification [41.16626194300303]
Foundation models, often pre-trained with large-scale data, have achieved paramount success in jump-starting various vision and language applications.
Recent advances further enable adapting foundation models in downstream tasks efficiently using only a few training samples.
Yet, the application of such learning paradigms in medical image analysis remains scarce due to the shortage of publicly accessible data and benchmarks.
arXiv Detail & Related papers (2023-06-16T01:46:07Z)
- Performance of GAN-based augmentation for deep learning COVID-19 image classification [57.1795052451257]
The biggest challenge in applying deep learning to the medical domain is the limited availability of training data.
Data augmentation is a typical methodology used in machine learning when confronted with a limited data set.
In this work, a StyleGAN2-ADA generative adversarial network is trained on the limited COVID-19 chest X-ray image set and used to synthesise additional training images (a minimal augmentation sketch appears after this list).
arXiv Detail & Related papers (2023-04-18T15:39:58Z)
- Significantly improving zero-shot X-ray pathology classification via fine-tuning pre-trained image-text encoders [50.689585476660554]
We propose a new fine-tuning strategy that includes positive-pair loss relaxation and random sentence sampling.
Our approach consistently improves overall zero-shot pathology classification across four chest X-ray datasets and three pre-trained models.
arXiv Detail & Related papers (2022-12-14T06:04:18Z)
- Application of Transfer Learning and Ensemble Learning in Image-level Classification for Breast Histopathology [9.037868656840736]
In Computer-Aided Diagnosis (CAD), traditional classification models mostly use a single network to extract features.
This paper proposes a deep ensemble model based on image-level labels for the binary classification of benign and malignant lesions.
Result: in the ensemble model weighted by each member's accuracy, image-level binary classification achieves 98.90% accuracy (a minimal weighted-ensemble sketch appears after this list).
arXiv Detail & Related papers (2022-04-18T13:31:53Z)
- Ensemble of CNN classifiers using Sugeno Fuzzy Integral Technique for Cervical Cytology Image Classification [1.6986898305640261]
We propose a fully automated computer-aided diagnosis tool for classifying single-cell and slide images of cervical cancer.
We use the Sugeno Fuzzy Integral to ensemble the decision scores from three popular deep learning models, namely Inception v3, DenseNet-161, and ResNet-34 (a hedged Sugeno-integral sketch appears after this list).
arXiv Detail & Related papers (2021-08-21T08:41:41Z)
- A multi-stage machine learning model on diagnosis of esophageal manometry [50.591267188664666]
The framework includes deep-learning models at the swallow-level stage and feature-based machine learning models at the study-level stage (a two-stage pipeline sketch appears after this list).
This is the first artificial-intelligence-based model to automatically predict the Chicago Classification (CC) diagnosis of a high-resolution manometry (HRM) study from raw multi-swallow data.
arXiv Detail & Related papers (2021-06-25T20:09:23Z)
- A Deep Learning Study on Osteosarcoma Detection from Histological Images [6.341765152919201]
The most common type of primary malignant bone tumor is osteosarcoma.
CNNs can significantly decrease surgeons' workload and improve the prognosis of patient conditions.
CNNs need to be trained on a large amount of data in order to achieve trustworthy performance.
arXiv Detail & Related papers (2020-11-02T18:16:17Z)
- Select-ProtoNet: Learning to Select for Few-Shot Disease Subtype Prediction [55.94378672172967]
We focus on the few-shot disease subtype prediction problem, identifying subgroups of similar patients.
We introduce meta learning techniques to develop a new model, which can extract the common experience or knowledge from interrelated clinical tasks.
Our new model is built upon a carefully designed meta-learner, the Prototypical Network, a simple yet effective meta-learning method for few-shot image classification (a minimal prototype-classification sketch appears after this list).
arXiv Detail & Related papers (2020-09-02T02:50:30Z)
- Retinopathy of Prematurity Stage Diagnosis Using Object Segmentation and Convolutional Neural Networks [68.96150598294072]
Retinopathy of Prematurity (ROP) is an eye disorder primarily affecting premature infants with lower weights.
It causes proliferation of vessels in the retina and could result in vision loss and, eventually, retinal detachment, leading to blindness.
In recent years, there has been a significant effort to automate the diagnosis using deep learning.
This paper builds upon the success of previous models and develops a novel architecture that combines object segmentation and convolutional neural networks (CNNs).
Our proposed system first trains an object segmentation model to identify the demarcation line at a pixel level and adds the resulting mask as an additional "color" channel of the input image before classification (a minimal sketch of this step appears after this list).
arXiv Detail & Related papers (2020-04-03T14:07:41Z)
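The sketches below are minimal, hedged illustrations of techniques named in the related-paper summaries above; names, signatures, and hyperparameters are assumptions, not the original authors' code. First, for the GAN-based augmentation entry, synthetic images drawn from a trained generator are appended to the limited real training set:

```python
# Hedged sketch: augmenting a small image set with GAN samples. `generator` is
# assumed to be a StyleGAN2-ADA generator already trained on the limited data;
# its call signature here is a placeholder, not the real StyleGAN2-ADA API.
import numpy as np

def augment_with_gan(x_real, y_real, generator, n_synth, synth_label, latent_dim=512, seed=0):
    """Append n_synth generated images (all assigned synth_label) to the real data."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal((n_synth, latent_dim)).astype("float32")
    x_synth = generator(z)                      # assumed: maps latent vectors to images
    y_synth = np.full(n_synth, synth_label)
    return (np.concatenate([x_real, x_synth], axis=0),
            np.concatenate([y_real, y_synth], axis=0))
```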
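For the breast histopathology entry, a minimal sketch of an ensemble that weights each member's class probabilities by its accuracy (the paper's exact weighting scheme may differ):

```python
# Hedged sketch: accuracy-weighted soft-voting ensemble.
import numpy as np

def accuracy_weighted_ensemble(prob_list, accuracies):
    """prob_list: list of (n_samples, n_classes) probability arrays, one per model.
    accuracies: per-model accuracies used as ensemble weights."""
    w = np.asarray(accuracies, dtype=float)
    w = w / w.sum()                              # normalise the weights
    fused = sum(wi * p for wi, p in zip(w, prob_list))
    return fused.argmax(axis=1)                  # predicted class per sample
```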
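For the cervical cytology entry, a hedged sketch of Sugeno fuzzy integral fusion of per-model decision scores, using a standard lambda fuzzy measure built from per-model densities (for example, validation accuracies); the original paper's exact measure construction may differ:

```python
# Hedged sketch: Sugeno fuzzy integral fusion of decision scores from several models.
import numpy as np

def solve_lambda(densities, iters=200):
    """Find the nonzero root of prod(1 + lam*g_i) = 1 + lam for the lambda fuzzy measure."""
    g = np.asarray(densities, dtype=float)
    if abs(g.sum() - 1.0) < 1e-9:
        return 0.0                               # densities are already additive
    f = lambda lam: np.prod(1.0 + lam * g) - (1.0 + lam)
    if g.sum() > 1.0:
        lo, hi = -1.0 + 1e-9, -1e-9              # root lies in (-1, 0)
    else:
        lo, hi = 1e-9, 1.0                       # root lies in (0, inf)
        while f(hi) < 0:
            hi *= 2.0
    for _ in range(iters):                       # plain bisection
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def sugeno_integral(scores, densities, lam):
    """Fuse one class's scores (one per model) under the lambda fuzzy measure."""
    order = np.argsort(scores)[::-1]             # models sorted by descending score
    measure, fused = 0.0, 0.0
    for s, gi in zip(np.asarray(scores, dtype=float)[order],
                     np.asarray(densities, dtype=float)[order]):
        measure = measure + gi + lam * measure * gi   # g(A union {x_i})
        fused = max(fused, min(s, measure))
    return fused

def fuse_predictions(prob_list, densities):
    """prob_list: list of (n_samples, n_classes) arrays. Returns fused class labels."""
    probs = np.stack(prob_list)                  # (n_models, n_samples, n_classes)
    lam = solve_lambda(densities)
    _, n_samples, n_classes = probs.shape
    fused = np.array([[sugeno_integral(probs[:, i, c], densities, lam)
                       for c in range(n_classes)] for i in range(n_samples)])
    return fused.argmax(axis=1)
```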
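For the esophageal manometry entry, a hedged sketch of the two-stage pattern: per-swallow probabilities from a deep model are pooled into study-level features for a classical classifier (the paper's actual features and classifier may differ):

```python
# Hedged sketch: pool swallow-level model outputs into study-level features,
# then fit a feature-based classifier on study-level labels.
import numpy as np
from sklearn.linear_model import LogisticRegression

def study_features(swallow_probs):
    """swallow_probs: (n_swallows, n_classes) per-swallow probabilities for one study."""
    return np.concatenate([swallow_probs.mean(axis=0), swallow_probs.max(axis=0)])

def fit_study_classifier(studies, study_labels):
    """studies: list of per-study (n_swallows_i, n_classes) arrays; labels: diagnoses."""
    X = np.stack([study_features(s) for s in studies])
    return LogisticRegression(max_iter=1000).fit(X, study_labels)
```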
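For the few-shot disease subtype entry, a minimal sketch of the Prototypical Network classification step: class prototypes are the mean support embeddings and each query is assigned to the nearest prototype (the embedding network itself is assumed and omitted):

```python
# Hedged sketch: prototype computation and nearest-prototype classification.
import numpy as np

def class_prototypes(support_emb, support_labels, n_classes):
    """support_emb: (n_support, d) embeddings. Returns (n_classes, d) class means."""
    return np.stack([support_emb[support_labels == c].mean(axis=0)
                     for c in range(n_classes)])

def classify_queries(query_emb, prototypes):
    """Assign each query embedding to the class of its nearest prototype."""
    d2 = ((query_emb[:, None, :] - prototypes[None, :, :]) ** 2).sum(axis=-1)
    return d2.argmin(axis=1)                     # equivalent to argmax of softmax(-d2)
```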
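Finally, for the retinopathy of prematurity entry, a minimal sketch of stacking the predicted segmentation mask onto the RGB image as a fourth input channel before classification:

```python
# Hedged sketch: append a predicted mask as an extra "color" channel.
import numpy as np

def add_mask_channel(rgb_images, masks):
    """rgb_images: (N, H, W, 3); masks: (N, H, W) in [0, 1]. Returns (N, H, W, 4)."""
    return np.concatenate([rgb_images, masks[..., None]], axis=-1)
```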