SynthVision -- Harnessing Minimal Input for Maximal Output in Computer
Vision Models using Synthetic Image data
- URL: http://arxiv.org/abs/2402.02826v1
- Date: Mon, 5 Feb 2024 09:18:49 GMT
- Title: SynthVision -- Harnessing Minimal Input for Maximal Output in Computer
Vision Models using Synthetic Image data
- Authors: Yudara Kularathne, Prathapa Janitha, Sithira Ambepitiya, Thanveer
Ahamed, Dinuka Wijesundara, Prarththanan Sothyrajah
- Abstract summary: We build a comprehensive computer vision model for detecting Human Papilloma Virus Genital warts using only synthetic data.
The model achieved an F1 Score of 96% for HPV cases and 97% for normal cases.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Rapid development of disease detection computer vision models is vital in
response to urgent medical crises like epidemics or events of bioterrorism.
However, traditional data-gathering methods are too slow for these scenarios,
necessitating innovative approaches that can generate reliable models quickly from
minimal data. We demonstrate our new approach by building a comprehensive
minimal data. We demonstrate our new approach by building a comprehensive
computer vision model for detecting Human Papilloma Virus Genital warts using
only synthetic data. In our study, we employed a two-phase experimental design
using diffusion models. In the first phase, diffusion models were used to
generate a large number of diverse synthetic images from 10 HPV guide images,
explicitly focusing on accurately depicting genital warts. The second phase
involved training and testing a vision model on this synthetic dataset.
This method aimed to assess the effectiveness of diffusion models in rapidly
generating high-quality training data and the subsequent impact on the vision
model's performance in medical image recognition. The study findings revealed
significant insights into the performance of the vision model trained on
synthetic images generated through diffusion models. The vision model showed
exceptional performance in accurately identifying cases of genital warts,
achieving an accuracy of 96% and underscoring its effectiveness in medical
image classification. For HPV cases, the model demonstrated a high precision of
99% and a recall of 94%; for normal cases, the precision was 95% with an
impressive recall of 99%. These metrics indicate the model's ability to
correctly identify true positives while minimizing false positives. The model
achieved an F1 score of 96% for HPV cases and 97% for normal cases. The high F1
scores across both categories highlight the balanced nature of the model's
precision and recall, ensuring reliability and robustness in its predictions.
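The abstract does not name the specific diffusion model or classifier architecture used. The sketch below is a minimal, hypothetical illustration of the two-phase design, assuming a Stable Diffusion image-to-image pipeline (via Hugging Face diffusers) for Phase 1 and a fine-tuned ResNet-50 binary classifier (via torchvision) for Phase 2; the model names, prompt, strength, and training hyperparameters are placeholders, not the authors' actual configuration.

```python
# Hypothetical two-phase sketch: diffusion-based image synthesis from a few
# guide images, then classifier training on the synthetic set. Not the
# authors' exact setup.
from pathlib import Path

import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# ---- Phase 1: expand a handful of guide images into a large synthetic set ----
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

guide_dir, synth_dir = Path("guides/hpv"), Path("synthetic/hpv")
synth_dir.mkdir(parents=True, exist_ok=True)

for guide_path in guide_dir.glob("*.png"):
    guide = Image.open(guide_path).convert("RGB").resize((512, 512))
    # Moderate strength varies appearance while keeping the lesion morphology.
    result = pipe(
        prompt="clinical photograph of genital warts, realistic skin texture",
        image=guide,
        strength=0.6,
        guidance_scale=7.5,
        num_images_per_prompt=8,
    )
    for i, img in enumerate(result.images):
        img.save(synth_dir / f"{guide_path.stem}_{i}.png")
# The same loop would be repeated for normal-skin guide images, written to
# synthetic/normal, to populate the second class.

# ---- Phase 2: train a binary classifier (HPV vs. normal) on synthetic data ----
tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_ds = datasets.ImageFolder("synthetic", transform=tfm)  # one folder per class
train_dl = DataLoader(train_ds, batch_size=32, shuffle=True)

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = torch.nn.Linear(model.fc.in_features, 2)  # HPV vs. normal
model = model.to("cuda")

opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss_fn = torch.nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for x, y in train_dl:
        x, y = x.to("cuda"), y.to("cuda")
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
```

As a quick consistency check, the reported per-class F1 scores follow from the stated precision and recall via F1 = 2PR/(P+R): 2(0.99)(0.94)/(0.99+0.94) ≈ 0.96 for HPV cases and 2(0.95)(0.99)/(0.95+0.99) ≈ 0.97 for normal cases.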
Related papers
- Brain Tumor Classification on MRI in Light of Molecular Markers [61.77272414423481]
Co-deletion of the 1p/19q gene is associated with clinical outcomes in low-grade gliomas.
This study aims to utilize a specialized MRI-based convolutional neural network for brain cancer detection.
arXiv Detail & Related papers (2024-09-29T07:04:26Z) - Comparative Performance Analysis of Transformer-Based Pre-Trained Models for Detecting Keratoconus Disease [0.0]
This study compares eight pre-trained CNNs for diagnosing keratoconus, a degenerative eye disease.
MobileNetV2 was the most accurate model at identifying keratoconus and normal cases, with few misclassifications.
arXiv Detail & Related papers (2024-08-16T20:15:24Z) - Mpox Detection Advanced: Rapid Epidemic Response Through Synthetic Data [0.0]
This study introduces a novel approach by constructing a comprehensive computer vision model to detect Mpox lesions using only synthetic data.
We trained and tested a vision model with this synthetic dataset to evaluate the diffusion models' efficacy in producing high-quality training data.
The results were promising; the vision model achieved a 97% accuracy rate, with 96% precision and recall for Mpox cases.
arXiv Detail & Related papers (2024-07-25T04:33:19Z) - Incorporating Improved Sinusoidal Threshold-based Semi-supervised Method
and Diffusion Models for Osteoporosis Diagnosis [0.43512163406552007]
Osteoporosis is a common skeletal disease that seriously affects patients' quality of life.
Traditional osteoporosis diagnosis methods are expensive and complex.
The proposed method can automatically diagnose osteoporosis from patients' imaging data, offering the advantages of convenience, accuracy, and low cost.
arXiv Detail & Related papers (2024-03-11T08:11:46Z) - Symptom-based Machine Learning Models for the Early Detection of
COVID-19: A Narrative Review [0.0]
Machine learning models can analyze large datasets, incorporating patient-reported symptoms, clinical data, and medical imaging.
In this paper, we provide an overview of the landscape of symptom-only machine learning models for predicting COVID-19, including their performance and limitations.
The review will also examine the performance of symptom-based models when compared to image-based models.
arXiv Detail & Related papers (2023-12-08T01:41:42Z) - Towards a Transportable Causal Network Model Based on Observational
Healthcare Data [1.333879175460266]
We propose a novel approach that combines selection diagrams, missingness graphs, causal discovery and prior knowledge into a single graphical model.
We learn this model from data comprising two different cohorts of patients.
The resulting causal network model is validated by expert clinicians in terms of risk assessment, accuracy and explainability.
arXiv Detail & Related papers (2023-11-13T13:23:31Z) - Performance or Trust? Why Not Both. Deep AUC Maximization with
Self-Supervised Learning for COVID-19 Chest X-ray Classifications [72.52228843498193]
In training deep learning models, a compromise often must be made between performance and trust.
In this work, we integrate a new surrogate loss with self-supervised learning for computer-aided screening of COVID-19 patients.
arXiv Detail & Related papers (2021-12-14T21:16:52Z) - On the explainability of hospitalization prediction on a large COVID-19
patient dataset [45.82374977939355]
We develop various AI models to predict hospitalization on a large (over 110k) cohort of COVID-19 positive-tested US patients.
Despite highly unbalanced data, the models reach an average precision of 0.96-0.98 (0.75-0.85), recall of 0.96-0.98 (0.74-0.85), and F1 score of 0.97-0.98 (0.79-0.83) on the non-hospitalized (hospitalized) class.
arXiv Detail & Related papers (2021-10-28T10:23:38Z) - BERTHop: An Effective Vision-and-Language Model for Chest X-ray Disease
Diagnosis [42.917164607812886]
Vision-and-language (V&L) models take an image and text as input and learn to capture the associations between them.
BERTHop is a transformer-based model built on PixelHop++ and VisualBERT that better captures the associations between the two modalities.
arXiv Detail & Related papers (2021-08-10T21:51:25Z) - Many-to-One Distribution Learning and K-Nearest Neighbor Smoothing for
Thoracic Disease Identification [83.6017225363714]
Deep learning has become the most powerful computer-aided diagnosis technology for improving disease identification performance.
For chest X-ray imaging, annotating large-scale data requires professional domain knowledge and is time-consuming.
In this paper, we propose many-to-one distribution learning (MODL) and K-nearest neighbor smoothing (KNNS) methods to improve a single model's disease identification performance.
arXiv Detail & Related papers (2021-02-26T02:29:30Z) - UNITE: Uncertainty-based Health Risk Prediction Leveraging Multi-sourced
Data [81.00385374948125]
We present the UNcertaInTy-based hEalth risk prediction (UNITE) model.
UNITE provides accurate disease risk prediction and uncertainty estimation leveraging multi-sourced health data.
We evaluate UNITE on real-world disease risk prediction tasks: nonalcoholic fatty liver disease (NASH) and Alzheimer's disease (AD).
UNITE achieves up to 0.841 in F1 score for AD detection, up to 0.609 in PR-AUC for NASH detection, and outperforms various state-of-the-art baselines by up to 19% over the best baseline.
arXiv Detail & Related papers (2020-10-22T02:28:11Z)