NephroNet: A Novel Program for Identifying Renal Cell Carcinoma and
Generating Synthetic Training Images with Convolutional Neural Networks and
Diffusion Models
- URL: http://arxiv.org/abs/2302.05830v1
- Date: Sun, 12 Feb 2023 01:17:23 GMT
- Title: NephroNet: A Novel Program for Identifying Renal Cell Carcinoma and
Generating Synthetic Training Images with Convolutional Neural Networks and
Diffusion Models
- Authors: Yashvir Sabharwal
- Abstract summary: Renal cell carcinoma (RCC) is a type of cancer that originates in the kidneys and is the most common type of kidney cancer in adults.
In this study, an artificial intelligence model was developed and trained for classifying different subtypes of RCC using ResNet-18.
A novel synthetic image generation tool, NephroNet, builds on diffusion models to generate original images of RCC surgical resection slides.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Renal cell carcinoma (RCC) is a type of cancer that originates in the kidneys
and is the most common type of kidney cancer in adults. It can be classified
into several subtypes, including clear cell RCC, papillary RCC, and chromophobe
RCC. In this study, an artificial intelligence model was developed and trained
for classifying different subtypes of RCC using ResNet-18, a convolutional
neural network that has been widely used for image classification tasks. The
model was trained on a dataset of RCC histopathology images, which consisted of
digital images of RCC surgical resection slides that were annotated with the
corresponding subtype labels. The performance of the trained model was
evaluated using several metrics, including accuracy, precision, and recall.
Additionally, in this research, a novel synthetic image generation tool,
NephroNet, is developed; it builds on diffusion models to generate original
images of RCC surgical resection slides. Diffusion models are a class of
generative models capable of synthesizing high-quality images from noise.
Several diffusion pipelines, including Stable Diffusion, DreamBooth
text-to-image, and Textual Inversion, were trained on a dataset of RCC images
and used to generate a series of original images resembling RCC surgical
resection slides, all in under four seconds. The generated images
were visually realistic and could be used for creating new training datasets,
testing the performance of image analysis algorithms, and training medical
professionals. NephroNet is provided as an open-source software package and
contains files for data preprocessing, training, and visualization. Overall,
this study demonstrates the potential of artificial intelligence and diffusion
models for classifying and generating RCC images, respectively. These methods
could be useful for improving the diagnosis and treatment of RCC.
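
The classification pipeline described in the abstract can be approximated with a short PyTorch sketch. This is a minimal illustration under stated assumptions, not the authors' released code: the directory layout (rcc_data/train, rcc_data/val), the batch size, learning rate, and epoch count are placeholders; only the use of ResNet-18 and the accuracy/precision/recall metrics come from the abstract.

```python
# Minimal sketch: fine-tune ResNet-18 for RCC subtype classification.
# Paths, batch size, learning rate, and epochs are illustrative assumptions.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms
from sklearn.metrics import accuracy_score, precision_score, recall_score

device = "cuda" if torch.cuda.is_available() else "cpu"

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
train_set = datasets.ImageFolder("rcc_data/train", transform=transform)  # hypothetical layout
val_set = datasets.ImageFolder("rcc_data/val", transform=transform)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)
val_loader = DataLoader(val_set, batch_size=32)

# ImageNet-pretrained ResNet-18 with a new head sized to the RCC subtypes.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))
model = model.to(device)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

for epoch in range(10):
    model.train()
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

# Evaluate with the metrics named in the abstract: accuracy, precision, recall.
model.eval()
preds, targets = [], []
with torch.no_grad():
    for images, labels in val_loader:
        logits = model(images.to(device))
        preds.extend(logits.argmax(dim=1).cpu().tolist())
        targets.extend(labels.tolist())

print("accuracy :", accuracy_score(targets, preds))
print("precision:", precision_score(targets, preds, average="macro"))
print("recall   :", recall_score(targets, preds, average="macro"))
```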
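
The generation side can likewise be sketched with the Hugging Face diffusers library. The checkpoint path below is a hypothetical placeholder for a Stable Diffusion model adapted to RCC slides (for example via DreamBooth or Textual Inversion fine-tuning); the prompt, sampler settings, and output file names are assumptions, not values taken from the paper.

```python
# Minimal sketch: sample synthetic RCC-slide-style images from a diffusion
# pipeline. "path/to/rcc-finetuned-sd" is a hypothetical fine-tuned checkpoint.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "path/to/rcc-finetuned-sd",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

prompt = "H&E-stained histopathology slide of clear cell renal cell carcinoma"
result = pipe(prompt, num_images_per_prompt=4, num_inference_steps=25)

for i, image in enumerate(result.images):
    image.save(f"synthetic_rcc_{i}.png")
```

With a reduced number of inference steps on a single GPU, per-image latency in the few-second range mentioned in the abstract is plausible.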
Related papers
- SurgicaL-CD: Generating Surgical Images via Unpaired Image Translation with Latent Consistency Diffusion Models [1.6189876649941652]
We introduce SurgicaL-CD, a consistency-distilled diffusion method to generate realistic surgical images.
Our results demonstrate that our method outperforms GANs and diffusion-based approaches.
arXiv Detail & Related papers (2024-08-19T09:19:25Z)
- Learning Low-Rank Feature for Thorax Disease Classification [7.447448767095787]
We study thorax disease classification in this paper.
Effective extraction of features for the disease areas is crucial for disease classification on radiographic images.
We propose a novel Low-Rank Feature Learning (LRFL) method in this paper.
arXiv Detail & Related papers (2024-02-14T15:35:56Z)
- Improving Classification of Retinal Fundus Image Using Flow Dynamics Optimized Deep Learning Methods [0.0]
Diabetic Retinopathy (DR) is a complication of diabetes mellitus that damages the blood vessel network in the retina.
Diagnosing DR from color fundus images can take time because experienced clinicians are required to identify the lesions in the imagery that indicate the illness.
arXiv Detail & Related papers (2023-04-29T16:11:34Z)
- Performance of GAN-based augmentation for deep learning COVID-19 image classification [57.1795052451257]
The biggest challenge in the application of deep learning to the medical domain is the availability of training data.
Data augmentation is a typical methodology used in machine learning when confronted with a limited data set.
In this work, a StyleGAN2-ADA generative adversarial network is trained on a limited COVID-19 chest X-ray image set.
arXiv Detail & Related papers (2023-04-18T15:39:58Z)
- Cross-modulated Few-shot Image Generation for Colorectal Tissue Classification [58.147396879490124]
Our few-shot generation method, named XM-GAN, takes one base and a pair of reference tissue images as input and generates high-quality yet diverse images.
To the best of our knowledge, we are the first to investigate few-shot generation in colorectal tissue images.
arXiv Detail & Related papers (2023-04-04T17:50:30Z)
- Traditional Classification Neural Networks are Good Generators: They are Competitive with DDPMs and GANs [104.72108627191041]
We show that conventional neural network classifiers can generate high-quality images comparable to state-of-the-art generative models.
We propose a mask-based reconstruction module that makes the gradients semantics-aware in order to synthesize plausible images.
We show that our method is also applicable to text-to-image generation when combined with image-text foundation models.
arXiv Detail & Related papers (2022-11-27T11:25:35Z)
- RoentGen: Vision-Language Foundation Model for Chest X-ray Generation [7.618389245539657]
We develop a strategy to overcome the large natural-medical distributional shift by adapting a pre-trained latent diffusion model on a corpus of publicly available chest x-rays.
We investigate the model's ability to generate high-fidelity, diverse synthetic CXR conditioned on text prompts.
We present evidence that the resulting model (RoentGen) is able to create visually convincing, diverse synthetic CXR images.
arXiv Detail & Related papers (2022-11-23T06:58:09Z)
- Generative Adversarial U-Net for Domain-free Medical Image Augmentation [49.72048151146307]
The shortage of annotated medical images is one of the biggest challenges in the field of medical image computing.
In this paper, we develop a novel generative method named generative adversarial U-Net.
Our newly designed model is domain-free and generalizable to various medical images.
arXiv Detail & Related papers (2021-01-12T23:02:26Z)
- SAG-GAN: Semi-Supervised Attention-Guided GANs for Data Augmentation on Medical Images [47.35184075381965]
We present a data augmentation method for generating synthetic medical images using cycle-consistent Generative Adversarial Networks (GANs).
The proposed GANs-based model can generate a tumor image from a normal image, and in turn, it can also generate a normal image from a tumor image.
We train one classification model on real images with classic data augmentation methods and another on synthetic images.
arXiv Detail & Related papers (2020-11-15T14:01:24Z)
- Classification of COVID-19 in CT Scans using Multi-Source Transfer Learning [91.3755431537592]
We propose the use of Multi-Source Transfer Learning to improve upon traditional Transfer Learning for the classification of COVID-19 from CT scans.
With our multi-source fine-tuning approach, our models outperformed baseline models fine-tuned with ImageNet.
Our best performing model was able to achieve an accuracy of 0.893 and a Recall score of 0.897, outperforming its baseline Recall score by 9.3%.
arXiv Detail & Related papers (2020-09-22T11:53:06Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.