Feature Fusion of Raman Chemical Imaging and Digital Histopathology
using Machine Learning for Prostate Cancer Detection
- URL: http://arxiv.org/abs/2101.07342v1
- Date: Mon, 18 Jan 2021 22:11:42 GMT
- Authors: Trevor Doherty, Susan McKeever, Nebras Al-Attar, Tiarnan Murphy,
Claudia Aura, Arman Rahman, Amanda O'Neill, Stephen P Finn, Elaine Kay,
William M. Gallagher, R. William G. Watson, Aoife Gowen and Patrick Jackman
- Abstract summary: This study uses multimodal images formed from stained Digital Histopathology (DP) and unstained Raman Chemical Imaging (RCI). The hypothesis tested was whether multimodal image models can outperform single-modality baseline models in terms of diagnostic accuracy.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The diagnosis of prostate cancer is challenging due to the heterogeneity of
its presentations, leading to the overdiagnosis and treatment of
clinically insignificant disease. Accurate diagnosis can directly benefit a
patient's quality of life and prognosis. Towards addressing this issue, we
present a learning model for the automatic identification of prostate cancer.
While many prostate cancer studies have adopted Raman spectroscopy approaches,
none have utilised the combination of Raman Chemical Imaging (RCI) and other
imaging modalities. This study uses multimodal images formed from stained
Digital Histopathology (DP) and unstained RCI. The approach was developed and
tested on a set of 178 clinical samples from 32 patients, containing a range of
non-cancerous, Gleason grade 3 (G3) and grade 4 (G4) tissue microarray samples.
For each histological sample, there is a pathologist-labelled DP-RCI image
pair. The hypothesis tested was whether multimodal image models can outperform
single modality baseline models in terms of diagnostic accuracy. Binary
non-cancer/cancer models and the more challenging G3/G4 differentiation were
investigated. Regarding G3/G4 classification, the multimodal approach achieved
a sensitivity of 73.8% and specificity of 88.1% while the baseline DP model
showed a sensitivity and specificity of 54.1% and 84.7% respectively. The
multimodal approach demonstrated a statistically significant 12.7% AUC
advantage over the baseline with a value of 85.8% compared to 73.1%, also
outperforming models based solely on RCI and median Raman spectra. Feature
fusion of DP and RCI does not improve the more trivial task of tumour
identification but does deliver an observed advantage in G3/G4 discrimination.
Building on these promising findings, future work could include the acquisition
of larger datasets for enhanced model generalization.
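The core technique above is feature-level fusion: feature vectors extracted from each modality are concatenated before classification, and the fused model is compared with single-modality baselines via sensitivity, specificity, and AUC. A minimal sketch of that workflow, using synthetic data and a random-forest classifier as illustrative stand-ins (the paper's actual features, classifier, and evaluation protocol are not reproduced here):

```python
# Hedged sketch of feature-level (early) fusion for two imaging modalities.
# The feature dimensions, classifier, and random data below are illustrative
# assumptions, not the study's actual pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix, roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 178                                   # number of samples, as in the study
dp_feats = rng.normal(size=(n, 32))       # hypothetical DP feature vectors
rci_feats = rng.normal(size=(n, 16))      # hypothetical RCI feature vectors
labels = rng.integers(0, 2, size=n)       # synthetic labels (e.g. 0 = G3, 1 = G4)

# Feature fusion: concatenate per-sample feature vectors from both modalities
fused = np.hstack([dp_feats, rci_feats])
X_tr, X_te, y_tr, y_te = train_test_split(fused, labels, random_state=0)

clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
scores = clf.predict_proba(X_te)[:, 1]
auc = roc_auc_score(y_te, scores)         # metric used to compare modalities

tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
```

In practice each baseline (DP-only, RCI-only, median Raman spectra) would be trained on its own feature block and compared against the fused model on the same held-out samples.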
Related papers
- Multimodal MRI-Ultrasound AI for Prostate Cancer Detection Outperforms Radiologist MRI Interpretation: A Multi-Center Study [2.493694664727448]
Pre-biopsy magnetic resonance imaging (MRI) is increasingly used to target suspicious prostate lesions.
MRI-detected lesions must still be mapped to transrectal ultrasound (TRUS) images during biopsy, which results in missing clinically significant prostate cancer (CsPCa).
This study systematically evaluates a multimodal AI framework integrating MRI and TRUS image sequences to enhance CsPCa identification.
arXiv Detail & Related papers (2025-01-31T20:04:20Z)
- Cancer-Net PCa-Seg: Benchmarking Deep Learning Models for Prostate Cancer Segmentation Using Synthetic Correlated Diffusion Imaging [65.83291923029985]
Prostate cancer (PCa) is the most prevalent cancer among men in the United States, accounting for nearly 300,000 cases, 29% of all diagnoses and 35,000 total deaths in 2024.
Traditional screening methods such as prostate-specific antigen (PSA) testing and magnetic resonance imaging (MRI) have been pivotal in diagnosis, but have faced limitations in specificity and generalizability.
We employ several state-of-the-art deep learning models, including U-Net, SegResNet, Swin UNETR, Attention U-Net, and LightM-UNet, to segment PCa lesions from a 200-sample CDI$s$ dataset.
arXiv Detail & Related papers (2025-01-15T22:23:41Z)
- A Knowledge-enhanced Pathology Vision-language Foundation Model for Cancer Diagnosis [58.85247337449624]
We propose a knowledge-enhanced vision-language pre-training approach that integrates disease knowledge into the alignment within hierarchical semantic groups.
KEEP achieves state-of-the-art performance in zero-shot cancer diagnostic tasks.
arXiv Detail & Related papers (2024-12-17T17:45:21Z)
- Optimizing Synthetic Correlated Diffusion Imaging for Breast Cancer Tumour Delineation [71.91773485443125]
We show that the best AUC is achieved by the CDI$s$-optimized modality, outperforming the best gold-standard modality by 0.0044.
Notably, the optimized CDI$s$ modality also achieves AUC values over 0.02 higher than the unoptimized CDI$s$ value.
arXiv Detail & Related papers (2024-05-13T16:07:58Z) - Improving Breast Cancer Grade Prediction with Multiparametric MRI Created Using Optimized Synthetic Correlated Diffusion Imaging [71.91773485443125]
Grading plays a vital role in breast cancer treatment planning.
The current tumor grading method involves extracting tissue from patients, leading to stress, discomfort, and high medical costs.
This paper examines using optimized CDI$s$ to improve breast cancer grade prediction.
arXiv Detail & Related papers (2024-05-13T15:48:26Z) - Applications of artificial intelligence in the analysis of histopathology images of gliomas: a review [0.33999813472511115]
This review examines 83 publicly available research studies that have proposed AI-based methods for whole-slide histopathology images of human gliomas.
The focus of current research is the assessment of hematoxylin and eosin-stained tissue sections of adult-type diffuse gliomas.
So far, AI-based methods have achieved promising results, but are not yet used in real clinical settings.
arXiv Detail & Related papers (2024-01-26T17:29:01Z) - Diagnosing Bipolar Disorder from 3-D Structural Magnetic Resonance
Images Using a Hybrid GAN-CNN Method [0.0]
This study proposes a hybrid GAN-CNN model to diagnose Bipolar Disorder (BD) from 3-D structural MRI images (sMRI).
Based on the results, this study obtains an accuracy rate of 75.8%, a sensitivity of 60.3%, and a specificity of 82.5%, which are 3-5% higher than prior work.
arXiv Detail & Related papers (2023-10-11T10:17:41Z) - Developing a Novel Image Marker to Predict the Clinical Outcome of Neoadjuvant Chemotherapy (NACT) for Ovarian Cancer Patients [1.7623658472574557]
Neoadjuvant chemotherapy (NACT) is one treatment option for patients with advanced-stage ovarian cancer.
Partial responses to NACT may lead to suboptimal debulking surgery, which results in an adverse prognosis.
We developed a novel image marker for high-accuracy, early-stage prediction of NACT outcome.
arXiv Detail & Related papers (2023-09-13T16:59:50Z)
- Towards More Transparent and Accurate Cancer Diagnosis with an Unsupervised CAE Approach [1.6704594205447996]
Digital pathology has revolutionized cancer diagnosis by leveraging Content-Based Medical Image Retrieval (CBMIR).
UCBMIR replicates the traditional cancer diagnosis workflow, offering a dependable method to support pathologists in WSI-based diagnostic conclusions.
arXiv Detail & Related papers (2023-05-19T15:04:16Z)
- Multi-Scale Hybrid Vision Transformer for Learning Gastric Histology: AI-Based Decision Support System for Gastric Cancer Treatment [50.89811515036067]
Gastric endoscopic screening is an effective way to decide appropriate gastric cancer (GC) treatment at an early stage, reducing the GC-associated mortality rate.
We propose a practical AI system that enables five subclassifications of GC pathology, which can be directly matched to general GC treatment guidance.
arXiv Detail & Related papers (2022-02-17T08:33:52Z)
- EMT-NET: Efficient multitask network for computer-aided diagnosis of breast cancer [58.720142291102135]
We propose an efficient, lightweight learning architecture to classify and segment breast tumors simultaneously.
We incorporate a segmentation task into a tumor classification network, which makes the backbone network learn representations focused on tumor regions.
The accuracy, sensitivity, and specificity of tumor classification are 88.6%, 94.1%, and 85.3%, respectively.
arXiv Detail & Related papers (2022-01-13T05:24:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.