Let it shine: Autofluorescence of Papanicolaou-stain improves AI-based cytological oral cancer detection
- URL: http://arxiv.org/abs/2407.01869v1
- Date: Tue, 2 Jul 2024 01:05:35 GMT
- Title: Let it shine: Autofluorescence of Papanicolaou-stain improves AI-based cytological oral cancer detection
- Authors: Wenyi Lian, Joakim Lindblad, Christina Runow Stark, Jan-Michaél Hirsch, Nataša Sladoje
- Abstract summary: Oral cancer is treatable if detected early, but it is often fatal in late stages.
Computer-assisted methods are essential for cost-effective and accurate cytological analysis.
This study aims to improve AI-based oral cancer detection using multimodal imaging and deep fusion.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Oral cancer is a global health challenge. It is treatable if detected early, but it is often fatal in late stages. There is a shift from the invasive and time-consuming tissue sampling and histological examination, toward non-invasive brush biopsies and cytological examination. Reliable computer-assisted methods are essential for cost-effective and accurate cytological analysis, but the lack of detailed cell-level annotations impairs model effectiveness. This study aims to improve AI-based oral cancer detection using multimodal imaging and deep fusion. We combine brightfield and fluorescence whole slide microscopy imaging to analyze Papanicolaou-stained liquid-based cytology slides of brush biopsies collected from both healthy and cancer patients. Due to limited cytological annotations, we utilize a weakly supervised deep learning approach using only patient-level labels. We evaluate various multimodal fusion strategies, including early, late, and three recent intermediate fusion methods. Our results show: (i) fluorescence imaging of Papanicolaou-stained samples provides substantial diagnostic information; (ii) multimodal fusion enhances classification and cancer detection accuracy over single-modality methods. Intermediate fusion is the leading method among the studied approaches. Specifically, the Co-Attention Fusion Network (CAFNet) model excels with an F1 score of 83.34% and accuracy of 91.79%, surpassing human performance on the task. Additional tests highlight the need for precise image registration to optimize multimodal analysis benefits. This study advances cytopathology by combining deep learning and multimodal imaging to enhance early, non-invasive detection of oral cancer, improving diagnostic accuracy and streamlining clinical workflows. The developed pipeline is also applicable in other cytological settings. Our codes and dataset are available online for further research.
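The fusion strategies compared in the abstract can be illustrated with toy feature vectors. Below is a minimal NumPy sketch in which the feature dimensions, the sigmoid heads, and the single cross-attention step are all illustrative assumptions, not the published CAFNet architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy per-patch feature vectors for the two modalities (dimensions are
# hypothetical; the paper's actual backbone features are not reproduced here).
bf = rng.normal(size=(4, 8))   # brightfield features: 4 patches x 8 dims
fl = rng.normal(size=(4, 8))   # fluorescence features

# Early fusion: concatenate features before any joint processing.
early = np.concatenate([bf, fl], axis=1)           # shape (4, 16)

# Late fusion: average the per-modality prediction scores.
def head(x):                                       # stand-in unimodal classifier head
    return 1.0 / (1.0 + np.exp(-x.mean(axis=1)))  # sigmoid of mean feature

late = 0.5 * (head(bf) + head(fl))                 # shape (4,)

# Intermediate fusion: one cross-modal attention step, a simplified stand-in
# for CAFNet-style co-attention (not the published architecture).
def softmax(a):
    e = np.exp(a - a.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

attn = softmax(bf @ fl.T / np.sqrt(bf.shape[1]))   # brightfield attends to fluorescence
fused = np.concatenate([bf, attn @ fl], axis=1)    # shape (4, 16) joint representation
```

Early fusion mixes modalities before the model sees them, late fusion only combines final scores, and intermediate fusion lets each modality condition the other's representation, which is where the paper reports the largest gains.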
Related papers
- Boosting Medical Image-based Cancer Detection via Text-guided Supervision from Reports [68.39938936308023]
We propose a novel text-guided learning method to achieve highly accurate cancer detection results.
Our approach leverages clinical knowledge from a large-scale pre-trained VLM to enhance generalization ability.
arXiv Detail & Related papers (2024-05-23T07:03:38Z)
- MMFusion: Multi-modality Diffusion Model for Lymph Node Metastasis Diagnosis in Esophageal Cancer [13.74067035373274]
We introduce a multi-modal heterogeneous graph-based conditional feature-guided diffusion model for lymph node metastasis diagnosis based on CT images.
We propose a masked relational representation learning strategy, aiming to uncover the latent prognostic correlations and priorities of primary tumor and lymph node image representations.
arXiv Detail & Related papers (2024-05-15T17:52:00Z)
- Validating polyp and instrument segmentation methods in colonoscopy through Medico 2020 and MedAI 2021 Challenges [58.32937972322058]
This paper reviews the "Medico automatic polyp segmentation (Medico 2020)" and "MedAI: Transparency in Medical Image (MedAI 2021)" competitions.
We present a comprehensive summary, analyze each contribution, highlight the strengths of the best-performing methods, and discuss the potential for clinical translation of such methods.
arXiv Detail & Related papers (2023-07-30T16:08:45Z)
- Improved Multimodal Fusion for Small Datasets with Auxiliary Supervision [3.8750633583374143]
We propose three simple methods for improved multimodal fusion with small datasets.
The proposed methods are straightforward to implement and can be applied to any classification task with paired image and non-image data.
arXiv Detail & Related papers (2023-04-01T20:07:10Z)
- Artificial-intelligence-based molecular classification of diffuse gliomas using rapid, label-free optical imaging [59.79875531898648]
DeepGlioma is an artificial-intelligence-based diagnostic screening system.
DeepGlioma can predict the molecular alterations used by the World Health Organization to define the adult-type diffuse glioma taxonomy.
arXiv Detail & Related papers (2023-03-23T18:50:18Z)
- RadioPathomics: Multimodal Learning in Non-Small Cell Lung Cancer for Adaptive Radiotherapy [1.8161758803237067]
We develop a multimodal late fusion approach to predict radiation therapy outcomes for non-small-cell lung cancer patients.
Experiments show that the proposed multimodal paradigm, with an AUC of 90.9%, outperforms each unimodal approach.
arXiv Detail & Related papers (2022-04-26T16:32:52Z)
- Lymphocyte Classification in Hyperspectral Images of Ovarian Cancer Tissue Biopsy Samples [94.37521840642141]
We present a machine learning pipeline to segment white blood cell pixels in hyperspectral images of biopsy cores.
These cells are clinically important for diagnosis, but some prior work has struggled to incorporate them due to difficulty obtaining precise pixel labels.
arXiv Detail & Related papers (2022-03-23T00:58:27Z)
- Oral cancer detection and interpretation: Deep multiple instance learning versus conventional deep single instance learning [2.2612425542955292]
The current medical standard for establishing an oral cancer (OC) diagnosis is histological examination of a tissue sample from the oral cavity.
Introducing this approach into clinical routine is associated with challenges such as a lack of experts and labour-intensive work.
We are interested in AI-based methods that can reliably detect cancer given only per-patient labels.
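The per-patient-label setting described above is typically handled as multiple-instance learning (MIL). A minimal sketch, assuming a max-pooling aggregator (one common MIL choice; attention-based aggregators are another) and hypothetical per-cell scores:

```python
import numpy as np

def patient_score(instance_scores):
    """Max-pooling MIL aggregator: a patient (bag) is scored by its most
    suspicious cell (instance), so only a patient-level label is needed."""
    return float(np.max(instance_scores))

# Hypothetical per-cell malignancy scores from some instance-level model
healthy_cells = np.array([0.05, 0.12, 0.08, 0.03])
cancer_cells  = np.array([0.06, 0.91, 0.10, 0.04])

healthy_pred = patient_score(healthy_cells) >= 0.5   # below threshold
cancer_pred  = patient_score(cancer_cells) >= 0.5    # one suspicious cell suffices
```

The max-pooling choice encodes the standard MIL assumption for cytology: a sample is positive if at least one cell appears malignant, even when no individual cell is annotated.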
arXiv Detail & Related papers (2022-02-03T15:04:26Z)
- EMT-NET: Efficient multitask network for computer-aided diagnosis of breast cancer [58.720142291102135]
We propose an efficient and lightweight learning architecture to classify and segment breast tumors simultaneously.
We incorporate a segmentation task into a tumor classification network, which makes the backbone network learn representations focused on tumor regions.
The accuracy, sensitivity, and specificity of tumor classification are 88.6%, 94.1%, and 85.3%, respectively.
arXiv Detail & Related papers (2022-01-13T05:24:40Z)
- Unsupervised deep learning techniques for powdery mildew recognition based on multispectral imaging [63.62764375279861]
This paper presents a deep learning approach to automatically recognize powdery mildew on cucumber leaves.
We focus on unsupervised deep learning techniques applied to multispectral imaging data.
We propose the use of autoencoder architectures to investigate two strategies for disease detection.
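The reconstruction-error idea behind autoencoder-based detection can be sketched with a linear stand-in. Here a PCA projection plays the role of a trained encoder/decoder, and the data, dimensions, and comparison are all hypothetical, not the paper's actual setup:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical multispectral samples: healthy data varies along one direction.
direction = rng.normal(size=8)
direction /= np.linalg.norm(direction)
healthy = rng.normal(size=(50, 1)) * direction   # (50, 8), rank-1 "healthy" data
diseased = rng.normal(size=8)                    # off-subspace sample

# Linear "autoencoder": a 1-D PCA subspace learned from healthy samples
# stands in for a trained encoder/decoder.
u, _, _ = np.linalg.svd(healthy.T @ healthy)
basis = u[:, :1]                                 # learned latent subspace

def reconstruction_error(x):
    recon = basis @ (basis.T @ x)                # encode, then decode
    return float(np.linalg.norm(x - recon))

# Detection rule: flag samples the model fails to reconstruct.
assert reconstruction_error(healthy[0]) < reconstruction_error(diseased)
```

Samples drawn from the healthy distribution reconstruct almost perfectly, while off-distribution (diseased) samples leave a large residual, which is the anomaly signal an unsupervised detector thresholds on.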
arXiv Detail & Related papers (2021-12-20T13:29:13Z)
- A Novel and Efficient Tumor Detection Framework for Pancreatic Cancer via CT Images [21.627818410241552]
A novel and efficient pancreatic tumor detection framework is proposed in this paper.
The contribution of the proposed method mainly consists of three components: Augmented Feature Pyramid networks, Self-adaptive Feature Fusion and a Dependencies Computation Module.
Experimental results show competitive detection performance, with an AUC of 0.9455, outperforming other state-of-the-art methods to the best of our knowledge.
arXiv Detail & Related papers (2020-02-11T15:48:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.