Automatic Endoscopic Ultrasound Station Recognition with Limited Data
- URL: http://arxiv.org/abs/2309.11820v3
- Date: Thu, 28 Dec 2023 08:43:37 GMT
- Title: Automatic Endoscopic Ultrasound Station Recognition with Limited Data
- Authors: Abhijit Ramesh, Anantha Nandanan, Nikhil Boggavarapu, Priya Nair MD,
Gilad Gressel
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Pancreatic cancer is a lethal form of cancer that significantly contributes
to cancer-related deaths worldwide. Early detection is essential to improve
patient prognosis and survival rates. Despite advances in medical imaging
techniques, pancreatic cancer remains a challenging disease to detect.
Endoscopic ultrasound (EUS) is the most effective diagnostic tool for detecting
pancreatic cancer. However, it requires expert interpretation of complex
ultrasound images to complete a reliable patient scan. To obtain complete
imaging of the pancreas, practitioners must learn to guide the endoscope into
multiple "EUS stations" (anatomical locations), which provide different views
of the pancreas. This is a difficult skill to learn, involving over 225
proctored procedures with the support of an experienced doctor. We build an
AI-assisted tool that utilizes deep learning techniques to identify these
stations of the stomach in real time during EUS procedures. This
computer-assisted diagnosis (CAD) tool will help train doctors more efficiently.
Historically, the challenge faced in developing such a tool has been the amount
of retrospective labeling required by trained clinicians. To solve this, we
developed an open-source user-friendly labeling web app that streamlines the
process of annotating stations during the EUS procedure with minimal effort
from the clinicians. Our research shows that employing only 43 procedures with
no hyperparameter fine-tuning obtained a balanced accuracy of 89%, comparable
to the current state of the art. In addition, we employ Grad-CAM, a
visualization technology that provides clinicians with interpretable and
explainable visualizations.
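The 89% figure above is a balanced accuracy, which averages per-class recall so that rarely imaged EUS stations count as much as common ones, rather than letting the majority class dominate. A minimal sketch of the metric (the station labels and frame counts below are hypothetical, not from the paper's dataset):

```python
from collections import defaultdict

def balanced_accuracy(y_true, y_pred):
    """Mean of per-class recall: each class contributes equally,
    regardless of how many samples it has in the test set."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for t, p in zip(y_true, y_pred):
        total[t] += 1
        if t == p:
            correct[t] += 1
    recalls = [correct[c] / total[c] for c in total]
    return sum(recalls) / len(recalls)

# Toy imbalanced split: station "A" has 8 frames (all classified
# correctly), station "B" has 2 frames (one classified correctly).
y_true = ["A"] * 8 + ["B"] * 2
y_pred = ["A"] * 8 + ["B", "A"]
print(balanced_accuracy(y_true, y_pred))  # 0.75, i.e. (1.0 + 0.5) / 2
```

Plain accuracy on the same toy data would be 90% (9 of 10 frames correct), which illustrates why balanced accuracy is the more honest metric for skewed station distributions.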
Related papers
- Large-scale cervical precancerous screening via AI-assisted cytology whole slide image analysis [11.148919818020495]
Cervical cancer continues to be the leading gynecological malignancy, posing a persistent threat to women's health on a global scale.
Early screening via Whole Slide Image (WSI) diagnosis is critical to prevent cancer progression and improve survival rates.
However, a pathologist's single review is prone to false negatives due to the immense number of cells that must be reviewed within a WSI.
arXiv Detail & Related papers (2024-07-28T15:29:07Z) - The Limits of Perception: Analyzing Inconsistencies in Saliency Maps in XAI [0.0]
Explainable artificial intelligence (XAI) plays an indispensable role in demystifying the decision-making processes of AI.
Because these models operate as "black boxes," with their reasoning obscured and inaccessible, there is an increased risk of misdiagnosis.
This shift towards transparency is not just beneficial -- it is a critical step towards responsible AI integration in healthcare.
arXiv Detail & Related papers (2024-03-23T02:15:23Z) - CathFlow: Self-Supervised Segmentation of Catheters in Interventional Ultrasound Using Optical Flow and Transformers [66.15847237150909]
We introduce a self-supervised deep learning architecture to segment catheters in longitudinal ultrasound images.
The network architecture builds upon AiAReSeg, a segmentation transformer built with the Attention in Attention mechanism.
We validated our model on a test dataset, consisting of unseen synthetic data and images collected from silicon aorta phantoms.
arXiv Detail & Related papers (2024-03-21T15:13:36Z) - Empowering Medical Imaging with Artificial Intelligence: A Review of
Machine Learning Approaches for the Detection, and Segmentation of COVID-19
Using Radiographic and Tomographic Images [2.232567376976564]
Since 2019, the global dissemination of the Coronavirus and its novel strains has resulted in a surge of new infections.
The use of X-ray and computed tomography (CT) imaging techniques is critical in diagnosing and managing COVID-19.
This paper focuses on the methodological approach of using machine learning (ML) to enhance medical imaging for COVID-19 diagnosis.
arXiv Detail & Related papers (2024-01-13T09:17:39Z) - Can GPT-4V(ision) Serve Medical Applications? Case Studies on GPT-4V for
Multimodal Medical Diagnosis [59.35504779947686]
GPT-4V is OpenAI's newest large multimodal model; this work evaluates its potential for multimodal medical diagnosis.
Our evaluation encompasses 17 human body systems.
GPT-4V demonstrates proficiency in distinguishing between medical image modalities and anatomy.
It faces significant challenges in disease diagnosis and generating comprehensive reports.
arXiv Detail & Related papers (2023-10-15T18:32:27Z) - COVID-Net USPro: An Open-Source Explainable Few-Shot Deep Prototypical
Network to Monitor and Detect COVID-19 Infection from Point-of-Care
Ultrasound Images [66.63200823918429]
COVID-Net USPro monitors and detects COVID-19 positive cases with high precision and recall from minimal ultrasound images.
The network achieves 99.65% overall accuracy, 99.7% recall and 99.67% precision for COVID-19 positive cases when trained with only 5 shots.
arXiv Detail & Related papers (2023-01-04T16:05:51Z) - Multi-Scale Hybrid Vision Transformer for Learning Gastric Histology:
AI-Based Decision Support System for Gastric Cancer Treatment [50.89811515036067]
Gastric endoscopic screening is an effective way to decide on appropriate gastric cancer (GC) treatment at an early stage, reducing the GC-associated mortality rate.
We propose a practical AI system that enables five subclassifications of GC pathology, which can be directly matched to general GC treatment guidance.
arXiv Detail & Related papers (2022-02-17T08:33:52Z) - In-Line Image Transformations for Imbalanced, Multiclass Computer Vision
Classification of Lung Chest X-Rays [91.3755431537592]
This study leverages the existing literature to apply image transformations that compensate for the scarcity of COVID-19 LCXR data.
Deep learning techniques such as convolutional neural networks (CNNs) are able to select features that distinguish between healthy and disease states.
This study utilizes a simple CNN architecture for high-performance multiclass LCXR classification at 94 percent accuracy.
arXiv Detail & Related papers (2021-04-06T02:01:43Z) - Review of Artificial Intelligence Techniques in Imaging Data
Acquisition, Segmentation and Diagnosis for COVID-19 [71.41929762209328]
The pandemic of coronavirus disease 2019 (COVID-19) is spreading all over the world.
Medical imaging such as X-ray and computed tomography (CT) plays an essential role in the global fight against COVID-19.
The recently emerging artificial intelligence (AI) technologies further strengthen the power of the imaging tools and help medical specialists.
arXiv Detail & Related papers (2020-04-06T15:21:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.