Semantic Scene Graph for Ultrasound Image Explanation and Scanning Guidance
- URL: http://arxiv.org/abs/2506.19683v2
- Date: Thu, 26 Jun 2025 14:20:13 GMT
- Title: Semantic Scene Graph for Ultrasound Image Explanation and Scanning Guidance
- Authors: Xuesong Li, Dianye Huang, Yameng Zhang, Nassir Navab, Zhongliang Jiang
- Abstract summary: This study introduces the scene graph (SG) for ultrasound images to explain image content to ordinary users. The predicted SG is explored for its potential in guiding ultrasound scanning toward missing anatomies. The effectiveness of this SG-based image explanation and scanning guidance has been validated on images from the left and right neck regions.
- Score: 38.90025585894763
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Understanding medical ultrasound imaging remains a long-standing challenge due to significant visual variability caused by differences in imaging and acquisition parameters. Recent advancements in large language models (LLMs) have been used to automatically generate terminology-rich summaries oriented to clinicians with sufficient physiological knowledge. Nevertheless, the increasing demand for improved ultrasound interpretability and basic scanning guidance among non-expert users, e.g., in point-of-care settings, has not yet been addressed. In this study, we first introduce the scene graph (SG) for ultrasound images to explain image content to ordinary users and provide guidance for ultrasound scanning. The ultrasound SG is first computed using a transformer-based one-stage method, eliminating the need for explicit object detection. To generate a graspable image explanation for ordinary users, the user query is then used to further refine the abstract SG representation through LLMs. Additionally, the predicted SG is explored for its potential in guiding ultrasound scanning toward missing anatomies within the current imaging view, assisting ordinary users in achieving more standardized and complete anatomical exploration. The effectiveness of this SG-based image explanation and scanning guidance has been validated on images from the left and right neck regions, including the carotid and thyroid, across five volunteers. The results demonstrate the potential of the method to maximally democratize ultrasound by enhancing its interpretability and usability for ordinary users.
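The abstract's two core ideas, a scene graph over detected anatomies and guidance toward anatomies missing from the current view, can be illustrated with a minimal sketch. The class names, anatomy labels, and the template-comparison step below are illustrative assumptions, not the paper's actual data structures or method:

```python
# Hedged sketch: a toy semantic scene graph (nodes = anatomies, edges =
# spatial relations) and a template comparison that flags anatomies expected
# in a standard view but absent from the prediction. All names here are
# illustrative assumptions, not the paper's implementation.
from dataclasses import dataclass, field


@dataclass(frozen=True)
class Relation:
    subject: str    # e.g. "carotid artery"
    predicate: str  # e.g. "medial to"
    obj: str        # e.g. "jugular vein"


@dataclass
class SceneGraph:
    anatomies: set[str] = field(default_factory=set)
    relations: set[Relation] = field(default_factory=set)

    def missing_anatomies(self, template: "SceneGraph") -> set[str]:
        """Anatomies expected for this view but absent from the prediction;
        a stand-in for the paper's scanning-guidance step."""
        return template.anatomies - self.anatomies


# Expected content of a standard left-neck view (illustrative template).
template = SceneGraph(
    anatomies={"carotid artery", "thyroid lobe", "jugular vein"},
    relations={Relation("carotid artery", "medial to", "jugular vein")},
)

# Predicted SG for the current image: the jugular vein was not detected.
predicted = SceneGraph(
    anatomies={"carotid artery", "thyroid lobe"},
    relations={Relation("carotid artery", "left of", "thyroid lobe")},
)

print(sorted(predicted.missing_anatomies(template)))  # ['jugular vein']
```

In the paper's pipeline the predicted SG is produced by a transformer, and an LLM verbalizes it for the user; here the set difference simply stands in for the "guide toward missing anatomies" idea.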
Related papers
- Uterine Ultrasound Image Captioning Using Deep Learning Techniques [0.0]
This paper investigates the use of deep learning for medical image captioning, with a particular focus on uterine ultrasound images.
Our research aims to assist medical professionals in making timely and accurate diagnoses, ultimately contributing to improved patient care.
arXiv Detail & Related papers (2024-11-21T11:41:42Z) - S-CycleGAN: Semantic Segmentation Enhanced CT-Ultrasound Image-to-Image Translation for Robotic Ultrasonography [2.07180164747172]
We introduce an advanced deep learning model, dubbed S-CycleGAN, which generates high-quality synthetic ultrasound images from computed tomography (CT) data.
The synthetic images are utilized to enhance various aspects of our development of the robot-assisted ultrasound scanning system.
arXiv Detail & Related papers (2024-06-03T10:53:45Z) - Generative Adversarial Networks in Ultrasound Imaging: Extending Field of View Beyond Conventional Limits [1.6588671405657123]
TTE ultrasound imaging faces inherent limitations, notably the trade-off between field of view (FoV) and resolution.
This paper introduces a novel application of conditional Generative Adversarial Networks (cGANs).
Our proposed cGAN architecture, termed echoGAN, demonstrates the capability to generate realistic anatomical structures through outpainting.
arXiv Detail & Related papers (2024-05-31T16:26:30Z) - Breast Ultrasound Report Generation using LangChain [58.07183284468881]
We propose the integration of multiple image analysis tools through a LangChain using Large Language Models (LLM) into the breast reporting process.
Our method can accurately extract relevant features from ultrasound images, interpret them in a clinical context, and produce comprehensive and standardized reports.
arXiv Detail & Related papers (2023-12-05T00:28:26Z) - AiAReSeg: Catheter Detection and Segmentation in Interventional Ultrasound using Transformers [75.20925220246689]
Endovascular surgeries are performed using the gold standard of fluoroscopy, which uses ionising radiation to visualise catheters and vasculature.
This work proposes a solution using an adaptation of a state-of-the-art machine learning transformer architecture to detect and segment catheters in axial interventional Ultrasound image sequences.
arXiv Detail & Related papers (2023-09-25T19:34:12Z) - Voice-assisted Image Labelling for Endoscopic Ultrasound Classification using Neural Networks [48.732863591145964]
We propose a multi-modal convolutional neural network architecture that labels endoscopic ultrasound (EUS) images from raw verbal comments provided by a clinician during the procedure.
Our results show a prediction accuracy of 76% at image level on a dataset with 5 different labels.
arXiv Detail & Related papers (2021-10-12T21:22:24Z) - Deep Learning for Ultrasound Beamforming [120.12255978513912]
Beamforming, the process of mapping received ultrasound echoes to the spatial image domain, lies at the heart of the ultrasound image formation chain.
Modern ultrasound imaging leans heavily on innovations in powerful digital receive channel processing.
Deep learning methods can play a compelling role in the digital beamforming pipeline.
arXiv Detail & Related papers (2021-09-23T15:15:21Z) - Semantic segmentation of multispectral photoacoustic images using deep learning [53.65837038435433]
Photoacoustic imaging has the potential to revolutionise healthcare.
Clinical translation of the technology requires conversion of the high-dimensional acquired data into clinically relevant and interpretable information.
We present a deep learning-based approach to semantic segmentation of multispectral photoacoustic images.
arXiv Detail & Related papers (2021-05-20T09:33:55Z) - Ultrasound Image Classification using ACGAN with Small Training Dataset [0.0]
Training deep learning models requires large labeled datasets, which is often unavailable for ultrasound images.
We exploit an Auxiliary Classifier Generative Adversarial Network (ACGAN) that combines the benefits of large-scale data augmentation and transfer learning.
We conduct experiments on a dataset of breast ultrasound images, demonstrating the effectiveness of the proposed approach.
arXiv Detail & Related papers (2021-01-31T11:11:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.