Enrichment of the NLST and NSCLC-Radiomics computed tomography
collections with AI-derived annotations
- URL: http://arxiv.org/abs/2306.00150v1
- Date: Wed, 31 May 2023 19:46:18 GMT
- Title: Enrichment of the NLST and NSCLC-Radiomics computed tomography
collections with AI-derived annotations
- Authors: Deepa Krishnaswamy, Dennis Bontempi, Vamsi Thiriveedhi, Davide Punzo,
David Clunie, Christopher P Bridge, Hugo JWL Aerts, Ron Kikinis, Andrey
Fedorov
- Abstract summary: We introduce AI-generated annotations for two collections of computed tomography images of the chest: NSCLC-Radiomics and the National Lung Screening Trial (NLST).
The resulting annotations are publicly available within NCI Imaging Data Commons (IDC), where the DICOM format is used to harmonize the data and achieve FAIR principles.
This study reinforces the need for large, publicly curated datasets and demonstrates how AI can be used to aid in cancer imaging.
- Score: 0.16863755729554886
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Public imaging datasets are critical for the development and evaluation of
automated tools in cancer imaging. Unfortunately, many do not include
annotations or image-derived features, complicating their downstream analysis.
Artificial intelligence-based annotation tools have been shown to achieve
acceptable performance and thus can be used to automatically annotate large
datasets. As part of the effort to enrich public data available within NCI
Imaging Data Commons (IDC), here we introduce AI-generated annotations for two
collections of computed tomography images of the chest: NSCLC-Radiomics and
the National Lung Screening Trial. Using publicly available AI algorithms we
derived volumetric annotations of thoracic organs at risk, their corresponding
radiomics features, and slice-level annotations of anatomical landmarks and
regions. The resulting annotations are publicly available within IDC, where the
DICOM format is used to harmonize the data and achieve FAIR principles. The
annotations are accompanied by cloud-enabled notebooks demonstrating their use.
This study reinforces the need for large, publicly accessible curated datasets
and demonstrates how AI can be used to aid in cancer imaging.
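The annotations above include volumetric organ segmentations together with image-derived features. As a toy illustration of the simplest such feature (the paper's actual radiomics features are computed with dedicated tooling, and the function below is purely illustrative), the physical volume covered by a binary segmentation mask follows directly from the voxel spacing:

```python
import numpy as np

def mask_volume_ml(mask, spacing_mm):
    """Volume of a binary segmentation mask, in millilitres.

    mask       -- boolean 3-D array, one entry per CT voxel
    spacing_mm -- (z, y, x) voxel spacing in millimetres
    """
    voxel_volume_mm3 = float(np.prod(spacing_mm))
    return mask.sum() * voxel_volume_mm3 / 1000.0  # 1 mL = 1000 mm^3

# A 10x10x10 block of 1 mm isotropic voxels occupies exactly 1 mL.
mask = np.zeros((64, 64, 64), dtype=bool)
mask[10:20, 10:20, 10:20] = True
print(mask_volume_ml(mask, (1.0, 1.0, 1.0)))  # 1.0
```

In practice the mask and spacing would come from the DICOM SEG objects and CT series hosted in IDC; the cloud-enabled notebooks accompanying the collections demonstrate that workflow end to end.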
Related papers
- AI generated annotations for Breast, Brain, Liver, Lungs and Prostate cancer collections in National Cancer Institute Imaging Data Commons [0.09462026329066188]
The AI in Medical Imaging project aims to enhance the National Cancer Institute's (NCI) Imaging Data Commons (IDC).
We created high-quality, AI-annotated imaging datasets for 11 IDC collections.
arXiv Detail & Related papers (2024-09-30T14:43:09Z)
- AbdomenAtlas: A Large-Scale, Detailed-Annotated, & Multi-Center Dataset for Efficient Transfer Learning and Open Algorithmic Benchmarking [16.524596737411006]
We introduce the largest abdominal CT dataset (termed AbdomenAtlas) of 20,460 three-dimensional CT volumes from 112 hospitals across diverse populations, geographies, and facilities.
AbdomenAtlas provides 673K high-quality masks of anatomical structures in the abdominal region, annotated by a team of 10 radiologists with the help of AI algorithms.
arXiv Detail & Related papers (2024-07-23T17:59:44Z)
- Radiology Report Generation Using Transformers Conditioned with Non-imaging Data [55.17268696112258]
This paper proposes a novel multi-modal transformer network that integrates chest x-ray (CXR) images and associated patient demographic information.
The proposed network uses a convolutional neural network to extract visual features from CXRs and a transformer-based encoder-decoder network that combines the visual features with semantic text embeddings of patient demographic information.
arXiv Detail & Related papers (2023-11-18T14:52:26Z)
- Finding-Aware Anatomical Tokens for Chest X-Ray Automated Reporting [13.151444796296868]
We introduce a novel adaptation of Faster R-CNN in which finding detection is performed for the candidate bounding boxes extracted during anatomical structure localisation.
We use the resulting bounding box feature representations as our set of finding-aware anatomical tokens.
We show that finding-aware anatomical tokens give state-of-the-art performance when integrated into an automated reporting pipeline.
arXiv Detail & Related papers (2023-08-30T11:35:21Z)
- Building RadiologyNET: Unsupervised annotation of a large-scale multimodal medical database [0.4915744683251151]
The usage of machine learning in medical diagnosis and treatment has witnessed significant growth in recent years.
However, the availability of large annotated image datasets remains a major obstacle since the process of annotation is time-consuming and costly.
This paper explores how to automatically annotate a database of medical radiology images with regard to their semantic similarity.
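The core idea of annotating by semantic similarity can be sketched very simply (this is purely illustrative and does not reproduce RadiologyNET's actual features or pipeline): given feature vectors for a small labelled set, each unlabelled image inherits the label of its nearest labelled neighbour under cosine similarity.

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between each row of a and each row of b."""
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a @ b.T

def propagate_labels(labelled_feats, labels, unlabelled_feats):
    """1-nearest-neighbour label propagation in cosine space."""
    sims = cosine_sim(unlabelled_feats, labelled_feats)
    return [labels[i] for i in sims.argmax(axis=1)]

# Toy 2-D "features": two labelled exemplars and two query images.
labelled = np.array([[1.0, 0.0], [0.0, 1.0]])
labels = ["chest", "head"]
queries = np.array([[0.9, 0.1], [0.2, 0.8]])
print(propagate_labels(labelled, labels, queries))  # ['chest', 'head']
```

Real systems replace the toy 2-D vectors with learned image embeddings, but the propagation step is the same.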
arXiv Detail & Related papers (2023-07-27T13:00:33Z)
- Vision-Language Modelling For Radiological Imaging and Reports In The Low Data Regime [70.04389979779195]
This paper explores training medical vision-language models (VLMs) where the visual and language inputs are embedded into a common space.
We explore several candidate methods to improve low-data performance, including adapting generic pre-trained models to novel image and text domains.
Using text-to-image retrieval as a benchmark, we evaluate the performance of these methods with variable sized training datasets of paired chest X-rays and radiological reports.
arXiv Detail & Related papers (2023-03-30T18:20:00Z)
- Preservation of High Frequency Content for Deep Learning-Based Medical Image Classification [74.84221280249876]
An efficient analysis of large amounts of chest radiographs can aid physicians and radiologists.
We propose a novel Discrete Wavelet Transform (DWT)-based method for the efficient identification and encoding of visual information.
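As a rough sketch of the general technique (not this paper's specific method), a single-level 2-D DWT splits an image into an approximation band plus three detail bands, each at half resolution per axis; libraries such as PyWavelets provide this, but a self-contained Haar version fits in a few lines:

```python
import numpy as np

def haar_dwt2(img):
    """Single-level 2-D orthonormal Haar DWT of an array with even sides.

    Returns the approximation band cA and detail bands (cH, cV, cD),
    each at half the input resolution along every axis.
    """
    # 1-D Haar step along an axis: pairwise (sum, difference) / sqrt(2).
    def step(x, axis):
        a = np.take(x, range(0, x.shape[axis], 2), axis=axis)
        b = np.take(x, range(1, x.shape[axis], 2), axis=axis)
        return (a + b) / np.sqrt(2), (a - b) / np.sqrt(2)

    lo, hi = step(img, axis=0)   # filter rows
    cA, cV = step(lo, axis=1)    # columns of the row-lowpass
    cH, cD = step(hi, axis=1)    # columns of the row-highpass
    return cA, (cH, cV, cD)

# A flat image has all of its energy in the approximation band.
cA, (cH, cV, cD) = haar_dwt2(np.ones((8, 8)))
print(cA.shape)                 # (4, 4)
print(float(cA[0, 0]))          # 2.0
print(float(np.abs(cH).max()))  # 0.0 -- no detail in a constant image
```

High-frequency content of a radiograph then lives in the detail bands, which a classifier can consume alongside the approximation.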
arXiv Detail & Related papers (2022-05-08T15:29:54Z)
- Generative Residual Attention Network for Disease Detection [51.60842580044539]
We present a novel approach for disease generation in X-rays using conditional generative adversarial learning.
We generate a corresponding radiology image in a target domain while preserving the identity of the patient.
We then use the generated X-ray image in the target domain to augment our training to improve the detection performance.
arXiv Detail & Related papers (2021-10-25T14:15:57Z)
- Artificial Intelligence For Breast Cancer Detection: Trends & Directions [0.0]
This article analyzes different imaging modalities that have been exploited by researchers to automate the task of breast cancer detection.
This article then summarizes AI and computer vision based state-of-the-art methods proposed in the last decade, to detect breast cancer.
arXiv Detail & Related papers (2021-10-03T07:22:21Z)
- Towards Unsupervised Learning for Instrument Segmentation in Robotic Surgery with Cycle-Consistent Adversarial Networks [54.00217496410142]
We propose an unpaired image-to-image translation where the goal is to learn the mapping between an input endoscopic image and a corresponding annotation.
Our approach makes it possible to train image segmentation models without acquiring expensive annotations.
We test our proposed method on the EndoVis 2017 challenge dataset and show that it is competitive with supervised segmentation methods.
arXiv Detail & Related papers (2020-07-09T01:39:39Z)
- Y-Net for Chest X-Ray Preprocessing: Simultaneous Classification of Geometry and Segmentation of Annotations [70.0118756144807]
This work introduces a general pre-processing step for chest x-ray input into machine learning algorithms.
A modified Y-Net architecture based on the VGG11 encoder is used to simultaneously learn geometric orientation and segmentation of radiographs.
Results were evaluated by expert clinicians, who judged the geometry acceptable in 95.8% of cases and the annotation mask in 96.2%, compared to 27.0% and 34.9% respectively for control images.
arXiv Detail & Related papers (2020-05-08T02:16:17Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.