Learning the Imaging Landmarks: Unsupervised Key point Detection in Lung
Ultrasound Videos
- URL: http://arxiv.org/abs/2106.06987v1
- Date: Sun, 13 Jun 2021 13:27:12 GMT
- Title: Learning the Imaging Landmarks: Unsupervised Key point Detection in Lung
Ultrasound Videos
- Authors: Arpan Tripathi, Mahesh Raveendranatha Panicker, Abhilash R
Hareendranathan, Yale Tung Chen, Jacob L Jaremko, Kiran Vishnu Narayan and
Kesavadas C
- Abstract summary: Lung ultrasound (LUS) is an increasingly popular diagnostic imaging modality for continuous and periodic monitoring of lung infection.
Key landmarks assessed by clinicians for triaging using LUS are the pleura and the A- and B-lines.
This work is a first-of-its-kind attempt at unsupervised detection of the key LUS landmarks in LUS videos of COVID-19 subjects during various stages of infection.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Lung ultrasound (LUS) is an increasingly popular diagnostic imaging modality
for continuous and periodic monitoring of lung infection, given its advantages
of non-invasiveness, non-ionizing nature, portability and easy disinfection.
The major landmarks assessed by clinicians for triaging using LUS are the
pleura and the A- and B-lines. There have been many efforts toward the
automatic detection of these landmarks. However, restricting attention to a
few pre-defined landmarks may not reveal the actual imaging biomarkers,
particularly in the case of new pathologies such as COVID-19. Rather, the
identification of key landmarks should be driven by the data, given the
availability of a plethora of neural network algorithms. This work is a
first-of-its-kind attempt at unsupervised detection of the key LUS landmarks
in LUS videos of COVID-19 subjects during various stages of infection. We
adapted the relatively new Transporter neural network approach to
automatically mark and track the pleura and the A- and B-lines based on
their periodic motion and relatively stable appearance in the videos.
Initial results
on unsupervised pleura detection show an accuracy of 91.8% employing 1081 LUS
video frames.
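The Transporter architecture the abstract refers to (Kulkarni et al., 2019) learns keypoints without labels by "transporting" features between two frames of a video. A minimal sketch of the core transport step is shown below, using plain Python lists in place of real CNN feature maps; the function names and toy dimensions are illustrative assumptions, not the authors' code:

```python
import math


def gaussian_heatmap(cy, cx, h, w, sigma=1.0):
    """Render a keypoint at (cy, cx) as an h-by-w Gaussian heatmap in [0, 1]."""
    return [[math.exp(-((y - cy) ** 2 + (x - cx) ** 2) / (2 * sigma ** 2))
             for x in range(w)] for y in range(h)]


def transport(phi_s, phi_t, heat_s, heat_t):
    """Transporter feature blend between source (s) and target (t) frames:
    suppress source features at the keypoints of either frame, then paste
    in target features at the target keypoints:

        phi_hat = (1 - H_s) * (1 - H_t) * phi_s + H_t * phi_t
    """
    h, w = len(phi_s), len(phi_s[0])
    return [[(1 - heat_s[y][x]) * (1 - heat_t[y][x]) * phi_s[y][x]
             + heat_t[y][x] * phi_t[y][x]
             for x in range(w)] for y in range(h)]
```

A decoder then reconstructs the target frame from the blended features; because only the features under the learned keypoint heatmaps get copied across frames, minimizing the reconstruction error pushes the keypoints onto the parts of the frame that move yet keep a stable appearance, which in LUS videos corresponds to the pleura and the A- and B-lines.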
Related papers
- COVID-Net USPro: An Open-Source Explainable Few-Shot Deep Prototypical
Network to Monitor and Detect COVID-19 Infection from Point-of-Care
Ultrasound Images [66.63200823918429]
COVID-Net USPro monitors and detects COVID-19 positive cases with high precision and recall from minimal ultrasound images.
The network achieves 99.65% overall accuracy, 99.7% recall and 99.67% precision for COVID-19 positive cases when trained with only 5 shots.
arXiv Detail & Related papers (2023-01-04T16:05:51Z)
- MTCD: Cataract Detection via Near Infrared Eye Images [69.62768493464053]
Cataract is a common eye disease and one of the leading causes of blindness and vision impairment.
We present a novel algorithm for cataract detection using near-infrared eye images.
Deep learning-based eye segmentation and multitask network classification networks are presented.
arXiv Detail & Related papers (2021-10-06T08:10:28Z)
- Assessing glaucoma in retinal fundus photographs using Deep Feature
Consistent Variational Autoencoders [63.391402501241195]
Glaucoma is challenging to detect since it remains asymptomatic until the symptoms are severe.
Early identification of glaucoma is generally made based on functional, structural, and clinical assessments.
Deep learning methods have partially solved this dilemma by bypassing the marker identification stage and analyzing high-level information directly to classify the data.
arXiv Detail & Related papers (2021-10-04T16:06:49Z)
- An Approach Towards Physics Informed Lung Ultrasound Image Scoring
Neural Network for Diagnostic Assistance in COVID-19 [0.0]
A novel approach is presented to extract acoustic propagation-based features to highlight the region below the pleura in lung ultrasound (LUS).
A neural network, referred to as LUSNet, is trained to classify the LUS images into five classes of varying severity of lung infection to track the progression of COVID-19.
A detailed analysis of the proposed approach on LUS images over the infection to full recovery period of ten confirmed COVID-19 subjects shows an average five-fold cross-validation accuracy, sensitivity, and specificity of 97%, 93%, and 98% respectively over 5000 frames of COVID-19 videos.
arXiv Detail & Related papers (2021-06-13T13:01:53Z)
- An interpretable object detection based model for the diagnosis of
neonatal lung diseases using Ultrasound images [0.0]
Lung Ultrasound (LUS) has been increasingly used to diagnose and monitor different lung diseases in neonates.
Mixed artifact patterns found in different respiratory diseases may limit LUS readability by the operator.
We present a unique approach for extracting seven meaningful LUS features that can be easily associated with a specific lung condition.
arXiv Detail & Related papers (2021-05-21T01:12:35Z)
- Wide & Deep neural network model for patch aggregation in CNN-based
prostate cancer detection systems [51.19354417900591]
Prostate cancer (PCa) is one of the leading causes of death among men, with almost 1.41 million new cases and around 375,000 deaths in 2020.
To perform an automatic diagnosis, prostate tissue samples are first digitized into gigapixel-resolution whole-slide images.
Small subimages called patches are extracted and predicted, obtaining a patch-level classification.
arXiv Detail & Related papers (2021-05-20T18:13:58Z)
- COVID-19 Detection from Chest X-ray Images using Imprinted Weights
Approach [67.05664774727208]
Chest radiography is an alternative screening method for COVID-19.
Computer-aided diagnosis (CAD) has proven to be a viable solution at low cost and with fast speed.
To address this challenge, we propose the use of a low-shot learning approach named imprinted weights.
arXiv Detail & Related papers (2021-05-04T19:01:40Z)
- Automatic Detection of B-lines in Lung Ultrasound Videos From Severe
Dengue Patients [0.6775616141339018]
We propose a novel methodology to automatically detect and localize B-lines in lung ultrasound (LUS) videos.
We combine a convolutional neural network (CNN) with a long short-term memory (LSTM) network and a temporal attention mechanism.
Our best model can determine whether one-second clips contain B-lines or not with an F1 score of 0.81, and extracts a representative frame with B-lines with an accuracy of 87.5%.
arXiv Detail & Related papers (2021-02-01T18:49:23Z)
- Computer-aided Tumor Diagnosis in Automated Breast Ultrasound using 3D
Detection Network [18.31577982955252]
The efficacy of our network is verified from a collected dataset of 418 patients with 145 benign tumors and 273 malignant tumors.
Experiments show our network attains a sensitivity of 97.66% with 1.23 false positives (FPs), and has an area under the curve (AUC) value of 0.8720.
arXiv Detail & Related papers (2020-07-31T15:25:07Z)
- Spectral-Spatial Recurrent-Convolutional Networks for In-Vivo
Hyperspectral Tumor Type Classification [49.32653090178743]
We demonstrate the feasibility of in-vivo tumor type classification using hyperspectral imaging and deep learning.
Our best model achieves an AUC of 76.3%, significantly outperforming previous conventional and deep learning methods.
arXiv Detail & Related papers (2020-07-02T12:00:53Z)
- Pseudo-Labeling for Small Lesion Detection on Diabetic Retinopathy
Images [12.49381528673824]
Diabetic retinopathy (DR) is a primary cause of blindness in working-age people worldwide.
About 3 to 4 million people with diabetes become blind because of DR every year.
Diagnosis of DR through color fundus images is a common approach to mitigating this problem.
arXiv Detail & Related papers (2020-03-26T17:13:48Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.