OOOE: Only-One-Object-Exists Assumption to Find Very Small Objects in
Chest Radiographs
- URL: http://arxiv.org/abs/2210.06806v1
- Date: Thu, 13 Oct 2022 07:37:33 GMT
- Title: OOOE: Only-One-Object-Exists Assumption to Find Very Small Objects in
Chest Radiographs
- Authors: Gunhee Nam, Taesoo Kim, Sanghyup Lee, Thijs Kooi
- Abstract summary: Many foreign objects like tubes and various anatomical structures are small in comparison to the entire chest X-ray.
We present a simple yet effective `Only-One-Object-Exists' (OOOE) assumption to improve the deep network's ability to localize small landmarks in chest radiographs.
- Score: 9.226276232505734
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The accurate localization of inserted medical tubes and parts of human
anatomy is a common problem when analyzing chest radiographs and something deep
neural networks could potentially automate. However, many foreign objects like
tubes and various anatomical structures are small in comparison to the entire
chest X-ray, which leads to severely unbalanced data and makes training deep
neural networks difficult. In this paper, we present a simple yet effective
`Only-One-Object-Exists' (OOOE) assumption to improve the deep network's
ability to localize small landmarks in chest radiographs. The OOOE enables us
to recast the localization problem as a classification problem and we can
replace commonly used continuous regression techniques with a multi-class
discrete objective. We validate our approach using a large-scale proprietary
dataset of over 100K radiographs as well as the publicly available RANZCR-CLiP
Kaggle Challenge dataset, and show that our method consistently outperforms
commonly used regression-based detection models as well as commonly used
pixel-wise classification methods. Additionally, we find that the method using
the OOOE assumption generalizes to multiple detection problems in chest X-rays
and the resulting model shows state-of-the-art performance on detecting various
tube tips inserted into the patient as well as patient anatomy.
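The abstract's core idea, recasting landmark localization as a multi-class classification over pixel locations, can be sketched as follows. This is a minimal NumPy illustration under the OOOE assumption (exactly one target location per image); the function names and toy score map are hypothetical, not the authors' implementation.

```python
import numpy as np

def oooe_loss(score_map, target_rc):
    """Cross-entropy over all pixel locations, treating each pixel as one
    class. Valid under the OOOE assumption: exactly one landmark exists.
    score_map: (H, W) raw network scores; target_rc: (row, col) of the
    single landmark pixel."""
    h, w = score_map.shape
    logits = score_map.reshape(-1)                 # flatten H*W locations into classes
    logits = logits - logits.max()                 # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum())
    target_idx = target_rc[0] * w + target_rc[1]   # class index of the landmark pixel
    return -log_probs[target_idx]

def oooe_predict(score_map):
    """Predicted landmark = argmax location of the score map."""
    idx = int(np.argmax(score_map))
    return divmod(idx, score_map.shape[1])         # (row, col)
```

Compared with regressing continuous coordinates, the discrete objective sidesteps the severe foreground/background imbalance: the softmax normalizes over all locations jointly, so the single target pixel always contributes a well-scaled gradient.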
Related papers
- Effort: Efficient Orthogonal Modeling for Generalizable AI-Generated Image Detection [66.16595174895802]
Existing AI-generated image (AIGI) detection methods often suffer from limited generalization performance.
In this paper, we identify a crucial yet previously overlooked asymmetry phenomenon in AIGI detection.
arXiv Detail & Related papers (2024-11-23T19:10:32Z)
- Adapting Visual-Language Models for Generalizable Anomaly Detection in Medical Images [68.42215385041114]
This paper introduces a novel lightweight multi-level adaptation and comparison framework to repurpose the CLIP model for medical anomaly detection.
Our approach integrates multiple residual adapters into the pre-trained visual encoder, enabling a stepwise enhancement of visual features across different levels.
Our experiments on medical anomaly detection benchmarks demonstrate that our method significantly surpasses current state-of-the-art models.
arXiv Detail & Related papers (2024-03-19T09:28:19Z)
- Preservation of High Frequency Content for Deep Learning-Based Medical Image Classification [74.84221280249876]
An efficient analysis of large amounts of chest radiographs can aid physicians and radiologists.
We propose a novel Discrete Wavelet Transform (DWT)-based method for the efficient identification and encoding of visual information.
arXiv Detail & Related papers (2022-05-08T15:29:54Z)
- An Efficient Anchor-free Universal Lesion Detection in CT-scans [19.165942326142538]
We propose a robust one-stage anchor-free lesion detection network that can perform well across varying lesion sizes.
We obtain comparable results to the state-of-the-art methods, achieving an overall sensitivity of 86.05% on the DeepLesion dataset.
arXiv Detail & Related papers (2022-03-30T06:01:04Z)
- DKMA-ULD: Domain Knowledge augmented Multi-head Attention based Robust Universal Lesion Detection [19.165942326142538]
We propose a robust universal lesion detection (ULD) network that can detect lesions across all organs of the body by training on a single dataset, DeepLesion.
We analyze CT-slices of varying intensities, generated using a novel convolution augmented multi-head self-attention module.
We evaluate the efficacy of our network on the publicly available DeepLesion dataset, which comprises approximately 32K CT scans with annotated lesions across all organs of the body.
arXiv Detail & Related papers (2022-03-14T06:54:28Z)
- Many-to-One Distribution Learning and K-Nearest Neighbor Smoothing for Thoracic Disease Identification [83.6017225363714]
Deep learning has become the most powerful computer-aided diagnosis technology for improving disease identification performance.
For chest X-ray imaging, annotating large-scale data requires professional domain knowledge and is time-consuming.
In this paper, we propose many-to-one distribution learning (MODL) and K-nearest neighbor smoothing (KNNS) methods to improve a single model's disease identification performance.
arXiv Detail & Related papers (2021-02-26T02:29:30Z)
- Multiscale Detection of Cancerous Tissue in High Resolution Slide Scans [0.0]
We present an algorithm for multi-scale tumor (chimeric cell) detection in high resolution slide scans.
Our approach modifies the effective receptive field at different layers in a CNN so that objects with a broad range of varying scales can be detected in a single forward pass.
arXiv Detail & Related papers (2020-10-01T18:56:46Z)
- Domain Generalization for Medical Imaging Classification with Linear-Dependency Regularization [59.5104563755095]
We introduce a simple but effective approach to improve the generalization capability of deep neural networks in the field of medical imaging classification.
Motivated by the observation that the domain variability of the medical images is to some extent compact, we propose to learn a representative feature space through variational encoding.
arXiv Detail & Related papers (2020-09-27T12:30:30Z)
- Learning Invariant Feature Representation to Improve Generalization across Chest X-ray Datasets [55.06983249986729]
We show that a deep learning model performing well when tested on the same dataset as training data starts to perform poorly when it is tested on a dataset from a different source.
By employing an adversarial training strategy, we show that a network can be forced to learn a source-invariant representation.
arXiv Detail & Related papers (2020-08-04T07:41:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.