Learning Efficient, Explainable and Discriminative Representations for
Pulmonary Nodules Classification
- URL: http://arxiv.org/abs/2101.07429v1
- Date: Tue, 19 Jan 2021 02:53:44 GMT
- Title: Learning Efficient, Explainable and Discriminative Representations for
Pulmonary Nodules Classification
- Authors: Hanliang Jiang, Fuhao Shen, Fei Gao, Weidong Han
- Abstract summary: In this work, we aim to build an efficient and (partially) explainable classification model.
We use neural architecture search (NAS) to automatically search for 3D network architectures with an excellent accuracy/speed trade-off.
In the inference stage, we employ an ensemble of diverse neural networks to improve the prediction accuracy and robustness.
- Score: 2.4565395352560895
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Automatic pulmonary nodules classification is significant for early diagnosis
of lung cancers. Recently, deep learning techniques have enabled remarkable
progress in this field. However, these deep models are typically of high
computational complexity and work in a black-box manner. To combat these
challenges, in this work, we aim to build an efficient and (partially)
explainable classification model. Specifically, we use \emph{neural architecture
search} (NAS) to automatically search 3D network architectures with excellent
accuracy/speed trade-off. In addition, we use the convolutional block attention
module (CBAM) in the networks, which helps us understand the reasoning process.
During training, we use A-Softmax loss to learn angularly discriminative
representations. In the inference stage, we employ an ensemble of diverse
neural networks to improve the prediction accuracy and robustness. We conduct
extensive experiments on the LIDC-IDRI database. Compared with the previous
state-of-the-art, our model achieves highly comparable performance while using
fewer than 1/40 of the parameters. Moreover, an empirical study shows that the
reasoning process of the learned networks is consistent with physicians' diagnoses. Related code
and results have been released at: https://github.com/fei-hdu/NAS-Lung.
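To make the attention mechanism mentioned in the abstract concrete, below is a minimal PyTorch sketch of a 3D CBAM block (channel attention followed by spatial attention), whose attention maps are the part that can be visualized to inspect where the network looks. This is an illustrative re-implementation under the standard CBAM formulation, not the authors' released code (see the linked NAS-Lung repository for that); the class names, reduction ratio, and kernel size are assumptions.

```python
# Minimal 3D CBAM sketch (illustrative; the authors' code is in the NAS-Lung repo).
import torch
import torch.nn as nn


class ChannelAttention3D(nn.Module):
    """Channel attention: squeeze D/H/W with avg+max pooling, then a shared MLP."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c = x.shape[:2]
        avg = self.mlp(x.mean(dim=(2, 3, 4)))             # (B, C)
        mx = self.mlp(x.amax(dim=(2, 3, 4)))              # (B, C)
        scale = torch.sigmoid(avg + mx).view(b, c, 1, 1, 1)
        return x * scale


class SpatialAttention3D(nn.Module):
    """Spatial attention: pool over channels, then a single 3D convolution."""

    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv3d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        avg = x.mean(dim=1, keepdim=True)                 # (B, 1, D, H, W)
        mx = x.amax(dim=1, keepdim=True)                  # (B, 1, D, H, W)
        scale = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * scale


class CBAM3D(nn.Module):
    """Channel attention followed by spatial attention, as in CBAM."""

    def __init__(self, channels: int, reduction: int = 16, kernel_size: int = 7):
        super().__init__()
        self.channel = ChannelAttention3D(channels, reduction)
        self.spatial = SpatialAttention3D(kernel_size)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.spatial(self.channel(x))


if __name__ == "__main__":
    # A nodule patch of shape (batch, channels, depth, height, width).
    x = torch.randn(2, 32, 16, 32, 32)
    print(CBAM3D(32)(x).shape)  # torch.Size([2, 32, 16, 32, 32])
```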
Related papers
- Analysis of Modern Computer Vision Models for Blood Cell Classification [49.1574468325115]
This study uses state-of-the-art architectures, including MaxVit, EfficientVit, EfficientNet, EfficientNetV2, and MobileNetV3, to achieve rapid and accurate results.
Our approach not only addresses the speed and accuracy concerns of traditional techniques but also explores the applicability of innovative deep learning models in hematological analysis.
arXiv Detail & Related papers (2024-06-30T16:49:29Z) - Modular Neural Network Approaches for Surgical Image Recognition [0.0]
We introduce and evaluate different architectures of modular learning for Dorsal Capsulo-Scapholunate Septum (DCSS) instability classification.
Our experiments have shown that modular learning improves performance compared to non-modular systems.
In the second part, we present our approach for data labeling and segmentation with self-training applied on shoulder arthroscopy images.
arXiv Detail & Related papers (2023-07-17T22:28:16Z) - How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z) - Towards better Interpretable and Generalizable AD detection using
Collective Artificial Intelligence [0.0]
Deep learning methods have been proposed to automate diagnosis and prognosis of Alzheimer's disease.
These methods often suffer from a lack of interpretability and generalization.
We propose a novel deep framework designed to overcome these limitations.
arXiv Detail & Related papers (2022-06-07T13:02:53Z) - Compare Where It Matters: Using Layer-Wise Regularization To Improve
Federated Learning on Heterogeneous Data [0.0]
Federated Learning is a widely adopted method to train neural networks over distributed data.
One main limitation is the performance degradation that occurs when data is heterogeneously distributed.
We present FedCKA: a framework that outperforms previous state-of-the-art methods on various deep learning tasks.
arXiv Detail & Related papers (2021-12-01T10:46:13Z) - Medulloblastoma Tumor Classification using Deep Transfer Learning with
Multi-Scale EfficientNets [63.62764375279861]
We propose an end-to-end MB tumor classification and explore transfer learning with various input sizes and matching network dimensions.
Using a data set with 161 cases, we demonstrate that pre-trained EfficientNets with larger input resolutions lead to significant performance improvements.
arXiv Detail & Related papers (2021-09-10T13:07:11Z) - Gone Fishing: Neural Active Learning with Fisher Embeddings [55.08537975896764]
There is an increasing need for active learning algorithms that are compatible with deep neural networks.
This article introduces BAIT, a practical, tractable, and high-performing active learning algorithm for neural networks.
arXiv Detail & Related papers (2021-06-17T17:26:31Z) - Learning Interpretable Microscopic Features of Tumor by Multi-task
Adversarial CNNs To Improve Generalization [1.7371375427784381]
Existing CNN models act as black boxes, giving physicians no assurance that important diagnostic features are used by the model.
Here we show that our architecture, by learning end-to-end an uncertainty-based weighting combination of multi-task and adversarial losses, is encouraged to focus on pathology features.
Our results on breast lymph node tissue show significantly improved generalization in the detection of tumorous tissue, with a best average AUC of 0.89 (0.01) against the baseline AUC of 0.86 (0.005).
arXiv Detail & Related papers (2020-08-04T12:10:35Z) - Neural Additive Models: Interpretable Machine Learning with Neural Nets [77.66871378302774]
Deep neural networks (DNNs) are powerful black-box predictors that have achieved impressive performance on a wide variety of tasks.
We propose Neural Additive Models (NAMs) which combine some of the expressivity of DNNs with the inherent intelligibility of generalized additive models.
NAMs learn a linear combination of neural networks that each attend to a single input feature (see the sketch after this list).
arXiv Detail & Related papers (2020-04-29T01:28:32Z) - Belief Propagation Reloaded: Learning BP-Layers for Labeling Problems [83.98774574197613]
We take one of the simplest inference methods, a truncated max-product Belief propagation, and add what is necessary to make it a proper component of a deep learning model.
This BP-Layer can be used as the final or an intermediate block in convolutional neural networks (CNNs).
The model is applicable to a range of dense prediction problems, is well-trainable and provides parameter-efficient and robust solutions in stereo, optical flow and semantic segmentation.
arXiv Detail & Related papers (2020-03-13T13:11:35Z) - Neural Architecture Search For Fault Diagnosis [6.226564415963648]
Deep learning is suitable for processing big data, and has a strong feature extraction ability to realize end-to-end fault diagnosis systems.
Neural architecture search (NAS) is developing rapidly, and is becoming one of the next directions for deep learning.
In this paper, we propose a NAS method for fault diagnosis using reinforcement learning.
arXiv Detail & Related papers (2020-02-19T04:03:51Z)
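As referenced in the Neural Additive Models entry above, the core idea (a sum of per-feature subnetworks plus a learned bias) is compact enough to sketch. The PyTorch snippet below is an illustrative simplification, not the official NAM implementation; the subnetwork architecture and hidden size are assumptions.

```python
# Minimal Neural Additive Model sketch: one small MLP per input feature,
# outputs summed with a learned bias (illustrative, not the official NAM code).
import torch
import torch.nn as nn


class NeuralAdditiveModel(nn.Module):
    def __init__(self, num_features: int, hidden: int = 32):
        super().__init__()
        # One independent subnetwork per feature keeps the model interpretable:
        # each subnetwork's scalar output is that feature's contribution.
        self.feature_nets = nn.ModuleList(
            nn.Sequential(
                nn.Linear(1, hidden),
                nn.ReLU(),
                nn.Linear(hidden, 1),
            )
            for _ in range(num_features)
        )
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_features) -> per-feature contributions: (batch, num_features)
        contributions = torch.cat(
            [net(x[:, i : i + 1]) for i, net in enumerate(self.feature_nets)], dim=1
        )
        return contributions.sum(dim=1) + self.bias  # one logit per example


if __name__ == "__main__":
    model = NeuralAdditiveModel(num_features=5)
    logits = model(torch.randn(8, 5))
    print(logits.shape)  # torch.Size([8])
```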
This list is automatically generated from the titles and abstracts of the papers on this site.