CEC-CNN: A Consecutive Expansion-Contraction Convolutional Network for
Very Small Resolution Medical Image Classification
- URL: http://arxiv.org/abs/2209.13661v1
- Date: Tue, 27 Sep 2022 20:01:12 GMT
- Title: CEC-CNN: A Consecutive Expansion-Contraction Convolutional Network for
Very Small Resolution Medical Image Classification
- Authors: Ioannis Vezakis, Antonios Vezakis, Sofia Gourtsoyianni, Vassilis
Koutoulidis, George K. Matsopoulos and Dimitrios Koutsouris
- Abstract summary: We introduce a new CNN architecture which preserves multi-scale features from deep, intermediate, and shallow layers.
Using a dataset of very low resolution patches from Pancreatic Ductal Adenocarcinoma (PDAC) CT scans, we demonstrate that our network can outperform current state-of-the-art models.
- Score: 0.8108972030676009
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep Convolutional Neural Networks (CNNs) for image classification
successively alternate convolutions and downsampling operations, such as
pooling layers or strided convolutions, resulting in lower resolution features
the deeper the network gets. These downsampling operations save computational
resources and provide some translational invariance as well as a bigger
receptive field at the next layers. However, an inherent side-effect of this is
that high-level features, produced at the deep end of the network, are always
captured in low resolution feature maps. The inverse is also true, as shallow
layers always contain small-scale features. In biomedical image analysis,
engineers are often tasked with classifying very small image patches that
carry only a limited amount of information. By their nature, these patches may
not even contain objects, with the classification depending instead on the
detection of subtle underlying patterns with an unknown scale in the image's
texture. In these cases every bit of information is valuable; thus, it is
important to extract the maximum number of informative features possible.
Driven by these considerations, we introduce a new CNN architecture which
preserves multi-scale features from deep, intermediate, and shallow layers by
utilizing skip connections along with consecutive contractions and expansions
of the feature maps. Using a dataset of very low resolution patches from
Pancreatic Ductal Adenocarcinoma (PDAC) CT scans, we demonstrate that our
network can outperform current state-of-the-art models.
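The core idea above, preserving features at multiple scales by pairing contractions and expansions of the feature maps with skip connections, can be illustrated with a minimal NumPy sketch. The function names and the max-pool/nearest-neighbor choices here are hypothetical simplifications for illustration, not the paper's actual layers:

```python
import numpy as np

def contract(x):
    """2x2 max-pool: halves spatial resolution (contraction)."""
    h, w, c = x.shape
    return x[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2, c).max(axis=(1, 3))

def expand(x):
    """Nearest-neighbor upsampling: doubles spatial resolution (expansion)."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

def cec_block(x):
    """One hypothetical contraction-expansion stage with a skip connection:
    features are pooled, re-expanded back to the input resolution, and
    concatenated channel-wise with the unpooled input, so that both the
    shallow full-resolution features and the coarser-scale features survive."""
    skip = x                     # shallow, full-resolution features
    deep = expand(contract(x))   # features that passed through a lower resolution
    return np.concatenate([skip, deep], axis=-1)

x = np.random.rand(8, 8, 4)  # a tiny 8x8 patch with 4 feature channels
y = cec_block(x)
print(y.shape)  # (8, 8, 8): resolution preserved, channels doubled
```

Stacking several such stages would mimic the "consecutive" part of the design: each stage keeps the original-scale features alive through the skip path while mixing in information captured at a coarser scale.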
Related papers
- TransResNet: Integrating the Strengths of ViTs and CNNs for High Resolution Medical Image Segmentation via Feature Grafting [6.987177704136503]
High-resolution images are preferable in medical imaging domain as they significantly improve the diagnostic capability of the underlying method.
Most of the existing deep learning-based techniques for medical image segmentation are optimized for input images having small spatial dimensions and perform poorly on high-resolution images.
We propose a parallel-in-branch architecture called TransResNet, which incorporates Transformer and CNN in a parallel manner to extract features from multi-resolution images independently.
arXiv Detail & Related papers (2024-10-01T18:22:34Z) - Multi-scale Unified Network for Image Classification [33.560003528712414]
CNNs face notable challenges in performance and computational efficiency when dealing with real-world, multi-scale image inputs.
We propose the Multi-scale Unified Network (MUSN), which comprises multi-scale subnets, a unified network, and a scale-invariant constraint.
MUSN yields an accuracy increase of up to 44.53% and reduces FLOPs by 7.01-16.13% in multi-scale scenarios.
arXiv Detail & Related papers (2024-03-27T06:40:26Z) - LR-Net: A Block-based Convolutional Neural Network for Low-Resolution
Image Classification [0.0]
We develop a novel image classification architecture, composed of blocks that are designed to learn both low level and global features from noisy and low-resolution images.
Our design of the blocks was heavily influenced by Residual Connections and Inception modules in order to increase performance and reduce the parameter count.
We have performed in-depth tests that demonstrate the presented architecture is faster and more accurate than existing cutting-edge convolutional neural networks.
arXiv Detail & Related papers (2022-07-19T20:01:11Z) - SAR Despeckling Using Overcomplete Convolutional Networks [53.99620005035804]
Despeckling is an important problem in remote sensing, as speckle degrades SAR images.
Recent studies show that convolutional neural networks (CNNs) outperform classical despeckling methods.
This study employs an overcomplete CNN architecture to focus on learning low-level features by restricting the receptive field.
We show that the proposed network improves despeckling performance compared to recent despeckling methods on synthetic and real SAR images.
arXiv Detail & Related papers (2022-05-31T15:55:37Z) - Multi-scale Sparse Representation-Based Shadow Inpainting for Retinal
OCT Images [0.261990490798442]
Inpainting shadowed regions cast by superficial blood vessels in retinal optical coherence tomography (OCT) images is critical for accurate and robust machine analysis and clinical diagnosis.
Traditional sequence-based approaches, such as propagating neighboring information to gradually fill in missing regions, are cost-effective.
Deep learning-based methods such as encoder-decoder networks have shown promising results in natural image inpainting tasks.
We propose a novel multi-scale shadow inpainting framework for OCT images by synergically applying sparse representation and deep learning.
arXiv Detail & Related papers (2022-02-23T09:37:14Z) - Over-and-Under Complete Convolutional RNN for MRI Reconstruction [57.95363471940937]
Recent deep learning-based methods for MR image reconstruction usually leverage a generic auto-encoder architecture.
We propose an Over-and-Under Complete Convolutional Recurrent Neural Network (OUCR), which consists of an overcomplete and an undercomplete Convolutional Recurrent Neural Network (CRNN).
The proposed method achieves significant improvements over compressed sensing and popular deep learning-based methods with fewer trainable parameters.
arXiv Detail & Related papers (2021-06-16T15:56:34Z) - CoTr: Efficiently Bridging CNN and Transformer for 3D Medical Image
Segmentation [95.51455777713092]
Convolutional neural networks (CNNs) have been the de facto standard for 3D medical image segmentation.
We propose a novel framework that efficiently bridges a Convolutional Neural Network and a Transformer (CoTr) for accurate 3D medical image segmentation.
arXiv Detail & Related papers (2021-03-04T13:34:22Z) - A Deeper Look into Convolutions via Pruning [9.89901717499058]
Modern architectures contain a very small number of fully-connected layers, often at the end, after multiple layers of convolutions.
Although this strategy already reduces the number of parameters, most of the convolutions can be eliminated as well, without suffering any loss in recognition performance.
In this work, we use the matrix characteristics based on eigenvalues in addition to the classical weight-based importance assignment approach for pruning to shed light on the internal mechanisms of a widely used family of CNNs.
arXiv Detail & Related papers (2021-02-04T18:55:03Z) - Spatio-Temporal Inception Graph Convolutional Networks for
Skeleton-Based Action Recognition [126.51241919472356]
We design a simple and highly modularized graph convolutional network architecture for skeleton-based action recognition.
Our network is constructed by repeating a building block that aggregates multi-granularity information from both the spatial and temporal paths.
arXiv Detail & Related papers (2020-11-26T14:43:04Z) - Learning Deep Interleaved Networks with Asymmetric Co-Attention for
Image Restoration [65.11022516031463]
We present a deep interleaved network (DIN) that learns how information at different states should be combined for high-quality (HQ) image reconstruction.
In this paper, we propose asymmetric co-attention (AsyCA) which is attached at each interleaved node to model the feature dependencies.
Our presented DIN can be trained end-to-end and applied to various image restoration tasks.
arXiv Detail & Related papers (2020-10-29T15:32:00Z) - Image Fine-grained Inpainting [89.17316318927621]
We present a one-stage model that utilizes dense combinations of dilated convolutions to obtain larger and more effective receptive fields.
To better train this efficient generator, in addition to the frequently used VGG feature-matching loss, we design a novel self-guided regression loss.
We also employ a discriminator with local and global branches to ensure local-global content consistency.
arXiv Detail & Related papers (2020-02-07T03:45:25Z)
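Several of the papers above trade off resolution against receptive field; the last entry uses dilated convolutions to enlarge the receptive field without downsampling. For a stack of stride-1 convolutions with equal kernel size k and dilation d, the receptive field grows as 1 + L·(k−1)·d. This is a standard formula, not taken from any of the papers; a small sketch makes the effect concrete:

```python
def receptive_field(num_layers, kernel_size, dilation):
    """Receptive field of a stack of identical stride-1 convolutions:
    each layer extends the field by (kernel_size - 1) * dilation pixels."""
    return 1 + num_layers * (kernel_size - 1) * dilation

# Three 3x3 layers: plain convolutions vs. dilation rate 2
print(receptive_field(3, 3, 1))  # 7
print(receptive_field(3, 3, 2))  # 13
```

With dilation the same three layers see nearly twice as much context at the same cost, which is why dense combinations of dilated convolutions are attractive when the input resolution must be preserved.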
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.