T-Fusion Net: A Novel Deep Neural Network Augmented with Multiple
Localizations based Spatial Attention Mechanisms for Covid-19 Detection
- URL: http://arxiv.org/abs/2308.00053v1
- Date: Mon, 31 Jul 2023 18:18:01 GMT
- Title: T-Fusion Net: A Novel Deep Neural Network Augmented with Multiple
Localizations based Spatial Attention Mechanisms for Covid-19 Detection
- Authors: Susmita Ghosh and Abhiroop Chatterjee
- Abstract summary: The present work proposes a new deep neural network (termed T-Fusion Net) that is augmented with multiple localizations-based spatial attention.
A homogeneous ensemble of the said network is further used to enhance image classification accuracy.
The proposed T-Fusion Net and the homogeneous ensemble model exhibit better performance than other state-of-the-art methods.
- Score: 0.7614628596146599
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, deep neural networks have been yielding better performance in
image classification tasks. However, the increasing complexity of datasets and
the demand for improved performance necessitate the exploration of innovative
techniques. The present work proposes a new deep neural network (termed
T-Fusion Net) that is augmented with multiple localizations-based spatial attention.
This attention mechanism allows the network to focus on relevant image regions,
improving its discriminative power. A homogeneous ensemble of the said network
is further used to enhance image classification accuracy. For ensembling, the
proposed approach considers multiple instances of individual T-Fusion Net. The
model incorporates fuzzy max fusion to merge the outputs of individual nets.
The fusion process is optimized through a carefully chosen parameter that
balances the contributions of the individual models. Experimental
evaluations on the benchmark Covid-19 (SARS-CoV-2 CT scan) dataset demonstrate the
effectiveness of the proposed T-Fusion Net as well as its ensemble. The
proposed T-Fusion Net and the homogeneous ensemble model outperform other
state-of-the-art methods, achieving accuracies of 97.59% and 98.4%,
respectively.
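Note: the listing above carries no implementation, so the snippet below is only a minimal PyTorch sketch of what a multiple-localizations-based spatial attention block might look like, assuming the "localizations" are parallel convolutional branches with different kernel sizes whose single-channel attention maps are merged by an element-wise maximum. The module name, kernel sizes, and merge rule are illustrative assumptions, not the authors' code.

import torch
import torch.nn as nn

class MultiLocalizationSpatialAttention(nn.Module):
    # Hypothetical sketch: each branch attends over a different neighbourhood
    # size ("localization") and produces a (B, 1, H, W) attention map; the maps
    # are merged and used to re-weight the input feature map spatially.
    def __init__(self, in_channels, kernel_sizes=(3, 5, 7)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(in_channels, 1, k, padding=k // 2) for k in kernel_sizes
        )

    def forward(self, x):
        maps = torch.stack([branch(x) for branch in self.branches], dim=0)
        attn = torch.sigmoid(maps.max(dim=0).values)  # fuse maps, squash to (0, 1)
        return x * attn  # emphasize the relevant spatial regions

# Toy usage on a random feature map.
features = torch.randn(2, 64, 32, 32)
print(MultiLocalizationSpatialAttention(64)(features).shape)  # torch.Size([2, 64, 32, 32])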
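The fuzzy max fusion used for the homogeneous ensemble is likewise only described at a high level. The sketch below assumes a log-sum-exp "soft maximum" over the class probabilities of the individual T-Fusion Net instances, with a sharpness parameter standing in for the abstract's "carefully chosen parameter"; both the operator and the parameter beta are assumptions, not the paper's exact formulation.

import torch

def fuzzy_max_fusion(probs, beta=5.0):
    # probs: (num_models, batch, num_classes) class probabilities from the ensemble.
    # beta (hypothetical): large values approach a hard max over models,
    # small values approach plain averaging.
    fused = torch.logsumexp(beta * probs, dim=0) / beta  # smooth (fuzzy) maximum
    return fused / fused.sum(dim=-1, keepdim=True)       # renormalize per sample

# Toy usage: three ensemble members, batch of 4, two classes (Covid / non-Covid).
outputs = torch.softmax(torch.randn(3, 4, 2), dim=-1)
print(fuzzy_max_fusion(outputs).shape)  # torch.Size([4, 2])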
Related papers
- CCDepth: A Lightweight Self-supervised Depth Estimation Network with Enhanced Interpretability [11.076431337488973]
This study proposes a novel hybrid self-supervised depth estimation network, CCDepth, comprising convolutional neural networks (CNNs) and the white-box CRATE network.
This novel network uses CNNs and the CRATE modules to extract local and global information in images, respectively, thereby boosting learning efficiency and reducing model size.
arXiv Detail & Related papers (2024-09-30T04:19:40Z)
- Affine-based Deformable Attention and Selective Fusion for Semi-dense Matching [30.272791354494373]
We introduce affine-based local attention to model cross-view deformations.
We also present selective fusion to merge local and global messages from cross attention.
arXiv Detail & Related papers (2024-05-22T17:57:37Z)
- DCNN: Dual Cross-current Neural Networks Realized Using An Interactive Deep Learning Discriminator for Fine-grained Objects [48.65846477275723]
This study proposes novel dual-current neural networks (DCNN) to improve the accuracy of fine-grained image classification.
The main novel design features for constructing a weakly supervised learning backbone model DCNN include (a) extracting heterogeneous data, (b) keeping the feature map resolution unchanged, (c) expanding the receptive field, and (d) fusing global representations and local features.
arXiv Detail & Related papers (2024-05-07T07:51:28Z)
- Towards Meta-Pruning via Optimal Transport [64.6060250923073]
This paper introduces a novel approach named Intra-Fusion, challenging the prevailing pruning paradigm.
We leverage the concepts of model fusion and Optimal Transport to arrive at a more effective sparse model representation.
We benchmark our results for various networks on commonly used datasets such as CIFAR-10, CIFAR-100, and ImageNet.
arXiv Detail & Related papers (2024-02-12T17:50:56Z)
- ADASR: An Adversarial Auto-Augmentation Framework for Hyperspectral and Multispectral Data Fusion [54.668445421149364]
Deep learning-based hyperspectral image (HSI) super-resolution aims to generate a high-spatial-resolution HSI (HR-HSI) by fusing an HSI and a multispectral image (MSI) with deep neural networks (DNNs).
In this letter, we propose ADASR, a novel adversarial automatic data augmentation framework that automatically optimizes and augments HSI-MSI sample pairs to enrich data diversity for HSI-MSI fusion.
arXiv Detail & Related papers (2023-10-11T07:30:37Z)
- A Generic Shared Attention Mechanism for Various Backbone Neural Networks [53.36677373145012]
Self-attention modules (SAMs) produce strongly correlated attention maps across different layers.
Dense-and-Implicit Attention (DIA) shares SAMs across layers and employs a long short-term memory module.
Our simple yet effective DIA can consistently enhance various network backbones.
arXiv Detail & Related papers (2022-10-27T13:24:08Z)
- Enabling Efficient Deep Convolutional Neural Network-based Sensor Fusion for Autonomous Driving [10.326217500172689]
Fusion between DCNNs has been shown to be a promising strategy for achieving satisfactory perception accuracy.
We propose a feature disparity metric to measure the degree of feature disparity between the feature maps being fused.
We also propose a Layer-sharing technique in the deep layer that can achieve better accuracy with less computational overhead.
arXiv Detail & Related papers (2022-02-22T23:35:30Z)
- Model Fusion of Heterogeneous Neural Networks via Cross-Layer Alignment [17.735593218773758]
We propose a novel model fusion framework, named CLAFusion, to fuse neural networks with different numbers of layers.
Based on the cross-layer alignment, our framework balances the number of layers of neural networks before applying layer-wise model fusion.
arXiv Detail & Related papers (2021-10-29T05:02:23Z)
- Ensembles of Spiking Neural Networks [0.3007949058551534]
This paper demonstrates how to construct ensembles of spiking neural networks producing state-of-the-art results.
We achieve classification accuracies of 98.71%, 100.0%, and 99.09% on the MNIST, NMNIST, and DVS Gesture datasets, respectively.
We formalize spiking neural networks as GLM predictors, identifying a suitable representation for their target domain.
arXiv Detail & Related papers (2020-10-15T17:45:18Z)
- Iterative Network for Image Super-Resolution [69.07361550998318]
Single image super-resolution (SISR) has been greatly revitalized by the recent development of convolutional neural networks (CNNs).
This paper provides new insight into conventional SISR algorithms and proposes a substantially different approach relying on iterative optimization.
A novel iterative super-resolution network (ISRN) is proposed on top of the iterative optimization.
arXiv Detail & Related papers (2020-05-20T11:11:47Z)
- Model Fusion via Optimal Transport [64.13185244219353]
We present a layer-wise model fusion algorithm for neural networks.
We show that this can successfully yield "one-shot" knowledge transfer between neural networks trained on heterogeneous non-i.i.d. data; a toy alignment-and-average sketch follows this list.
arXiv Detail & Related papers (2019-10-12T22:07:15Z)
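For the "Model Fusion via Optimal Transport" entry above, a toy sketch of the alignment-and-average idea is given below, assuming the simplest special case in which optimal transport reduces to a one-to-one permutation of units found with the Hungarian algorithm. The function name and cost choice are illustrative, and a real implementation would also permute the following layer's incoming weights.

import numpy as np
from scipy.optimize import linear_sum_assignment

def align_and_average(w_a, w_b):
    # w_a, w_b: (out_units, in_features) weight matrices of the same layer
    # from two separately trained models.
    cost = ((w_a[:, None, :] - w_b[None, :, :]) ** 2).sum(-1)  # pairwise unit distances
    rows, cols = linear_sum_assignment(cost)                   # best one-to-one matching
    return 0.5 * (w_a[rows] + w_b[cols])                       # average the aligned units

# Toy usage: two layers with 4 units of 8 inputs each.
print(align_and_average(np.random.randn(4, 8), np.random.randn(4, 8)).shape)  # (4, 8)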
This list is automatically generated from the titles and abstracts of the papers on this site.