Hybrid Convolution Neural Network Integrated with Pseudo-Newton Boosting for Lumbar Spine Degeneration Detection
- URL: http://arxiv.org/abs/2511.13877v1
- Date: Mon, 17 Nov 2025 19:54:52 GMT
- Title: Hybrid Convolution Neural Network Integrated with Pseudo-Newton Boosting for Lumbar Spine Degeneration Detection
- Authors: Pandiyaraju V, Abishek Karthik, Jaspin K, Kannan A, Jaime Lloret
- Abstract summary: This paper proposes a new enhanced model architecture to perform classification of lumbar spine degeneration with DICOM images. The proposed model is differentiated from traditional transfer learning methods as it incorporates a Pseudo-Newton Boosting layer. The architecture achieves a significant performance boost, reaching a precision of 0.9, recall of 0.861, F1 score of 0.88, loss of 0.18, and an accuracy of 88.1%, compared to the baseline model, EfficientNet.
- Score: 3.485078676424349
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper proposes an enhanced model architecture for classifying lumbar spine degeneration from DICOM images using a hybrid approach that integrates EfficientNet and VGG19 with custom-designed components. The proposed model differs from traditional transfer learning methods in that it incorporates a Pseudo-Newton Boosting layer along with a Sparsity-Induced Feature Reduction Layer, forming a multi-tiered framework that further improves feature selection and representation. The Pseudo-Newton Boosting layer adaptively reweights features, emphasising the finer anatomical details that are mostly lost in a standard transfer learning setup. In addition, the Sparsity-Induced Layer removes redundancy from the learned features, producing lean yet robust representations of lumbar spine pathology. The architecture is novel in that it overcomes the constraints of traditional transfer learning, especially in the high-dimensional context of medical images, and achieves a significant performance boost over the EfficientNet baseline, reaching a precision of 0.9, recall of 0.861, F1 score of 0.88, loss of 0.18, and an accuracy of 88.1%. This work presents the architectures, preprocessing pipeline, and experimental results, which contribute to the development of automated diagnostic tools for medical images.
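The abstract describes the two custom components only at a high level. As a rough illustration of the idea, a curvature-scaled ("pseudo-Newton") feature reweighting followed by a sparsity-inducing soft-threshold can be sketched on a toy separable quadratic. All names, shapes, and values below are hypothetical stand-ins, not the paper's implementation:

```python
import numpy as np

def soft_threshold(x, lam):
    """Sparsity-induced reduction: shrink weights toward zero, zeroing small entries."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def pseudo_newton_step(w, grad, hess_diag, eps=1e-8):
    """Newton-like reweighting: scale the gradient by inverse diagonal curvature."""
    return w - grad / (hess_diag + eps)

# Toy separable quadratic loss: L(w) = 0.5 * sum_i h_i * (w_i - w_star_i)^2,
# standing in for a per-feature objective over fused backbone features.
w_star = np.array([0.9, -0.05, 0.0, 0.4])   # hypothetical optimal feature weights
h = np.array([2.0, 0.5, 1.0, 4.0])          # per-feature curvature estimates
w = np.zeros_like(w_star)

grad = h * (w - w_star)                     # gradient of L at w
w = pseudo_newton_step(w, grad, h)          # exact minimiser in one step here
w = soft_threshold(w, 0.1)                  # prune near-zero feature weights

print(w)  # the two small coordinates are zeroed; the large ones shrink by 0.1
```

On a separable quadratic the diagonal-Newton step recovers the per-feature optimum in one update, and the soft-threshold then discards the features whose weights fall below the threshold, which is the "lean yet robust representation" effect the abstract attributes to the Sparsity-Induced Layer.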
Related papers
- Leveraging Quantum-Based Architectures for Robust Diagnostics [0.0]
The objective of this study is to diagnose and differentiate kidney stones, cysts, and tumors using Computed Tomography (CT) images of the kidney. We combine a pretrained ResNet50 encoder with a Quantum Convolutional Neural Network (QCNN) to explore quantum-assisted diagnosis.
arXiv Detail & Related papers (2025-11-15T23:36:58Z) - Enhanced SegNet with Integrated Grad-CAM for Interpretable Retinal Layer Segmentation in OCT Images [0.0]
This study proposes an improved SegNet-based deep learning framework for automated and interpretable retinal layer segmentation. Architectural innovations, including modified pooling strategies, enhance feature extraction from noisy OCT images. Grad-CAM visualizations highlighted anatomically relevant regions, aligning segmentation with clinical biomarkers.
arXiv Detail & Related papers (2025-09-09T14:31:51Z) - Improved Brain Tumor Detection in MRI: Fuzzy Sigmoid Convolution in Deep Learning [5.350541719319564]
Fuzzy sigmoid convolution (FSC) is introduced along with two additional modules: top-of-the-funnel and middle-of-the-funnel. A novel convolutional operator is central to this approach, effectively dilating the receptive field while preserving input data integrity. This research offers lightweight, high-performance deep-learning models for medical imaging applications.
arXiv Detail & Related papers (2025-05-08T13:02:44Z) - Stochastic Engrams for Efficient Continual Learning with Binarized Neural Networks [4.014396794141682]
We propose a novel approach that integrates stochastically activated engrams as a gating mechanism for metaplastic binarized neural networks (mBNNs). Our findings demonstrate (A) an improved stability vs. plasticity trade-off, (B) reduced memory intensiveness, and (C) enhanced performance in binarized architectures.
arXiv Detail & Related papers (2025-03-27T12:21:00Z) - Learned Image resizing with efficient training (LRET) facilitates improved performance of large-scale digital histopathology image classification models [0.0]
Histologic examination plays a crucial role in oncology research and diagnostics.
Current approaches to training deep convolutional neural networks (DCNN) result in suboptimal model performance.
We introduce a novel approach that addresses the main limitations of traditional histopathology classification model training.
arXiv Detail & Related papers (2024-01-19T23:45:47Z) - Multi-view Hybrid Graph Convolutional Network for Volume-to-mesh Reconstruction in Cardiovascular MRI [43.47826598981827]
We introduce HybridVNet, a novel architecture for direct image-to-mesh extraction. We show it can efficiently handle surface and volumetric meshes by encoding them as graph structures. Our model combines traditional convolutional networks with variational graph generative models, deep supervision and mesh-specific regularisation.
arXiv Detail & Related papers (2023-11-22T21:51:29Z) - Efficient and Flexible Neural Network Training through Layer-wise Feedback Propagation [49.44309457870649]
Layer-wise Feedback Propagation (LFP) is a novel training principle for neural network-like predictors. LFP decomposes a reward to individual neurons based on their respective contributions. Our method then implements a greedy approach, reinforcing helpful parts of the network and weakening harmful ones.
arXiv Detail & Related papers (2023-08-23T10:48:28Z) - Bridging Synthetic and Real Images: a Transferable and Multiple Consistency aided Fundus Image Enhancement Framework [61.74188977009786]
We propose an end-to-end optimized teacher-student framework to simultaneously conduct image enhancement and domain adaptation.
We also propose a novel multi-stage multi-attention guided enhancement network (MAGE-Net) as the backbones of our teacher and student network.
arXiv Detail & Related papers (2023-02-23T06:16:15Z) - Convolutional Neural Generative Coding: Scaling Predictive Coding to
Natural Images [79.07468367923619]
We develop convolutional neural generative coding (Conv-NGC)
We implement a flexible neurobiologically-motivated algorithm that progressively refines latent state maps.
We study the effectiveness of our brain-inspired neural system on the tasks of reconstruction and image denoising.
arXiv Detail & Related papers (2022-11-22T06:42:41Z) - A Generic Shared Attention Mechanism for Various Backbone Neural Networks [53.36677373145012]
Self-attention modules (SAMs) produce strongly correlated attention maps across different layers.
Dense-and-Implicit Attention (DIA) shares SAMs across layers and employs a long short-term memory module.
Our simple yet effective DIA can consistently enhance various network backbones.
arXiv Detail & Related papers (2022-10-27T13:24:08Z) - FOSTER: Feature Boosting and Compression for Class-Incremental Learning [52.603520403933985]
Deep neural networks suffer from catastrophic forgetting when learning new categories.
We propose a novel two-stage learning paradigm FOSTER, empowering the model to learn new categories adaptively.
arXiv Detail & Related papers (2022-04-10T11:38:33Z) - Classification of COVID-19 in CT Scans using Multi-Source Transfer
Learning [91.3755431537592]
We propose the use of Multi-Source Transfer Learning to improve upon traditional Transfer Learning for the classification of COVID-19 from CT scans.
With our multi-source fine-tuning approach, our models outperformed baseline models fine-tuned with ImageNet.
Our best performing model was able to achieve an accuracy of 0.893 and a Recall score of 0.897, outperforming its baseline Recall score by 9.3%.
arXiv Detail & Related papers (2020-09-22T11:53:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.