Evolving CNN Architectures: From Custom Designs to Deep Residual Models for Diverse Image Classification and Detection Tasks
- URL: http://arxiv.org/abs/2601.01099v1
- Date: Sat, 03 Jan 2026 07:45:08 GMT
- Title: Evolving CNN Architectures: From Custom Designs to Deep Residual Models for Diverse Image Classification and Detection Tasks
- Authors: Mahmudul Hasan, Mabsur Fatin Bin Hossain
- Abstract summary: This paper presents a comparative study of a custom convolutional neural network (CNN) architecture against widely used pretrained and transfer learning CNN models. The datasets span binary classification, fine-grained multiclass recognition, and object detection scenarios. We analyze how architectural factors, such as network depth, residual connections, and feature extraction strategies, influence classification and localization performance.
- Score: 0.9023847175654603
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: This paper presents a comparative study of a custom convolutional neural network (CNN) architecture against widely used pretrained and transfer learning CNN models across five real-world image datasets. The datasets span binary classification, fine-grained multiclass recognition, and object detection scenarios. We analyze how architectural factors, such as network depth, residual connections, and feature extraction strategies, influence classification and localization performance. The results show that deeper CNN architectures provide substantial performance gains on fine-grained multiclass datasets, while lightweight pretrained and transfer learning models remain highly effective for simpler binary classification tasks. Additionally, we extend the proposed architecture to an object detection setting, demonstrating its adaptability in identifying unauthorized auto-rickshaws in real-world traffic scenes. Building upon a systematic analysis of custom CNN architectures alongside pretrained and transfer learning models, this study provides practical guidance for selecting suitable network designs based on task complexity and resource constraints.
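The abstract credits residual connections and depth for the gains on fine-grained datasets. As a minimal illustrative sketch (not the authors' code), the core residual idea, y = ReLU(x + F(x)), can be written in plain NumPy, assuming a single-channel 2D feature map and a naive "same"-padded 3x3 convolution:

```python
import numpy as np

def conv3x3(x, w):
    """Naive 'same'-padded 3x3 convolution on a single-channel 2D map.
    Illustrative only: real CNNs use multi-channel, vectorized convolutions."""
    h, wd = x.shape
    padded = np.pad(x, 1)
    out = np.zeros_like(x)
    for i in range(h):
        for j in range(wd):
            out[i, j] = np.sum(padded[i:i + 3, j:j + 3] * w)
    return out

def residual_block(x, w1, w2):
    """y = ReLU(x + F(x)) with F = conv -> ReLU -> conv.
    The identity path lets gradients bypass F entirely."""
    f = np.maximum(conv3x3(x, w1), 0.0)   # first conv + ReLU
    f = conv3x3(f, w2)                    # second conv
    return np.maximum(x + f, 0.0)         # add skip connection, then ReLU
```

Because the identity path bypasses F, gradients flow directly through the sum at every block, which is the property that makes very deep residual networks trainable in the first place.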
Related papers
- Training a Custom CNN on Five Heterogeneous Image Datasets [1.4583375893645076]
This study investigates the effectiveness of CNN-based architectures across five datasets spanning agricultural and urban domains. These datasets introduce varying challenges, including differences in illumination, resolution, environmental complexity, and class imbalance. We evaluate a lightweight, task-specific custom CNN alongside established deep architectures, including ResNet-18 and VGG-16, trained both from scratch and using transfer learning.
arXiv Detail & Related papers (2026-01-08T08:44:17Z)
- Performance Analysis of Image Classification on Bangladeshi Datasets [0.0]
Convolutional Neural Networks (CNNs) have demonstrated remarkable success in image classification tasks. We present a comparative analysis of a custom-designed CNN and several widely used deep learning architectures for an image classification task.
arXiv Detail & Related papers (2026-01-07T21:15:16Z) - Tricks and Plug-ins for Gradient Boosting in Image Classification [17.43386196818751]
We introduce a novel framework for boosting CNN performance that integrates dynamic feature selection with the principles of BoostCNN. Our results show that our boosted CNN variants consistently outperform conventional CNNs in both predictive performance and training speed.
arXiv Detail & Related papers (2025-07-30T17:00:05Z) - Enhanced Convolutional Neural Networks for Improved Image Classification [0.40964539027092917]
CIFAR-10 is a widely used benchmark to evaluate the performance of classification models on small-scale, multi-class datasets. We propose an enhanced CNN architecture that integrates deeper convolutional blocks, batch normalization, and dropout regularization to achieve superior performance.
arXiv Detail & Related papers (2025-02-02T04:32:25Z) - Visual Prompting Upgrades Neural Network Sparsification: A Data-Model Perspective [64.04617968947697]
We introduce a novel data-model co-design perspective to promote superior weight sparsity.
Specifically, customized visual prompts are mounted to upgrade neural network sparsification in our proposed VPNs framework.
arXiv Detail & Related papers (2021-03-31T08:15:17Z)
- Joint Learning of Neural Transfer and Architecture Adaptation for Image Recognition [77.95361323613147]
Current state-of-the-art visual recognition systems rely on pretraining a neural network on a large-scale dataset and finetuning the network weights on a smaller dataset.
In this work, we show that dynamically adapting the network architecture to each domain task, together with weight finetuning, improves both efficiency and effectiveness.
Our method can be easily generalized to an unsupervised paradigm by replacing supernet training with self-supervised learning in the source domain tasks and performing linear evaluation in the downstream tasks.
arXiv Detail & Related papers (2021-02-27T03:17:20Z)
- Anomaly Detection on Attributed Networks via Contrastive Self-Supervised Learning [50.24174211654775]
We present a novel contrastive self-supervised learning framework for anomaly detection on attributed networks.
Our framework fully exploits the local information from network data by sampling a novel type of contrastive instance pair.
A graph neural network-based contrastive learning model is proposed to learn informative embedding from high-dimensional attributes and local structure.
arXiv Detail & Related papers (2020-12-20T23:24:31Z)
- Fusion of CNNs and statistical indicators to improve image classification [65.51757376525798]
Convolutional Networks have dominated the field of computer vision for the last ten years.
The main strategy to prolong this trend relies on further upscaling networks in size.
We hypothesise that adding heterogeneous sources of information to a CNN may be more cost-effective than building a bigger network.
arXiv Detail & Related papers (2020-06-22T12:46:44Z)
- Neural networks adapting to datasets: learning network size and topology [77.34726150561087]
We introduce a flexible setup allowing for a neural network to learn both its size and topology during the course of a gradient-based training.
The resulting network has the structure of a graph tailored to the particular learning task and dataset.
arXiv Detail & Related papers (2020-05-14T09:02:33Z)
- A Semi-Supervised Assessor of Neural Architectures [157.76189339451565]
We employ an auto-encoder to discover meaningful representations of neural architectures.
A graph convolutional neural network is introduced to predict the performance of architectures.
arXiv Detail & Related papers (2020-01-27T07:42:46Z)
- Convolution Neural Network Architecture Learning for Remote Sensing Scene Classification [22.29957803992306]
This paper proposes an automatic architecture learning procedure for remote sensing scene classification.
We introduce a learning strategy that allows efficient search in the architecture space by means of gradient descent.
An architecture generator finally maps the set of parameters into the CNN used in our experiments.
arXiv Detail & Related papers (2020-01-07T16:41:58Z)
- Inferring Convolutional Neural Networks' accuracies from their architectural characterizations [0.0]
We study the relationships between a CNN's architecture and its performance.
We show that the attributes can be predictive of the networks' performance in two specific computer vision-based physics problems.
We use machine learning models to predict whether a network can perform better than a certain threshold accuracy before training.
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.