CompNet: A Designated Model to Handle Combinations of Images and Designed features
- URL: http://arxiv.org/abs/2209.14454v1
- Date: Wed, 28 Sep 2022 22:43:22 GMT
- Title: CompNet: A Designated Model to Handle Combinations of Images and Designed features
- Authors: Bowen Qiu, Daniela Raicu, Jacob Furst, Roselyne Tchoua
- Abstract summary: We propose a new CNN-based model structure: CompNet, a composite convolutional neural network.
On classification tasks, the results indicate that this structure significantly reduces overfitting.
- Score: 0.24596929878045565
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Convolutional neural networks (CNNs) are among the most popular Artificial Neural Network (ANN) models in Computer Vision (CV). Researchers have developed a variety of CNN-based structures to solve problems such as image classification, object detection, and image similarity measurement. Although CNNs have shown their value in most cases, they still have a downside: they easily overfit when a dataset contains too few samples, as is the case for most medical image datasets. Additionally, many datasets contain both designed features and images, yet CNNs can only process images directly, which represents a missed opportunity to leverage the additional information. For this reason, we propose a new CNN-based model structure: CompNet, a composite convolutional neural network. This specially designed network accepts combinations of images and designed features as input in order to leverage all available information. The novelty of the structure is that it uses features learned from the images to weight the designed features, so that the information in both the images and the designed features is exploited. On classification tasks, the results indicate that this structure significantly reduces overfitting. Furthermore, we identified several similar approaches by other researchers that also combine images and designed features. For comparison, we first applied those approaches to the LIDC dataset and compared their results with CompNet's; we then applied CompNet to the datasets originally used in those works and compared against the results reported in their papers. In all of these comparisons, our model outperformed the similar approaches on classification tasks, both on the LIDC dataset and on their original datasets.
Related papers
- Combined CNN and ViT features off-the-shelf: Another astounding baseline for recognition [49.14350399025926]
We apply pre-trained architectures, originally developed for the ImageNet Large Scale Visual Recognition Challenge, to periocular recognition.
Middle-layer features from CNNs and ViTs are a suitable way to recognize individuals based on periocular images.
arXiv Detail & Related papers (2024-07-28T11:52:36Z)
- Fuzzy Convolution Neural Networks for Tabular Data Classification [0.0]
Convolutional neural networks (CNNs) have attracted a great deal of attention due to their remarkable performance in various domains.
In this paper, we propose a novel framework, the fuzzy convolution neural network (FCNN), tailored specifically for tabular data.
arXiv Detail & Related papers (2024-06-04T20:33:35Z)
- DeepDC: Deep Distance Correlation as a Perceptual Image Quality Evaluator [53.57431705309919]
ImageNet pre-trained deep neural networks (DNNs) show notable transferability for building effective image quality assessment (IQA) models.
We develop a novel full-reference IQA (FR-IQA) model based exclusively on pre-trained DNN features.
We conduct comprehensive experiments to demonstrate the superiority of the proposed quality model on five standard IQA datasets.
arXiv Detail & Related papers (2022-11-09T14:57:27Z)
- Large-Margin Representation Learning for Texture Classification [67.94823375350433]
This paper presents a novel approach combining convolutional layers (CLs) and large-margin metric learning for training supervised models on small datasets for texture classification.
The experimental results on texture and histopathologic image datasets have shown that the proposed approach achieves competitive accuracy with lower computational cost and faster convergence when compared to equivalent CNNs.
arXiv Detail & Related papers (2022-06-17T04:07:45Z)
- Deep ensembles in bioimage segmentation [74.01883650587321]
In this work, we propose an ensemble of convolutional neural networks (CNNs).
In ensemble methods, many different models are trained and then used for classification; the ensemble aggregates the outputs of the single classifiers (a minimal sketch of this aggregation is given after this list).
The proposed ensemble is implemented by combining different backbone networks using the DeepLabV3+ and HarDNet environment.
arXiv Detail & Related papers (2021-12-24T05:54:21Z)
- On the Effectiveness of Neural Ensembles for Image Classification with Small Datasets [2.3478438171452014]
We focus on image classification problems with a few labeled examples per class and improve data efficiency by using an ensemble of relatively small networks.
We show that ensembling relatively shallow networks is a simple yet effective technique that is generally better than current state-of-the-art approaches for learning from small datasets.
arXiv Detail & Related papers (2021-11-29T12:34:49Z)
- Deep Features for training Support Vector Machine [16.795405355504077]
This paper develops a generic computer vision system based on features extracted from trained CNNs.
Multiple learned features are combined into a single structure to work on different image classification tasks.
arXiv Detail & Related papers (2021-04-08T03:13:09Z)
- The Mind's Eye: Visualizing Class-Agnostic Features of CNNs [92.39082696657874]
We propose an approach to visually interpret CNN features given a set of images by creating corresponding images that depict the most informative features of a specific layer.
Our method uses a dual-objective activation and distance loss, without requiring a generator network or modifications to the original model.
arXiv Detail & Related papers (2021-01-29T07:46:39Z)
- Combining pretrained CNN feature extractors to enhance clustering of complex natural images [27.784346095205358]
This paper aims to provide insight into the use of pretrained CNN features for image clustering (IC).
To solve this issue, we propose to rephrase the IC problem as a multi-view clustering (MVC) problem.
We then propose a multi-input neural network architecture that is trained end-to-end to solve the MVC problem effectively.
arXiv Detail & Related papers (2021-01-07T21:23:04Z)
- Fusion of CNNs and statistical indicators to improve image classification [65.51757376525798]
Convolutional Networks have dominated the field of computer vision for the last ten years.
The main strategy to prolong this trend relies on further upscaling networks in size.
We hypothesise that adding heterogeneous sources of information may be more cost-effective for a CNN than building a bigger network.
arXiv Detail & Related papers (2020-12-20T23:24:31Z)
- Inferring Convolutional Neural Networks' accuracies from their architectural characterizations [0.0]
We study the relationships between a CNN's architecture and its performance.
We show that the attributes can be predictive of the networks' performance in two specific computer vision-based physics problems.
We use machine learning models to predict whether a network can perform better than a certain threshold accuracy before training.
arXiv Detail & Related papers (2020-01-07T16:41:58Z)
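As a generic illustration of the output aggregation mentioned in the deep-ensembles entry above (not that paper's DeepLabV3+/HarDNet segmentation setup), here is a minimal sketch that averages the softmax outputs of several independently trained classifiers; the toy linear models, input sizes, and plain averaging are assumptions made for this example.

```python
import torch
import torch.nn.functional as F

def ensemble_predict(models, x: torch.Tensor) -> torch.Tensor:
    """Aggregate several trained classifiers by averaging their softmax
    probabilities and taking the argmax; a simple stand-in for the
    aggregation step described in the deep-ensembles entry."""
    probs = torch.stack([F.softmax(m(x), dim=1) for m in models], dim=0)
    return probs.mean(dim=0).argmax(dim=1)  # predicted class per sample

# Example usage with a few toy (untrained) linear classifiers on 8-d inputs.
models = [torch.nn.Linear(8, 3) for _ in range(5)]
preds = ensemble_predict(models, torch.randn(16, 8))  # -> shape (16,)
```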