Convolutional Neural Networks from Image Markers
- URL: http://arxiv.org/abs/2012.12108v1
- Date: Tue, 15 Dec 2020 22:58:23 GMT
- Title: Convolutional Neural Networks from Image Markers
- Authors: Barbara C. Benato and Italos E. de Souza and Felipe L. Galvão and Alexandre X. Falcão
- Abstract summary: Feature Learning from Image Markers (FLIM) was recently proposed to estimate convolutional filters, with no backpropagation, from strokes drawn by a user on very few images.
This paper extends FLIM to fully connected layers and demonstrates it on different image classification problems.
The results show that FLIM-based convolutional neural networks can outperform the same architecture trained from scratch by backpropagation.
- Score: 62.997667081978825
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A technique named Feature Learning from Image Markers (FLIM) was recently
proposed to estimate convolutional filters, with no backpropagation, from
strokes drawn by a user on very few images (e.g., 1-3) per class, and
demonstrated for coconut-tree image classification. This paper extends FLIM to
fully connected layers and demonstrates it on different image classification
problems. The work evaluates marker selection from multiple users and the
impact of adding a fully connected layer. The results show that FLIM-based
convolutional neural networks can outperform the same architecture trained from
scratch by backpropagation.
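Although the estimation procedure is not spelled out in this summary, the FLIM idea of deriving filters from marker pixels without backpropagation can be sketched roughly as follows: gather patches centered on user-marked pixels, normalize them, and use k-means cluster centers as convolution kernels. A minimal illustrative sketch; the function name and the patch/cluster parameters are assumptions, not the paper's.

```python
import numpy as np
from sklearn.cluster import KMeans

def flim_like_filters(image, marker_coords, patch_size=3, n_filters=8):
    """Estimate convolutional filters from marker pixels (FLIM-style sketch).

    image: (H, W, C) float array; marker_coords: list of (row, col) pixels
    drawn by the user. Returns (n_filters, patch_size, patch_size, C) kernels.
    """
    k = patch_size // 2
    padded = np.pad(image, ((k, k), (k, k), (0, 0)), mode="reflect")
    # Collect one patch per marked pixel.
    patches = np.stack([
        padded[r:r + patch_size, c:c + patch_size, :]
        for r, c in marker_coords
    ])
    flat = patches.reshape(len(patches), -1)
    # Normalize each patch: zero mean, unit norm.
    flat -= flat.mean(axis=1, keepdims=True)
    flat /= np.linalg.norm(flat, axis=1, keepdims=True) + 1e-8
    # Cluster centers act as filters -- no backpropagation involved.
    km = KMeans(n_clusters=n_filters, n_init=10, random_state=0).fit(flat)
    return km.cluster_centers_.reshape(n_filters, patch_size, patch_size, -1)

# Toy usage: 8 filters from 50 random "strokes" on a random image.
img = np.random.rand(64, 64, 3)
marks = [(np.random.randint(64), np.random.randint(64)) for _ in range(50)]
print(flim_like_filters(img, marks).shape)  # (8, 3, 3, 3)
```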
Related papers
- Dual-branch PolSAR Image Classification Based on GraphMAE and Local Feature Extraction [22.39266854681996]
In this paper, we propose a dual-branch classification model based on generative self-supervised learning.
The first branch is a superpixel branch, which learns superpixel-level polarimetric representations using a generative self-supervised graph masked autoencoder.
To obtain finer classification results, a CNN-based pixel branch is further incorporated to learn pixel-level features.
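As a rough illustration of such a dual-branch design (not the paper's exact architecture; the channel counts, class count, and the assumption that superpixel embeddings arrive precomputed are ours), a minimal PyTorch sketch:

```python
import torch
import torch.nn as nn

class DualBranchClassifier(nn.Module):
    """Sketch of a dual-branch model: a pixel-level CNN branch fused with
    superpixel-level embeddings (assumed precomputed by a graph masked
    autoencoder; here they are simply an input tensor)."""

    def __init__(self, in_ch=9, sp_dim=64, n_classes=15):
        super().__init__()
        self.pixel_branch = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(64 + sp_dim, n_classes)

    def forward(self, patch, superpixel_embedding):
        pixel_feat = self.pixel_branch(patch)  # (B, 64)
        fused = torch.cat([pixel_feat, superpixel_embedding], dim=1)
        return self.head(fused)

model = DualBranchClassifier()
logits = model(torch.randn(4, 9, 15, 15), torch.randn(4, 64))
print(logits.shape)  # torch.Size([4, 15])
```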
arXiv Detail & Related papers (2024-08-08T08:17:50Z)
- Feature Activation Map: Visual Explanation of Deep Learning Models for Image Classification [17.373054348176932]
In this work, a post-hoc interpretation tool named feature activation map (FAM) is proposed.
FAM can interpret deep learning models that have no FC layer acting as the classifier.
Experiments conducted on ten deep learning models for few-shot image classification, contrastive learning image classification and image retrieval tasks demonstrate the effectiveness of the proposed FAM algorithm.
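The summary does not state how FAM computes its maps, so the following is only a loose stand-in (our assumption, not the paper's algorithm): for a model with no FC classifier weights, score each spatial location of the last convolutional feature map by its cosine similarity to the image's pooled global embedding.

```python
import torch
import torch.nn.functional as F

def similarity_activation_map(feature_map):
    """Heuristic activation map for models with no FC classifier
    (a loose stand-in for FAM, not the paper's exact algorithm).

    feature_map: (C, H, W) output of the last conv layer."""
    c, h, w = feature_map.shape
    global_vec = feature_map.mean(dim=(1, 2))            # (C,) pooled embedding
    local = feature_map.permute(1, 2, 0).reshape(-1, c)  # (H*W, C)
    sims = F.cosine_similarity(local, global_vec.expand_as(local), dim=1)
    fam = sims.reshape(h, w)
    # Normalize to [0, 1] for visualization.
    return (fam - fam.min()) / (fam.max() - fam.min() + 1e-8)

print(similarity_activation_map(torch.randn(256, 7, 7)).shape)  # (7, 7)
```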
arXiv Detail & Related papers (2023-07-11T05:33:46Z)
- Pushing the Efficiency Limit Using Structured Sparse Convolutions [82.31130122200578]
We propose Structured Sparse Convolution (SSC), which leverages the inherent structure in images to reduce the parameters in the convolutional filter.
We show that SSC is a generalization of commonly used layers (depthwise, groupwise and pointwise convolution) in efficient architectures.
Architectures based on SSC achieve state-of-the-art performance compared to baselines on CIFAR-10, CIFAR-100, Tiny-ImageNet, and ImageNet classification benchmarks.
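For intuition about the kind of structure involved, the depthwise-plus-pointwise factorization that SSC is said to generalize can be compared against a dense convolution. A small PyTorch sketch (the channel sizes are arbitrary, and this shows only the familiar special case, not SSC itself):

```python
import torch.nn as nn

# Standard convolution vs. a structured (depthwise-separable) factorization,
# the kind of special case SSC generalizes.
cin, cout, k = 64, 128, 3

dense = nn.Conv2d(cin, cout, k, padding=1)
separable = nn.Sequential(
    nn.Conv2d(cin, cin, k, padding=1, groups=cin),  # depthwise: one filter per channel
    nn.Conv2d(cin, cout, 1),                        # pointwise: 1x1 channel mixing
)

count = lambda m: sum(p.numel() for p in m.parameters())
print(count(dense), count(separable))  # 73856 vs. 8960 parameters
```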
arXiv Detail & Related papers (2022-10-23T18:37:22Z)
- Multilayer deep feature extraction for visual texture recognition [0.0]
This paper focuses on improving the accuracy of convolutional neural networks in texture classification.
This is done by extracting features from multiple convolutional layers of a pretrained neural network and aggregating such features using Fisher vectors.
We verify the effectiveness of our method on texture classification of benchmark datasets, as well as on a practical task of Brazilian plant species identification.
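A simplified sketch of the aggregation step, assuming the standard first-order Fisher vector over a diagonal-covariance GMM (the full Fisher vector also includes second-order terms, and the descriptor dimensions here are arbitrary):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fisher_vector_1st_order(descriptors, gmm):
    """Simplified Fisher vector (first-order statistics only).

    descriptors: (N, D) local features pooled from several convolutional
    layers of a pretrained CNN; gmm: fitted diagonal-covariance GMM."""
    q = gmm.predict_proba(descriptors)                 # (N, K) soft assignments
    diff = descriptors[:, None, :] - gmm.means_[None]  # (N, K, D)
    diff /= np.sqrt(gmm.covariances_)[None]            # per-component std
    fv = (q[:, :, None] * diff).sum(axis=0)            # (K, D)
    fv /= descriptors.shape[0] * np.sqrt(gmm.weights_)[:, None]
    return fv.ravel()                                  # (K * D,)

# Toy usage: 500 local descriptors of dimension 64, 8-component GMM.
desc = np.random.randn(500, 64)
gmm = GaussianMixture(n_components=8, covariance_type="diag").fit(desc)
print(fisher_vector_1st_order(desc, gmm).shape)  # (512,)
```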
arXiv Detail & Related papers (2022-08-22T03:53:43Z)
- Image Quality Assessment using Contrastive Learning [50.265638572116984]
We train a deep Convolutional Neural Network (CNN) using a contrastive pairwise objective to solve an auxiliary problem.
We show through extensive experiments that CONTRIQUE (the CONtrastive Image QUality Evaluator) achieves competitive performance when compared to state-of-the-art NR image quality models.
Our results suggest that powerful quality representations with perceptual relevance can be obtained without requiring large labeled subjective image quality datasets.
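As a generic illustration of a contrastive pairwise objective, here is the NT-Xent loss used in SimCLR-style training (not necessarily CONTRIQUE's exact formulation, which contrasts distortion classes rather than plain augmentations):

```python
import torch
import torch.nn.functional as F

def ntxent_loss(z1, z2, temperature=0.1):
    """Contrastive pairwise (NT-Xent) loss: z1[i] and z2[i] are
    embeddings of two views of the same image."""
    z = F.normalize(torch.cat([z1, z2]), dim=1)  # (2B, D) unit vectors
    sim = z @ z.t() / temperature                # scaled cosine similarities
    sim.fill_diagonal_(float("-inf"))            # exclude self-pairs
    b = z1.size(0)
    # Positive of index i is i+B (and vice versa).
    targets = torch.arange(2 * b, device=z.device).roll(b)
    return F.cross_entropy(sim, targets)

loss = ntxent_loss(torch.randn(16, 128), torch.randn(16, 128))
print(loss.item())
```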
arXiv Detail & Related papers (2021-10-25T21:01:00Z)
- Fusion of evidential CNN classifiers for image classification [6.230751621285322]
We propose an information-fusion approach based on belief functions to combine convolutional neural networks.
In this approach, several pre-trained Dempster-Shafer (DS)-based CNN architectures extract features from input images and convert them into mass functions on different frames of discernment.
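The combination step can be illustrated with the standard Dempster's rule for two mass functions on a toy frame of discernment (the class names and masses below are invented for illustration):

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions over the same
    frame of discernment. Masses are dicts mapping frozensets (focal
    elements) to belief mass."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb  # mass falling on the empty set
    # Normalize by the non-conflicting mass.
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

# Two "classifiers" expressing belief over classes {cat, dog}:
m1 = {frozenset({"cat"}): 0.6, frozenset({"cat", "dog"}): 0.4}
m2 = {frozenset({"dog"}): 0.3, frozenset({"cat", "dog"}): 0.7}
print(dempster_combine(m1, m2))
```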
arXiv Detail & Related papers (2021-08-23T15:12:26Z)
- Learning CNN filters from user-drawn image markers for coconut-tree image classification [78.42152902652215]
We present a method that needs a minimal set of user-selected images to train the CNN's feature extractor.
The method learns the filters of each convolutional layer from user-drawn markers in image regions that discriminate classes.
It does not rely on optimization based on backpropagation, and we demonstrate its advantages on the binary classification of coconut-tree aerial images.
arXiv Detail & Related papers (2020-08-08T15:50:23Z)
- Learning to Compose Hypercolumns for Visual Correspondence [57.93635236871264]
We introduce a novel approach to visual correspondence that dynamically composes effective features by leveraging relevant layers conditioned on the images to match.
The proposed method, dubbed Dynamic Hyperpixel Flow, learns to compose hypercolumn features on the fly by selecting a small number of relevant layers from a deep convolutional neural network.
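A minimal sketch of the composition step, assuming hypercolumns are built by upsampling and concatenating selected layer outputs (in the paper the selection is learned per image pair; here it is just a fixed list):

```python
import torch
import torch.nn.functional as F

def compose_hypercolumn(feature_maps, selected, out_size=(64, 64)):
    """Build a hypercolumn by upsampling and concatenating a chosen
    subset of CNN layer outputs."""
    chosen = [
        F.interpolate(feature_maps[i], size=out_size,
                      mode="bilinear", align_corners=False)
        for i in selected
    ]
    return torch.cat(chosen, dim=1)  # (B, sum of selected channels, H, W)

# Toy pyramid of three layer outputs at decreasing resolution:
feats = [torch.randn(1, 64, 64, 64),
         torch.randn(1, 128, 32, 32),
         torch.randn(1, 256, 16, 16)]
print(compose_hypercolumn(feats, selected=[0, 2]).shape)  # (1, 320, 64, 64)
```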
arXiv Detail & Related papers (2020-07-21T04:03:22Z)
- I Am Going MAD: Maximum Discrepancy Competition for Comparing Classifiers Adaptively [135.7695909882746]
We propose the MAximum Discrepancy (MAD) competition.
We adaptively sample a small test set from an arbitrarily large corpus of unlabeled images.
Human labeling on the resulting model-dependent image sets reveals the relative performance of the competing classifiers.
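A crude proxy for the sampling criterion (our simplification: rank unlabeled images by confident disagreement between two classifiers, not the paper's exact discrepancy measure):

```python
import numpy as np

def mad_sample(prob_a, prob_b, k=10):
    """Pick the k unlabeled images on which two classifiers disagree most
    (a rough stand-in for the MAD competition's discrepancy criterion).

    prob_a, prob_b: (N, n_classes) softmax outputs of two models."""
    pred_a, pred_b = prob_a.argmax(1), prob_b.argmax(1)
    confidence = prob_a.max(1) * prob_b.max(1)
    # Disagreements first, most confident ones at the top.
    score = np.where(pred_a != pred_b, confidence, -1.0)
    return np.argsort(score)[::-1][:k]

# Toy usage: outputs of two models on 1000 unlabeled images, 5 classes.
rng = np.random.default_rng(0)
pa = rng.dirichlet(np.ones(5), 1000)
pb = rng.dirichlet(np.ones(5), 1000)
print(mad_sample(pa, pb))  # indices to send for human labeling
```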
arXiv Detail & Related papers (2020-02-25T03:32:29Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.