Large-Margin Representation Learning for Texture Classification
- URL: http://arxiv.org/abs/2206.08537v1
- Date: Fri, 17 Jun 2022 04:07:45 GMT
- Title: Large-Margin Representation Learning for Texture Classification
- Authors: Jonathan de Matos and Luiz Eduardo Soares de Oliveira and Alceu de
Souza Britto Junior and Alessandro Lameiras Koerich
- Abstract summary: This paper presents a novel approach combining convolutional layers (CLs) and large-margin metric learning for training supervised models on small datasets for texture classification.
The experimental results on texture and histopathologic image datasets have shown that the proposed approach achieves competitive accuracy with lower computational cost and faster convergence when compared to equivalent CNNs.
- Score: 67.94823375350433
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper presents a novel approach combining convolutional layers (CLs) and
large-margin metric learning for training supervised models on small datasets
for texture classification. The core of such an approach is a loss function
that computes the distances between instances of interest and support vectors.
The objective is to update the weights of CLs iteratively to learn a
representation with a large margin between classes. Each iteration results in a
large-margin discriminant model represented by support vectors based on such a
representation. The advantage of the proposed approach w.r.t. convolutional
neural networks (CNNs) is two-fold. First, it allows representation learning
with a small amount of data due to the reduced number of parameters compared to
an equivalent CNN. Second, it has a low training cost since the backpropagation
considers only support vectors. The experimental results on texture and
histopathologic image datasets have shown that the proposed approach achieves
competitive accuracy with lower computational cost and faster convergence when
compared to equivalent CNNs.
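The loss described above computes distances between embedded instances and support vectors. As a rough illustration only (a minimal NumPy sketch of a hinge-style margin over distances to per-class support vectors; the function name, margin value, and toy data are ours, not the authors' implementation):

```python
import numpy as np

def large_margin_loss(z, y, support_vectors, sv_labels, margin=1.0):
    """Hinge-style loss on an embedding z with label y: penalize z when it
    is not closer to a same-class support vector than to any other-class
    support vector by at least `margin`."""
    d = np.linalg.norm(support_vectors - z, axis=1)  # distances to all SVs
    d_pos = d[sv_labels == y].min()                  # nearest same-class SV
    d_neg = d[sv_labels != y].min()                  # nearest other-class SV
    return max(0.0, margin + d_pos - d_neg)

# toy example: two classes, one support vector each
svs = np.array([[0.0, 0.0], [4.0, 0.0]])
labels = np.array([0, 1])
z_good = np.array([0.1, 0.0])   # near its own class -> zero loss
z_bad = np.array([3.5, 0.0])    # near the wrong class -> positive loss
loss_good = large_margin_loss(z_good, 0, svs, labels)
loss_bad = large_margin_loss(z_bad, 0, svs, labels)
```

Backpropagating a loss of this shape through the convolutional layers only involves the support vectors, which is consistent with the low training cost the abstract claims.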
Related papers
- LiteNeXt: A Novel Lightweight ConvMixer-based Model with Self-embedding Representation Parallel for Medical Image Segmentation [2.0901574458380403]
We propose a new lightweight but efficient model, namely LiteNeXt, for medical image segmentation.
LiteNeXt is trained from scratch with a small number of parameters (0.71M) and a low computational cost (0.42 GFLOPs).
arXiv Detail & Related papers (2024-04-04T01:59:19Z)
- Learning Partial Correlation based Deep Visual Representation for Image Classification [61.0532370259644]
We formulate sparse inverse covariance estimation (SICE) as a novel structured layer of CNN.
Our work obtains a partial correlation based deep visual representation and mitigates the small sample problem.
Experiments show the efficacy and superior classification performance of our model.
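A partial-correlation representation can be derived from the precision (inverse covariance) matrix via rho_ij = -P_ij / sqrt(P_ii * P_jj). A small NumPy sketch of that standard formula (illustrative only, not the paper's structured SICE layer):

```python
import numpy as np

def partial_correlation(cov):
    """Partial correlations from a covariance matrix:
    rho_ij = -P_ij / sqrt(P_ii * P_jj), with P the precision matrix
    (inverse covariance). Diagonal set to 1 by convention."""
    P = np.linalg.inv(cov)
    d = np.sqrt(np.diag(P))
    rho = -P / np.outer(d, d)
    np.fill_diagonal(rho, 1.0)
    return rho

# toy 3-variable covariance matrix
cov = np.array([[1.0, 0.5, 0.2],
                [0.5, 1.0, 0.3],
                [0.2, 0.3, 1.0]])
rho = partial_correlation(cov)
```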
arXiv Detail & Related papers (2023-04-23T10:09:01Z)
- Terrain Classification using Transfer Learning on Hyperspectral Images: A Comparative study [0.13999481573773068]
Convolutional neural networks (CNNs) and multi-layer perceptrons (MLPs) have been proven to be effective methods for image classification.
However, they suffer from long training times and require large amounts of labeled data.
We propose using transfer learning to decrease the training time and reduce the dependence on large labeled datasets.
arXiv Detail & Related papers (2022-06-19T14:36:33Z)
- Variable Bitrate Neural Fields [75.24672452527795]
We present a dictionary method for compressing feature grids, reducing their memory consumption by up to 100x.
We formulate the dictionary optimization as a vector-quantized auto-decoder problem which lets us learn end-to-end discrete neural representations in a space where no direct supervision is available.
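The core of vector quantization is replacing each feature vector with the index of its nearest codebook entry, so only integer indices need to be stored. A minimal NumPy sketch of that lookup step (illustrative; the paper's auto-decoder optimization and learned codebook are not reproduced here):

```python
import numpy as np

def vector_quantize(features, codebook):
    """Replace each feature vector by its nearest codebook entry; only the
    integer indices need to be stored, which compresses the feature grid."""
    # pairwise squared distances between features (N, D) and codes (K, D)
    d2 = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    idx = d2.argmin(axis=1)      # per-vector code index
    return idx, codebook[idx]    # indices + quantized reconstruction

rng = np.random.default_rng(0)
codebook = rng.normal(size=(8, 4))  # 8 codes of dimension 4
# features lie very close to known codes, plus small noise
features = codebook[[1, 3, 3, 5]] + 0.01 * rng.normal(size=(4, 4))
idx, recon = vector_quantize(features, codebook)
```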
arXiv Detail & Related papers (2022-06-15T17:58:34Z)
- Do We Really Need a Learnable Classifier at the End of Deep Neural Network? [118.18554882199676]
We study the potential of learning a neural network for classification with the classifier randomly initialized as a simplex equiangular tight frame (ETF) and kept fixed during training.
Our experimental results show that our method achieves similar performance on image classification with balanced datasets.
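The standard construction of a simplex ETF gives K unit-norm class vectors whose pairwise cosine is exactly -1/(K-1). A NumPy sketch of that construction (an illustration of the fixed classifier idea, not the paper's code):

```python
import numpy as np

def simplex_etf(K):
    """K classifier vectors in R^K forming a simplex equiangular tight frame:
    M = sqrt(K/(K-1)) * (I - (1/K) * ones). Rows have unit norm and pairwise
    inner product -1/(K-1); such a matrix can serve as a fixed classifier."""
    M = np.sqrt(K / (K - 1)) * (np.eye(K) - np.ones((K, K)) / K)
    return M  # rows are the class vectors

M = simplex_etf(5)
G = M @ M.T  # Gram matrix: 1 on the diagonal, -1/4 off-diagonal for K=5
```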
arXiv Detail & Related papers (2022-03-17T04:34:28Z)
- Learning from Small Samples: Transformation-Invariant SVMs with Composition and Locality at Multiple Scales [11.210266084524998]
This paper shows how to incorporate into support-vector machines (SVMs) those properties that have made convolutional neural networks (CNNs) successful.
arXiv Detail & Related papers (2021-09-27T04:02:43Z)
- Cherry-Picking Gradients: Learning Low-Rank Embeddings of Visual Data via Differentiable Cross-Approximation [53.95297550117153]
We propose an end-to-end trainable framework that processes large-scale visual data tensors by looking at only a fraction of their entries.
The proposed approach is particularly useful for large-scale multidimensional grid data, and for tasks that require context over a large receptive field.
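Cross-approximation reconstructs a low-rank matrix from a few sampled rows and columns: A ≈ C @ pinv(A[rows, cols]) @ R. A minimal NumPy sketch of that skeleton decomposition (the differentiable, tensor-valued version in the paper is more involved; index choices here are illustrative):

```python
import numpy as np

def cross_approximation(A, rows, cols):
    """CUR-style skeleton reconstruction of A from sampled rows and columns.
    Exact when rank(A) <= len(rows) and the intersection block is
    well-conditioned."""
    C = A[:, cols]                          # sampled columns
    R = A[rows, :]                          # sampled rows
    U = np.linalg.pinv(A[np.ix_(rows, cols)])  # intersection block, inverted
    return C @ U @ R

rng = np.random.default_rng(1)
A = rng.normal(size=(50, 2)) @ rng.normal(size=(2, 40))  # rank-2 matrix
A_hat = cross_approximation(A, rows=[0, 1], cols=[0, 1])
err = np.abs(A - A_hat).max()  # near zero: rank-2 needs only 2 rows + 2 cols
```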
arXiv Detail & Related papers (2021-05-29T08:39:57Z)
- Learning Optical Flow from a Few Matches [67.83633948984954]
We show that the dense correlation volume representation is redundant and accurate flow estimation can be achieved with only a fraction of elements in it.
Experiments show that our method can reduce computational cost and memory use significantly, while maintaining high accuracy.
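Keeping only the strongest few entries per source feature, rather than the full dense correlation volume, can be sketched as follows (a NumPy illustration of the sparsification idea, not the paper's method):

```python
import numpy as np

def topk_correlations(f1, f2, k):
    """For each source feature, keep only the k strongest correlations with
    target features instead of storing the full (N1, N2) volume."""
    corr = f1 @ f2.T                          # dense correlation volume
    idx = np.argsort(-corr, axis=1)[:, :k]    # indices of the k best matches
    vals = np.take_along_axis(corr, idx, axis=1)
    return idx, vals                          # sparse representation

rng = np.random.default_rng(2)
f1 = rng.normal(size=(6, 8))                      # 6 source features
f2 = np.vstack([f1, rng.normal(size=(10, 8))])    # targets include copies of f1
idx, vals = topk_correlations(f1, f2, k=4)        # store 4 of 16 per source
```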
arXiv Detail & Related papers (2021-04-05T21:44:00Z)
- On the Texture Bias for Few-Shot CNN Segmentation [21.349705243254423]
Despite the initial belief that Convolutional Neural Networks (CNNs) are driven by shapes to perform visual recognition tasks, recent evidence suggests that texture bias in CNNs provides higher-performing models when learning on large labeled training datasets.
We propose a novel architecture that integrates a set of Difference of Gaussians (DoG) to attenuate high-frequency local components in the feature space.
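A Difference of Gaussians subtracts a wide blur from a narrow one, yielding a band-pass filter that strongly attenuates the highest frequencies. A 1-D NumPy sketch (the sigmas and radius are hypothetical, not the proposed architecture):

```python
import numpy as np

def gaussian_kernel1d(sigma, radius=6):
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()  # normalized so the DC gain of each blur is 1

def dog_filter(signal, sigma1=1.0, sigma2=2.0):
    """Difference of Gaussians: the narrow blur minus the wide blur acts as
    a band-pass filter, suppressing high-frequency components."""
    g1 = np.convolve(signal, gaussian_kernel1d(sigma1), mode='same')
    g2 = np.convolve(signal, gaussian_kernel1d(sigma2), mode='same')
    return g1 - g2

# the highest-frequency (alternating) signal is almost entirely suppressed
alt = np.cos(np.pi * np.arange(128))   # +1, -1, +1, ...
out = dog_filter(alt)
peak = np.abs(out[10:-10]).max()       # interior, away from edge effects
```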
arXiv Detail & Related papers (2020-03-09T11:55:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.