Sparse data to structured imageset transformation
- URL: http://arxiv.org/abs/2005.10045v1
- Date: Thu, 7 May 2020 20:36:59 GMT
- Title: Sparse data to structured imageset transformation
- Authors: Baris Kanber
- Abstract summary: Machine learning problems involving sparse datasets may benefit from the use of convolutional neural networks if the numbers of samples and features are very large.
We convert such datasets to imagesets while attempting to give each image a structure amenable to use with convolutional neural networks.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine learning problems involving sparse datasets may benefit from the use
of convolutional neural networks if the numbers of samples and features are
very large. Such datasets are increasingly encountered in a variety of
domains. We convert such datasets to imagesets while attempting to give each
image a structure amenable to use with convolutional neural networks.
Experimental results on two publicly available,
sparse datasets show that the approach can boost classification performance
compared to other methods, which may be attributed to the formation of visually
distinguishable shapes on the resultant images.
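The paper's actual feature-to-pixel assignment is not described in this abstract; as a rough illustration of the general idea only, a minimal sketch (function name and mapping are illustrative assumptions) that zero-pads a sparse feature vector to the next perfect square and reshapes it into a 2-D grid, so each feature occupies a fixed pixel and a CNN can be applied:

```python
import math
import numpy as np

def features_to_image(x):
    """Map a 1-D sparse feature vector onto a square 2-D grid.

    Illustrative sketch only: pad the vector with zeros up to the next
    perfect square, then reshape row-major so every feature always lands
    on the same pixel across samples.
    """
    side = math.ceil(math.sqrt(len(x)))   # smallest square that fits x
    padded = np.zeros(side * side, dtype=float)
    padded[:len(x)] = x                   # sparse entries keep positions
    return padded.reshape(side, side)

img = features_to_image(np.array([0, 0, 3, 0, 1, 0, 0, 2, 0, 0]))
print(img.shape)  # (4, 4)
```

Because the mapping is deterministic, nonzero features in correlated positions form repeatable spatial patterns, which is the property the abstract credits for the "visually distinguishable shapes" on the resulting images.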
Related papers
- Convolutional autoencoder-based multimodal one-class classification [80.52334952912808]
One-class classification refers to approaches that learn using data from a single class only.
We propose a deep learning one-class classification method suitable for multimodal data.
arXiv Detail & Related papers (2023-09-25T12:31:18Z)
- Fine-grained Recognition with Learnable Semantic Data Augmentation [68.48892326854494]
Fine-grained image recognition is a longstanding computer vision challenge.
We propose diversifying the training data at the feature-level to alleviate the discriminative region loss problem.
Our method significantly improves the generalization performance on several popular classification networks.
arXiv Detail & Related papers (2023-09-01T11:15:50Z)
- PromptMix: Text-to-image diffusion models enhance the performance of lightweight networks [83.08625720856445]
Deep learning tasks often require annotations that are too time-consuming for human operators to produce.
In this paper, we introduce PromptMix, a method for artificially boosting the size of existing datasets.
We show that PromptMix can significantly increase the performance of lightweight networks by up to 26%.
arXiv Detail & Related papers (2023-01-30T14:15:47Z)
- A new dataset for measuring the performance of blood vessel segmentation methods under distribution shifts [0.0]
VessMAP is a heterogeneous blood vessel segmentation dataset acquired by carefully sampling relevant images from a larger non-annotated dataset.
A methodology was developed to select both prototypical and atypical samples from the base dataset.
To demonstrate the potential of the new dataset, we show that the validation performance of a neural network changes significantly depending on the splits used for training the network.
arXiv Detail & Related papers (2023-01-11T15:31:15Z)
- Multilayer deep feature extraction for visual texture recognition [0.0]
This paper focuses on improving the accuracy of convolutional neural networks in texture classification.
This is done by extracting features from multiple convolutional layers of a pretrained neural network and aggregating them using Fisher vectors.
We verify the effectiveness of our method on texture classification of benchmark datasets, as well as on a practical task of Brazilian plant species identification.
arXiv Detail & Related papers (2022-08-22T03:53:43Z)
- Prefix Conditioning Unifies Language and Label Supervision [84.11127588805138]
We show that dataset biases negatively affect pre-training by reducing the generalizability of learned representations.
In experiments, we show that this simple technique improves the performance in zero-shot image recognition accuracy and robustness to the image-level distribution shift.
arXiv Detail & Related papers (2022-06-02T16:12:26Z)
- EllSeg-Gen, towards Domain Generalization for head-mounted eyetracking [19.913297057204357]
We show that convolutional networks excel at extracting gaze features despite the presence of such artifacts.
We compare the performance of a single model trained with multiple datasets against a pool of models trained on individual datasets.
Results indicate that models tested on datasets in which eye images exhibit higher appearance variability benefit from multiset training.
arXiv Detail & Related papers (2022-05-04T08:35:52Z)
- Feature transforms for image data augmentation [74.12025519234153]
In image classification, many augmentation approaches utilize simple image manipulation algorithms.
In this work, we build ensembles on the data level by adding images generated by combining fourteen augmentation approaches.
Pretrained ResNet50 networks are finetuned on training sets that include images derived from each augmentation method.
arXiv Detail & Related papers (2022-01-24T14:12:29Z)
- Multi-dataset Pretraining: A Unified Model for Semantic Segmentation [97.61605021985062]
We propose a unified framework, termed as Multi-Dataset Pretraining, to take full advantage of the fragmented annotations of different datasets.
This is achieved by first pretraining the network via the proposed pixel-to-prototype contrastive loss over multiple datasets.
In order to better model the relationship among images and classes from different datasets, we extend the pixel level embeddings via cross dataset mixing.
arXiv Detail & Related papers (2021-06-08T06:13:11Z)
- Applying convolutional neural networks to extremely sparse image datasets using an image subdivision approach [0.0]
The aim of this work is to demonstrate that convolutional neural networks (CNN) can be applied to extremely sparse image libraries by subdivision of the original image datasets.
arXiv Detail & Related papers (2020-10-25T07:43:20Z)
- Robust and Generalizable Visual Representation Learning via Random Convolutions [44.62476686073595]
We show that the robustness of neural networks can be greatly improved through the use of random convolutions as data augmentation.
Our method can benefit downstream tasks by providing a more robust pretrained visual representation.
arXiv Detail & Related papers (2020-07-25T19:52:25Z)
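The last entry's idea of random convolutions as data augmentation can be sketched in a few lines; this is only an illustration of the general technique from the abstract (function name and parameters are assumptions, not the paper's implementation), filtering an image with a freshly sampled random kernel to perturb local texture while preserving global shape:

```python
import numpy as np

def random_convolution(image, kernel_size=3, rng=None):
    """Augment a grayscale image by convolving it with a random kernel.

    Illustrative sketch: sample a kernel, normalize it so the output stays
    in a similar intensity range, and apply it with edge padding so the
    output has the same shape as the input.
    """
    rng = np.random.default_rng() if rng is None else rng
    k = rng.standard_normal((kernel_size, kernel_size))
    k /= np.abs(k).sum()                  # keep output magnitudes bounded
    pad = kernel_size // 2
    padded = np.pad(image, pad, mode="edge")
    out = np.zeros_like(image, dtype=float)
    h, w = image.shape
    for i in range(h):                    # naive sliding-window convolution
        for j in range(w):
            out[i, j] = (padded[i:i + kernel_size, j:j + kernel_size] * k).sum()
    return out

aug = random_convolution(np.ones((8, 8)), rng=np.random.default_rng(0))
print(aug.shape)  # (8, 8)
```

Sampling a new kernel per training example yields a different texture perturbation each time, which is what makes this usable as augmentation rather than a fixed filter.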
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.