Applying convolutional neural networks to extremely sparse image
datasets using an image subdivision approach
- URL: http://arxiv.org/abs/2010.13054v1
- Date: Sun, 25 Oct 2020 07:43:20 GMT
- Title: Applying convolutional neural networks to extremely sparse image
datasets using an image subdivision approach
- Authors: Johan P. Boetker
- Abstract summary: The aim of this work is to demonstrate that convolutional neural networks (CNN) can be applied to extremely sparse image libraries by subdivision of the original image datasets.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Purpose: The aim of this work is to demonstrate that convolutional neural
networks (CNN) can be applied to extremely sparse image libraries by
subdivision of the original image datasets. Methods: Image datasets were
created with a conventional digital camera, and scanning electron microscopy
(SEM) measurements were obtained from the literature. The image datasets were
subdivided and CNN models were trained on parts of the subdivided datasets.
Results: The CNN models were capable of analyzing extremely sparse image
datasets by utilizing the proposed method of image subdivision. It was
furthermore possible to directly assess the regions where a given active
pharmaceutical ingredient (API) or appearance was predominant.
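The subdivision step described in the Methods can be sketched roughly as follows. This is an illustrative NumPy sketch only: the tile size, the use of non-overlapping square patches, and the function name `subdivide` are assumptions for demonstration, not details taken from the paper.

```python
# Hypothetical sketch of the image-subdivision idea: one labeled image
# is cut into a grid of tiles, turning a tiny dataset into many
# training samples for a CNN. Tile size and grid layout are assumptions.
import numpy as np

def subdivide(image: np.ndarray, tile: int) -> np.ndarray:
    """Split a (H, W) image into non-overlapping (tile, tile) patches."""
    h, w = image.shape[:2]
    # Crop so both dimensions divide evenly by the tile size.
    image = image[: h - h % tile, : w - w % tile]
    rows = image.shape[0] // tile
    cols = image.shape[1] // tile
    # Reshape into (rows, cols, tile, tile), then flatten the grid axes.
    patches = image.reshape(rows, tile, cols, tile).swapaxes(1, 2)
    return patches.reshape(-1, tile, tile)

# A single 512x512 image yields 64 patches of 64x64: a 64-fold
# increase in sample count from one labeled image.
patches = subdivide(np.zeros((512, 512)), 64)
print(patches.shape)  # (64, 64, 64)
```

Each patch inherits the label of its parent image, which is what allows an extremely sparse library of whole images to still provide enough samples for CNN training, and also enables per-region predictions across the original image.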
Related papers
- LM-IGTD: a 2D image generator for low-dimensional and mixed-type tabular data to leverage the potential of convolutional neural networks [0.0]
Convolutional neural networks (CNNs) have been successfully used in many applications where important information about the data is embedded in the order of features, as in images.
We present a novel and effective approach for transforming tabular data into images, addressing the inherent limitations associated with low-dimensional and mixed-type datasets.
A mapping between original features and the generated images is established, and post hoc interpretability methods are employed to identify crucial areas of these images.
arXiv Detail & Related papers (2024-04-26T09:52:39Z)
- Deep Domain Adaptation: A Sim2Real Neural Approach for Improving Eye-Tracking Systems [80.62854148838359]
Eye image segmentation is a critical step in eye tracking that has great influence over the final gaze estimate.
We use dimensionality-reduction techniques to measure the overlap between the target eye images and synthetic training data.
Our methods result in robust, improved performance when tackling the discrepancy between simulation and real-world data samples.
arXiv Detail & Related papers (2024-03-23T22:32:06Z)
- Distilling Ensemble of Explanations for Weakly-Supervised Pre-Training of Image Segmentation Models [54.49581189337848]
We propose a method to enable the end-to-end pre-training for image segmentation models based on classification datasets.
The proposed method leverages a weighted segmentation learning procedure to pre-train the segmentation network en masse.
Experiment results show that, with ImageNet accompanied by PSSL as the source dataset, the proposed end-to-end pre-training strategy successfully boosts the performance of various segmentation models.
arXiv Detail & Related papers (2022-07-04T13:02:32Z)
- Classification of EEG Motor Imagery Using Deep Learning for Brain-Computer Interface Systems [79.58173794910631]
A trained T1 class Convolutional Neural Network (CNN) model will be used to examine its ability to successfully identify motor imagery.
In theory, and if the model has been trained accurately, it should be able to identify a class and label it accordingly.
The CNN model will then be restored and used to identify the same class of motor imagery data using much smaller sampled data.
arXiv Detail & Related papers (2022-05-31T17:09:46Z)
- Two-Stream Graph Convolutional Network for Intra-oral Scanner Image Segmentation [133.02190910009384]
We propose a two-stream graph convolutional network (i.e., TSGCN) to handle inter-view confusion between different raw attributes.
Our TSGCN significantly outperforms state-of-the-art methods in 3D tooth (surface) segmentation.
arXiv Detail & Related papers (2022-04-19T10:41:09Z)
- Segmentation of Roads in Satellite Images using specially modified U-Net CNNs [0.0]
The aim of this paper is to build an image classifier for satellite images of urban scenes that identifies the portions of the images in which a road is located.
Unlike conventional computer vision algorithms, convolutional neural networks (CNNs) provide accurate and reliable results on this task.
arXiv Detail & Related papers (2021-09-29T19:08:32Z)
- VisGraphNet: a complex network interpretation of convolutional neural features [6.50413414010073]
We propose and investigate the use of visibility graphs to model the feature map of a neural network.
The work is motivated by an alternative viewpoint provided by these graphs over the original data.
arXiv Detail & Related papers (2021-08-27T20:21:04Z)
- Exploiting the relationship between visual and textual features in social networks for image classification with zero-shot deep learning [0.0]
In this work, we propose a classifier ensemble based on the transferable learning capabilities of the CLIP neural network architecture.
Our experiments, based on image classification tasks according to the labels of the Places dataset, are performed by first considering only the visual part.
Considering the texts associated with the images can help improve the accuracy, depending on the goal.
arXiv Detail & Related papers (2021-07-08T10:54:59Z)
- From ImageNet to Image Classification: Contextualizing Progress on Benchmarks [99.19183528305598]
We study how specific design choices in the ImageNet creation process impact the fidelity of the resulting dataset.
Our analysis pinpoints how a noisy data collection pipeline can lead to a systematic misalignment between the resulting benchmark and the real-world task it serves as a proxy for.
arXiv Detail & Related papers (2020-05-22T17:39:16Z)
- Sparse data to structured imageset transformation [0.0]
Machine learning problems involving sparse datasets may benefit from the use of convolutional neural networks if the numbers of samples and features are very large.
We convert such datasets to imagesets while attempting to give each image structure that is amenable for use with convolutional neural networks.
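The conversion described in this entry can be illustrated with a minimal sketch: a flat feature vector is zero-padded and reshaped into a square grid so a 2D CNN can consume it. The row-major fill and the function name `vector_to_image` are assumptions for illustration; the paper arranges features into image structure more deliberately.

```python
# Minimal illustration of turning a sparse feature vector into a 2D
# "image" for CNN input. The layout (row-major fill with zero padding)
# is an assumption, not the paper's actual feature arrangement.
import numpy as np

def vector_to_image(x: np.ndarray) -> np.ndarray:
    """Pad a 1D feature vector to a perfect square and reshape to 2D."""
    side = int(np.ceil(np.sqrt(x.size)))
    padded = np.zeros(side * side, dtype=x.dtype)
    padded[: x.size] = x
    return padded.reshape(side, side)

# Ten features become a 4x4 grid (six zero-padded cells).
img = vector_to_image(np.arange(10, dtype=float))
print(img.shape)  # (4, 4)
```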
arXiv Detail & Related papers (2020-05-07T20:36:59Z)
- CRNet: Cross-Reference Networks for Few-Shot Segmentation [59.85183776573642]
Few-shot segmentation aims to learn a segmentation model that can be generalized to novel classes with only a few training images.
With a cross-reference mechanism, our network can better find the co-occurrent objects in the two images.
Experiments on the PASCAL VOC 2012 dataset show that our network achieves state-of-the-art performance.
arXiv Detail & Related papers (2020-03-24T04:55:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.