Dataset for eye-tracking tasks
- URL: http://arxiv.org/abs/2106.07554v1
- Date: Tue, 1 Jun 2021 23:54:23 GMT
- Title: Dataset for eye-tracking tasks
- Authors: R. Ildar
- Abstract summary: We present a dataset that is suitable for training custom models of convolutional neural networks for eye-tracking tasks.
This dataset contains 10,000 eye images at a resolution of 416 by 416 pixels.
This manuscript can be considered as a guide for the preparation of datasets for eye-tracking devices.
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: In recent years many different deep neural networks have been developed, but due to the large number of layers in deep networks, their training requires a long time and large datasets. Today it is popular to use pre-trained deep neural networks for various tasks, even for simple ones in which such deep networks are not required. Well-known deep networks such as YOLOv3, SSD, etc. are intended for detecting and tracking many kinds of objects; as a result their weights are large and their accuracy on a narrow, specific task is low. Eye-tracking tasks need to detect only one object, an iris, in a given area. Therefore, it is logical to use a neural network dedicated to this task alone. The problem, however, is the lack of suitable datasets for training such a model. In this manuscript, we present a dataset that is suitable for training custom convolutional neural network models for eye-tracking tasks. Using this dataset, each user can independently pre-train convolutional neural network models for eye-tracking tasks. The dataset contains 10,000 annotated eye images at a resolution of 416 by 416 pixels. The accompanying annotation table gives the coordinates and radius of the eye for each image. This manuscript can be considered a guide for the preparation of datasets for eye-tracking devices.
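The annotation table described above pairs each image with the eye's center coordinates and radius. A minimal sketch of how such annotations could be parsed and converted into normalized detector-style boxes is shown below; the CSV column names (`filename`, `x`, `y`, `radius`) and the `to_yolo_bbox` helper are illustrative assumptions, not the dataset's actual file format.

```python
import csv
import io

# Hypothetical annotation rows in the spirit of the manuscript's table:
# one entry per image with the eye's center coordinates and radius.
# Column names and layout are assumptions for illustration only.
sample_annotations = """filename,x,y,radius
eye_0001.png,208,195,42
eye_0002.png,190,210,39
"""

def load_annotations(text):
    """Parse annotation rows into a dict: filename -> (x, y, radius)."""
    reader = csv.DictReader(io.StringIO(text))
    return {
        row["filename"]: (int(row["x"]), int(row["y"]), int(row["radius"]))
        for row in reader
    }

def to_yolo_bbox(x, y, radius, size=416):
    """Convert a circle annotation into a normalized (cx, cy, w, h) box,
    the usual input format when training a custom detector on 416x416 images."""
    return (x / size, y / size, 2 * radius / size, 2 * radius / size)

annotations = load_annotations(sample_annotations)
cx, cy, w, h = to_yolo_bbox(*annotations["eye_0001.png"])
print(round(cx, 3), round(w, 3))  # 0.5 0.202
```

A conversion step like this is all that separates a circle-annotated dataset from one usable by standard single-class detector training pipelines.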
Related papers
- Diffused Redundancy in Pre-trained Representations [98.55546694886819]
We take a closer look at how features are encoded in pre-trained representations.
We find that learned representations in a given layer exhibit a degree of diffuse redundancy.
Our findings shed light on the nature of representations learned by pre-trained deep neural networks.
arXiv Detail & Related papers (2023-05-31T21:00:50Z)
- Dimensionality of datasets in object detection networks [0.5801044612920815]
Convolutional neural networks (CNNs) are used in a large number of computer vision tasks.
One of them is object detection for autonomous driving.
Our goal is to determine the effect of the intrinsic dimension (i.e. the minimum number of parameters required to represent the data) in different layers on the accuracy of an object detection network trained on augmented datasets.
arXiv Detail & Related papers (2022-10-13T14:19:16Z)
- Backbones-Review: Feature Extraction Networks for Deep Learning and Deep Reinforcement Learning Approaches [3.255610188565679]
CNNs can work on large-scale data and cover different scenarios for a specific task.
Many networks have been proposed and have become standard backbones for DL models across AI tasks.
A backbone is a known network that has been trained on many other tasks and has demonstrated its effectiveness.
arXiv Detail & Related papers (2022-06-16T09:18:34Z)
- Using a Cross-Task Grid of Linear Probes to Interpret CNN Model Predictions On Retinal Images [3.5789352263336847]
We analyze a dataset of retinal images using linear probes: linear regression models trained on some "target" task, using embeddings from a deep convolutional (CNN) model trained on some "source" task as input.
We use this method across all possible pairings of 93 tasks in the UK Biobank dataset of retinal images, leading to 164k different models.
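The linear-probe idea summarized above reduces to fitting a linear regression on frozen embeddings from a source model. A self-contained sketch with synthetic stand-in data (the embeddings, targets, and dimensions below are invented for illustration, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the paper's setup: "source" CNN embeddings
# and a scalar "target" label per image. All numbers here are toy data.
embeddings = rng.normal(size=(200, 64))   # 200 images, 64-dim embeddings
true_w = rng.normal(size=64)
targets = embeddings @ true_w + 0.1 * rng.normal(size=200)

# A linear probe is just least-squares regression on frozen embeddings.
w, *_ = np.linalg.lstsq(embeddings, targets, rcond=None)
predictions = embeddings @ w

# R^2 of the probe measures how much target information the embeddings carry.
r2 = 1 - np.sum((targets - predictions) ** 2) / np.sum((targets - targets.mean()) ** 2)
print(f"probe R^2: {r2:.3f}")
```

Because probes are this cheap to fit, evaluating all pairings of many source and target tasks (164k models in the paper) is tractable.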
arXiv Detail & Related papers (2021-07-23T21:30:27Z)
- DeepSatData: Building large scale datasets of satellite images for training machine learning models [77.17638664503215]
This report presents design considerations for automatically generating satellite imagery datasets for training machine learning models.
We discuss issues faced from the point of view of deep neural network training and evaluation.
arXiv Detail & Related papers (2021-04-28T15:13:12Z)
- Graph-Based Neural Network Models with Multiple Self-Supervised Auxiliary Tasks [79.28094304325116]
Graph Convolutional Networks are among the most promising approaches for capturing relationships among structured data points.
We propose three novel self-supervised auxiliary tasks to train graph-based neural network models in a multi-task fashion.
arXiv Detail & Related papers (2020-11-14T11:09:51Z)
- Semi-supervised deep learning based on label propagation in a 2D embedded space [117.9296191012968]
Proposed solutions propagate labels from a small set of supervised images to a large set of unsupervised ones to train a deep neural network model.
We present a loop in which a deep neural network (VGG-16) is trained from a set with progressively more correctly labeled samples across iterations.
As the labeled set improves over the iterations, so do the features learned by the neural network.
arXiv Detail & Related papers (2020-08-02T20:08:54Z)
- Predicting Neural Network Accuracy from Weights [25.73213712719546]
We show experimentally that the accuracy of a trained neural network can be predicted surprisingly well by looking only at its weights.
We release a collection of 120k convolutional neural networks trained on four different datasets to encourage further research in this area.
arXiv Detail & Related papers (2020-02-26T13:06:14Z)
- Analyzing Neural Networks Based on Random Graphs [77.34726150561087]
We perform a massive evaluation of neural networks with architectures corresponding to random graphs of various types.
We find that no classical numerical graph invariant by itself allows one to single out the best networks.
We also find that networks with primarily short-range connections perform better than networks which allow for many long-range connections.
arXiv Detail & Related papers (2020-02-19T11:04:49Z)
- Neural Data Server: A Large-Scale Search Engine for Transfer Learning Data [78.74367441804183]
We introduce Neural Data Server (NDS), a large-scale search engine for finding the transfer learning data most useful for a target domain.
NDS consists of a dataserver which indexes several large popular image datasets, and aims to recommend data to a client.
We show the effectiveness of NDS in various transfer learning scenarios, demonstrating state-of-the-art performance on several target datasets.
arXiv Detail & Related papers (2020-01-09T01:21:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.