Anytime Prediction as a Model of Human Reaction Time
- URL: http://arxiv.org/abs/2011.12859v1
- Date: Wed, 25 Nov 2020 16:30:52 GMT
- Title: Anytime Prediction as a Model of Human Reaction Time
- Authors: Omkar Kumbhar, Elena Sizikova, Najib Majaj, Denis G. Pelli
- Abstract summary: We study the effect of difficulty on human reaction time in a classification network.
We find that the network equivalent input noise SD is 15 times higher than human, and that human efficiency is only 0.6% that of the network.
We conclude that Anytime classification is a promising model for human reaction time in recognition tasks.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neural networks today often recognize objects as well as people do, and thus
might serve as models of the human recognition process. However, most such
networks provide their answer after a fixed computational effort, whereas human
reaction time varies, e.g. from 0.2 to 10 s, depending on the properties of
stimulus and task. To model the effect of difficulty on human reaction time, we
considered a classification network that uses early-exit classifiers to make
anytime predictions. Comparing human and MSDNet accuracy in classifying
CIFAR-10 images in added Gaussian noise, we find that the network equivalent
input noise SD is 15 times higher than human, and that human efficiency is only
0.6% that of the network. When appropriate amounts of noise are present to
bring the two observers (human and network) into the same accuracy range, they
show very similar dependence on duration or FLOPS, i.e. very similar
speed-accuracy tradeoff. We conclude that Anytime classification (i.e. early
exits) is a promising model for human reaction time in recognition tasks.
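Since the core idea is early-exit ("anytime") classification under added Gaussian noise, a minimal sketch may help; the module layout, confidence threshold, and noise level below are illustrative assumptions, not the MSDNet configuration used in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AnytimeClassifier(nn.Module):
    """Toy early-exit ("anytime") classifier: every stage has its own exit head,
    and inference stops at the first exit whose confidence clears a threshold."""

    def __init__(self, n_classes=10, n_stages=4, width=32):
        super().__init__()
        self.stages, self.exits = nn.ModuleList(), nn.ModuleList()
        in_ch = 3
        for _ in range(n_stages):
            self.stages.append(nn.Sequential(
                nn.Conv2d(in_ch, width, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2)))
            self.exits.append(nn.Sequential(
                nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(width, n_classes)))
            in_ch = width

    def forward(self, x, confidence=0.9):
        # Returns (logits, exits_used); more exits used = longer "reaction time".
        for k, (stage, head) in enumerate(zip(self.stages, self.exits), start=1):
            x = stage(x)
            logits = head(x)
            if F.softmax(logits, dim=1).max().item() >= confidence:
                return logits, k           # early exit: fast but possibly less accurate
        return logits, len(self.stages)    # ran all the way to the final exit

if __name__ == "__main__":
    model = AnytimeClassifier().eval()
    img = torch.rand(1, 3, 32, 32)                # stand-in for a CIFAR-10 image
    noisy = img + 0.5 * torch.randn_like(img)     # added Gaussian noise = harder stimulus
    with torch.no_grad():
        _, exits_clean = model(img)
        _, exits_noisy = model(noisy)
    print(f"exits used: clean={exits_clean}, noisy={exits_noisy}")
```

On a trained network, sweeping the confidence threshold (or, in MSDNet, a per-exit compute budget) traces out accuracy versus FLOPS, the speed-accuracy curve the paper compares with human reaction times.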
Related papers
- How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z)
- Continuous time recurrent neural networks: overview and application to forecasting blood glucose in the intensive care unit [56.801856519460465]
Continuous time autoregressive recurrent neural networks (CTRNNs) are deep learning models that account for irregular observations.
We demonstrate the application of these models to probabilistic forecasting of blood glucose in a critical care setting.
arXiv Detail & Related papers (2023-04-14T09:39:06Z)
- MGADN: A Multi-task Graph Anomaly Detection Network for Multivariate Time Series [0.0]
Anomaly detection for time series, especially multivariate time series (time series with multiple sensors), has been studied for several years.
Existing methods, including neural networks, concentrate only on relationships along the timestamp dimension.
Our approach uses GAT, which originates from graph neural networks, to capture connections between sensors.
Our approach is also double-headed, computing both a prediction loss and a reconstruction loss via a VAE (Variational Auto-Encoder).
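A minimal sketch of such a two-headed objective, pairing a forecasting loss with a VAE reconstruction term; the layer sizes, loss weighting, and names are illustrative assumptions rather than MGADN's actual architecture (which additionally uses GAT over sensors).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoHeadedDetector(nn.Module):
    """Toy double-headed model: one head forecasts the next timestep,
    the other reconstructs the input window through a small VAE."""

    def __init__(self, n_sensors=8, window=32, latent=16, hidden=64):
        super().__init__()
        d_in = n_sensors * window
        self.encoder = nn.Sequential(nn.Linear(d_in, hidden), nn.ReLU())
        self.mu, self.logvar = nn.Linear(hidden, latent), nn.Linear(hidden, latent)
        self.decoder = nn.Sequential(nn.Linear(latent, hidden), nn.ReLU(),
                                     nn.Linear(hidden, d_in))
        self.forecaster = nn.Linear(hidden, n_sensors)   # prediction head

    def forward(self, x):                        # x: (batch, window, n_sensors)
        h = self.encoder(x.flatten(1))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterize
        recon = self.decoder(z).view_as(x)
        pred = self.forecaster(h)                # next-step forecast per sensor
        return pred, recon, mu, logvar

def loss_fn(pred, target, recon, x, mu, logvar, beta=1.0):
    pred_loss = F.mse_loss(pred, target)         # forecasting error
    recon_loss = F.mse_loss(recon, x)            # reconstruction error
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return pred_loss + recon_loss + beta * kl

x = torch.randn(4, 32, 8)                        # (batch, window, sensors)
target = torch.randn(4, 8)                       # next-step values per sensor
pred, recon, mu, logvar = TwoHeadedDetector()(x)
print(loss_fn(pred, target, recon, x, mu, logvar))
```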
arXiv Detail & Related papers (2022-11-22T10:17:42Z)
- Portuguese Man-of-War Image Classification with Convolutional Neural Networks [58.720142291102135]
Portuguese man-of-war (PMW) is a gelatinous organism with long tentacles capable of causing severe burns.
This paper reports on the use of convolutional neural networks for recognizing PMW images from the social media platform Instagram.
arXiv Detail & Related papers (2022-07-04T03:06:45Z)
- SATBench: Benchmarking the speed-accuracy tradeoff in object recognition by humans and dynamic neural networks [0.45438205344305216]
People show a flexible tradeoff between speed and accuracy.
We present the first large-scale dataset of the speed-accuracy tradeoff (SAT) in recognizing ImageNet images.
We compare networks with humans on curve-fit error, category-wise correlation, and curve steepness.
arXiv Detail & Related papers (2022-06-16T20:03:31Z)
- STAR: Sparse Transformer-based Action Recognition [61.490243467748314]
This work proposes a novel skeleton-based human action recognition model with sparse attention on the spatial dimension and segmented linear attention on the temporal dimension of data.
Experiments show that our model achieves comparable performance while using far fewer trainable parameters and achieving high speed in training and inference.
arXiv Detail & Related papers (2021-07-15T02:53:11Z)
- Partial success in closing the gap between human and machine vision [30.78663978510427]
A few years ago, the first CNN surpassed human performance on ImageNet.
Here we ask: Are we making progress in closing the gap between human and machine vision?
We tested human observers on a broad range of out-of-distribution (OOD) datasets.
arXiv Detail & Related papers (2021-06-14T13:23:35Z)
- GPRAR: Graph Convolutional Network based Pose Reconstruction and Action Recognition for Human Trajectory Prediction [1.2891210250935146]
Existing prediction models are prone to errors in real-world settings where observations are often noisy.
We introduce GPRAR, a graph convolutional network based pose reconstruction and action recognition model for human trajectory prediction.
We show that GPRAR improves prediction accuracy by up to 22% and 50% under noisy observations on the JAAD and TITAN datasets.
arXiv Detail & Related papers (2021-03-25T20:12:14Z)
- Fast Motion Understanding with Spatiotemporal Neural Networks and Dynamic Vision Sensors [99.94079901071163]
This paper presents a Dynamic Vision Sensor (DVS) based system for reasoning about high speed motion.
We consider the case of a robot at rest reacting to a small, fast-approaching object at speeds higher than 15 m/s.
We highlight the results of our system on a toy dart moving at 23.4 m/s, with a 24.73° error in $\theta$, an 18.4 mm average discretized radius prediction error, and a 25.03% median time-to-collision prediction error.
arXiv Detail & Related papers (2020-11-18T17:55:07Z)
- Probing Predictions on OOD Images via Nearest Categories [97.055916832257]
We study out-of-distribution (OOD) prediction behavior of neural networks when they classify images from unseen classes or corrupted images.
We introduce a new measure, nearest category generalization (NCG), where we compute the fraction of OOD inputs that are classified with the same label as their nearest neighbor in the training set.
We find that robust networks have consistently higher NCG accuracy than natural training, even when the OOD data is much farther away than the robustness radius.
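A minimal sketch of the NCG measure as described above; the feature space and Euclidean distance are illustrative choices, not necessarily those used in the paper.

```python
import numpy as np

def nearest_category_generalization(train_feats, train_labels, ood_feats, ood_preds):
    """Fraction of OOD inputs whose predicted label matches the label of their
    nearest neighbor in the training set (NCG, per the description above)."""
    matches = 0
    for feat, pred in zip(ood_feats, ood_preds):
        dists = np.linalg.norm(train_feats - feat, axis=1)   # Euclidean distances
        nearest_label = train_labels[np.argmin(dists)]
        matches += int(pred == nearest_label)
    return matches / len(ood_preds)

# toy usage with random features and labels
rng = np.random.default_rng(0)
train_feats = rng.normal(size=(100, 16))
train_labels = rng.integers(0, 10, size=100)
ood_feats = rng.normal(size=(20, 16))
ood_preds = rng.integers(0, 10, size=20)
print(nearest_category_generalization(train_feats, train_labels, ood_feats, ood_preds))
```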
arXiv Detail & Related papers (2020-11-17T07:42:27Z)
- Dynamic Time Warping as a New Evaluation for Dst Forecast with Machine Learning [0.0]
We train a neural network to make a forecast of the disturbance storm time index at origin time $t$ with a forecasting horizon of 1 up to 6 hours.
Inspection of the model's results with the correlation coefficient and RMSE indicated a performance comparable to the latest publications.
A new method is proposed to measure whether two time series are shifted in time with respect to each other.
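A minimal dynamic-time-warping sketch in that spirit; the `average_shift` summary of the warping path is an illustrative assumption, since the paper's exact shift measure is not specified in the summary above.

```python
import numpy as np

def dtw_path(a, b):
    """Classic O(len(a)*len(b)) dynamic time warping; returns the alignment path."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    # backtrack the optimal alignment
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]

def average_shift(a, b):
    """Illustrative summary: mean index offset along the DTW path, positive when b lags a."""
    path = dtw_path(a, b)
    return float(np.mean([j - i for i, j in path]))

# a series and a copy delayed by two steps (wraparound ignored for the sketch)
t = np.linspace(0, 4 * np.pi, 60)
obs = np.sin(t)
forecast = np.roll(obs, 2)
print(average_shift(obs, forecast))
```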
arXiv Detail & Related papers (2020-06-08T15:14:13Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences of its use.