Predicting change in time production -- A machine learning approach to time perception
- URL: http://arxiv.org/abs/2412.12781v1
- Date: Tue, 17 Dec 2024 10:41:19 GMT
- Title: Predicting change in time production -- A machine learning approach to time perception
- Authors: Amrapali Pednekar, Alvaro Garrido, Yara Khaluf, Pieter Simoens
- Abstract summary: We train a machine learning model to predict the direction of change in an individual's time production. We find that a participant's previous timing performance played a significant role in determining the direction of change in time production.
- Score: 1.3091548163229116
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Time perception research has advanced significantly over the years, yet some areas remain largely unexplored. This study addresses two such under-explored areas in timing research: (1) quantitative analysis of time perception at the individual level, and (2) time perception in an ecological setting. We trained a machine learning model to predict the direction of change in an individual's time production. The training data were collected using an ecologically valid setup: an online experiment with 995 participants performing a time production task that used naturalistic videos (no audio) as stimuli. The model achieved an accuracy of 61%, 10 percentage points higher than baseline models derived from cognitive theories of timing. It performed equally well on new data from a second experiment, providing evidence of its generalization capabilities. Analysis of the model's output revealed that it also carried information about the magnitude of change in time production. Predictions were further analysed at both the population and individual levels, showing that a participant's previous timing performance played a significant role in determining the direction of change in time production. By integrating attentional-gate theories from timing research with feature-importance techniques from machine learning, we explained the model's predictions using cognitive theories of timing. The model and findings have potential applications in human-computer interaction, where understanding and predicting changes in a user's time perception can enable better user experience and task performance.
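A minimal sketch of the kind of pipeline the abstract describes: a binary classifier predicting whether the next time production will lengthen or shorten, plus a feature-importance probe. The feature names, synthetic data, and model choice are illustrative assumptions, not the paper's actual setup.

```python
# Hedged sketch of a direction-of-change classifier; features, data, and
# model choice are illustrative assumptions, not the paper's exact setup.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000

# Hypothetical per-trial features: previous production error (produced - target),
# previous produced duration, and a crude proxy for stimulus load.
X = np.column_stack([
    rng.normal(0.0, 0.5, n),   # prev_error_s
    rng.normal(3.0, 0.8, n),   # prev_production_s
    rng.uniform(0.0, 1.0, n),  # stimulus_load
])
# Direction of change in the next production: 1 = longer, 0 = shorter.
# Toy generator: over-producers tend to shorten next time (regression to target).
y = (X[:, 0] + rng.normal(0, 0.5, n) < 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"accuracy: {clf.score(X_te, y_te):.2f}")

# Permutation importance as a stand-in for the paper's feature-importance analysis.
imp = permutation_importance(clf, X_te, y_te, n_repeats=20, random_state=0)
for name, score in zip(["prev_error_s", "prev_production_s", "stimulus_load"],
                       imp.importances_mean):
    print(f"{name}: {score:.3f}")
```

On this toy generator the previous-error feature dominates the importance scores, mirroring the abstract's finding that previous timing performance drives the predicted direction of change.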
Related papers
- Temporal receptive field in dynamic graph learning: A comprehensive analysis [15.161255747900968]
We present a comprehensive analysis of the temporal receptive field in dynamic graph learning.
Our results demonstrate that an appropriately chosen temporal receptive field can significantly enhance model performance.
For some models, overly large windows may introduce noise and reduce accuracy.
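The core claim, that window length is a tunable hyperparameter with a sweet spot, can be illustrated with a simple sweep. This sketch uses a generic sliding-window regressor on a toy series; it stands in for, and does not reproduce, the paper's dynamic-graph models.

```python
# Toy sweep over the temporal receptive field (history window length)
# for a sliding-window regressor; window sizes are illustrative.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(1)
series = np.sin(np.arange(600) / 10.0) + rng.normal(0, 0.3, 600)

def windowed(series, w):
    """Build (history window, next value) pairs for window length w."""
    X = np.array([series[i:i + w] for i in range(len(series) - w)])
    return X, series[w:]

for w in (2, 5, 10, 25, 50, 100):
    X, y = windowed(series, w)
    split = int(0.8 * len(X))
    model = Ridge().fit(X[:split], y[:split])
    err = mean_squared_error(y[split:], model.predict(X[split:]))
    print(f"window={w:3d}  test MSE={err:.3f}")
# Expect the error to fall as the window grows, then flatten or worsen once
# the extra history mostly adds noise -- the "overly large window" effect.
```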
arXiv Detail & Related papers (2024-07-17T07:46:53Z)
- A Mathematical Model of the Hidden Feedback Loop Effect in Machine Learning Systems [44.99833362998488]
We introduce a repeated learning process to jointly describe several phenomena attributed to unintended hidden feedback loops.
A distinctive feature of such repeated learning setting is that the state of the environment becomes causally dependent on the learner itself over time.
We present a novel dynamical systems model of the repeated learning process and prove the limiting set of probability distributions for positive and negative feedback loop modes.
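A toy simulation of the repeated-learning setting described above: after each deployment, the data distribution drifts in response to the model's decisions, so future data depends causally on past predictions. The drift rule is an illustrative assumption, not the paper's dynamical-systems model.

```python
# Toy hidden feedback loop: each round, the deployed classifier's decisions
# nudge the next round's data distribution. The drift rule is illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
mu = 0.0  # mean of the positive class; shifts in response to the model

for round_ in range(10):
    X0 = rng.normal(-1.0, 1.0, (200, 1))   # negative class
    X1 = rng.normal(mu, 1.0, (200, 1))     # positive class
    X = np.vstack([X0, X1])
    y = np.array([0] * 200 + [1] * 200)
    clf = LogisticRegression().fit(X, y)
    pos_rate = clf.predict(X).mean()
    # Positive feedback mode: the more the model predicts "1", the further
    # the positive class drifts -- the environment depends on the learner.
    mu += 0.5 * (pos_rate - 0.5)
    print(f"round {round_}: class mean={mu:.2f}, predicted-1 rate={pos_rate:.2f}")
```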
arXiv Detail & Related papers (2024-05-04T17:57:24Z)
- Performative Time-Series Forecasting [71.18553214204978]
We formalize performative time-series forecasting (PeTS) from a machine-learning perspective.
We propose a novel approach, Feature Performative-Shifting (FPS), which leverages the concept of delayed response to anticipate distribution shifts.
We conduct comprehensive experiments using multiple time-series models on COVID-19 and traffic forecasting tasks.
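One reading of the delayed-response idea: the target reacts to a performative signal only after some lag, so shifting the feature by that lag aligns training-time and deployment-time distributions. A minimal pandas sketch under that reading; the delay value and columns are assumptions, and FPS's actual construction may differ.

```python
# Minimal sketch of feature shifting under delayed response; the delay,
# columns, and linear response are illustrative assumptions.
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
n = 100
signal = pd.Series(rng.normal(size=n), name="forecast_published")
delay = 7  # assumed response delay: behavior reacts a week after publication
# The target responds to the signal published `delay` steps earlier.
target = -0.8 * signal.shift(delay) + rng.normal(0, 0.1, n)

# Feature shifting, as sketched here: regress on the delayed signal so the
# regressor sees the covariate that actually governs the shifted target.
df = pd.DataFrame({"signal_shifted": signal.shift(delay),
                   "target": target}).dropna()
print(df.corr())  # strong relation once features and response are aligned
```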
arXiv Detail & Related papers (2023-10-09T18:34:29Z)
- TimeGPT-1 [1.2289361708127877]
We introduce TimeGPT, the first foundation model for time series, capable of generating accurate predictions for diverse datasets not seen during training.
We evaluate our pre-trained model against established statistical, machine learning, and deep learning methods, demonstrating that TimeGPT zero-shot inference excels in performance, efficiency, and simplicity.
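What "zero-shot inference" means operationally: the pre-trained model forecasts a series it never saw, with no fitting step, and is compared against baselines. The client class below is a hypothetical stand-in, not the API of the released model.

```python
# Zero-shot evaluation protocol sketch. `FoundationForecaster` is a
# hypothetical stand-in for a pre-trained time-series foundation model.
import numpy as np

class FoundationForecaster:
    """Hypothetical pre-trained model: no .fit() on the target series."""
    def forecast(self, history: np.ndarray, horizon: int) -> np.ndarray:
        # Placeholder behavior: repeat the last seasonal cycle.
        season = 24
        return np.tile(history[-season:], horizon // season + 1)[:horizon]

y = np.sin(np.arange(24 * 30) * 2 * np.pi / 24)  # an unseen hourly series
history, future = y[:-24], y[-24:]

zero_shot = FoundationForecaster().forecast(history, horizon=24)
naive = np.repeat(history[-1], 24)  # last-value naive baseline

mae = lambda a, b: np.mean(np.abs(a - b))
print(f"zero-shot MAE: {mae(zero_shot, future):.3f}")
print(f"naive MAE:     {mae(naive, future):.3f}")
```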
arXiv Detail & Related papers (2023-10-05T15:14:00Z)
- Ecosystem-level Analysis of Deployed Machine Learning Reveals Homogeneous Outcomes [72.13373216644021]
We study the societal impact of machine learning by considering the collection of models that are deployed in a given context.
We find deployed machine learning is prone to systemic failure, meaning some users are exclusively misclassified by all models available.
These examples demonstrate ecosystem-level analysis has unique strengths for characterizing the societal impact of machine learning.
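The systemic-failure notion is easy to make concrete: given the predictions of every deployed model, find the users no model gets right. A small sketch with synthetic, deliberately correlated model errors:

```python
# Ecosystem-level check: which users are misclassified by *every* model?
# The error rates and correlation structure are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(4)
n_users, n_models = 1000, 5
y_true = rng.integers(0, 2, n_users)

# Five deployed models, each ~15% error, sharing a common failure mode.
shared_noise = rng.random(n_users) < 0.10           # errors all models share
preds = np.array([
    np.where(shared_noise | (rng.random(n_users) < 0.05), 1 - y_true, y_true)
    for _ in range(n_models)
])

wrong = preds != y_true            # (n_models, n_users) error matrix
systemic = wrong.all(axis=0)       # users failed by all models
print(f"mean per-model error:  {wrong.mean():.2%}")
print(f"systemic failure rate: {systemic.mean():.2%}")
# Correlated errors push the systemic rate far above what independent
# errors would predict -- the homogeneous-outcomes effect.
```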
arXiv Detail & Related papers (2023-07-12T01:11:52Z)
- EAMDrift: An interpretable self retrain model for time series [0.0]
We present EAMDrift, a novel method that combines forecasts from multiple individual predictors by weighting each prediction according to a performance metric.
EAMDrift is designed to automatically adapt to out-of-distribution patterns in data and identify the most appropriate models to use at each moment.
Our study on real-world datasets shows that EAMDrift outperforms individual baseline models by 20% and achieves comparable accuracy results to non-interpretable ensemble models.
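The combination rule summarized above, weighting each predictor by a performance metric, can be sketched directly (see below). The inverse-error weighting and window size are assumptions; EAMDrift's actual metric and retraining trigger are not spelled out in the snippet.

```python
# Performance-weighted ensemble sketch: weight each forecaster by the
# inverse of its recent error. The weighting scheme is an assumption.
import numpy as np

def weighted_ensemble(preds_hist, y_hist, preds_now, window=20, eps=1e-8):
    """preds_hist: (n_models, t) past predictions; y_hist: (t,) past truth;
    preds_now: (n_models,) current predictions to combine."""
    recent_err = np.abs(preds_hist[:, -window:] - y_hist[-window:]).mean(axis=1)
    w = 1.0 / (recent_err + eps)
    w /= w.sum()                    # normalize to a convex combination
    return w @ preds_now, w

rng = np.random.default_rng(5)
t = 100
y = np.cumsum(rng.normal(size=t))
# Three toy models with different noise levels around the truth.
preds = np.stack([y + rng.normal(0, s, t) for s in (0.2, 1.0, 3.0)])

combined, w = weighted_ensemble(preds[:, :-1], y[:-1], preds[:, -1])
print("weights:", np.round(w, 3))   # the low-noise model should dominate
print(f"combined pred={combined:.2f}  truth={y[-1]:.2f}")
```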
arXiv Detail & Related papers (2023-05-31T13:25:26Z)
- ASPEST: Bridging the Gap Between Active Learning and Selective Prediction [56.001808843574395]
Selective prediction aims to learn a reliable model that abstains from making predictions when uncertain.
Active learning aims to lower the overall labeling effort, and hence human dependence, by querying the most informative examples.
In this work, we introduce a new learning paradigm, active selective prediction, which aims to query more informative samples from the shifted target domain.
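The two ingredients compose naturally: abstain below a confidence threshold (selective prediction) and spend the labeling budget on exactly those low-confidence target-domain points (active querying). A minimal sketch of that loop, with the threshold and budget as assumed knobs:

```python
# Sketch of active selective prediction: abstain when unsure, query labels
# for the most uncertain shifted-domain points, retrain, repeat.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(6)
Xs = rng.normal(0.0, 1.0, (500, 2))               # source domain
ys = (Xs[:, 0] + Xs[:, 1] > 0).astype(int)
Xt = rng.normal(0.8, 1.0, (500, 2))               # shifted target domain
yt = (Xt[:, 0] + Xt[:, 1] > 0.8).astype(int)      # revealed only on query

clf = LogisticRegression().fit(Xs, ys)
for round_ in range(3):
    conf = clf.predict_proba(Xt).max(axis=1)
    abstain = conf < 0.75                         # selective prediction
    acc = (clf.predict(Xt[~abstain]) == yt[~abstain]).mean()
    print(f"round {round_}: coverage={1 - abstain.mean():.2f}, "
          f"accepted acc={acc:.2f}")
    # Most informative = least confident; this sketch does not deduplicate
    # repeated queries across rounds.
    query = np.argsort(conf)[:25]
    Xs = np.vstack([Xs, Xt[query]])
    ys = np.concatenate([ys, yt[query]])
    clf = LogisticRegression().fit(Xs, ys)
```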
arXiv Detail & Related papers (2023-04-07T23:51:07Z)
- Temporal Effects on Pre-trained Models for Language Processing Tasks [9.819970078135343]
We present a set of experiments with systems powered by large neural pretrained representations for English to demonstrate that temporal model deterioration is not as big a concern.
It is, however, the case that temporal domain adaptation is beneficial, with better performance for a given time period possible when the system is trained on temporally more recent data.
arXiv Detail & Related papers (2021-11-24T20:44:12Z)
- STAR: Sparse Transformer-based Action Recognition [61.490243467748314]
This work proposes a novel skeleton-based human action recognition model with sparse attention on the spatial dimension and segmented linear attention on the temporal dimension of data.
Experiments show that our model can achieve comparable performance while utilizing much less trainable parameters and achieve high speed in training and inference.
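"Sparse attention" here can be read as keeping only the strongest joint-to-joint interactions per query. A generic top-k masked attention in NumPy, a stand-in for rather than a reproduction of STAR's spatial attention; the value of k and the dimensions are illustrative.

```python
# Generic top-k sparse attention over skeleton joints; k and dims are
# illustrative, and this is not STAR's exact formulation.
import numpy as np

def topk_sparse_attention(Q, K, V, k=4):
    """Q, K, V: (n_joints, d). Keep only the k largest scores per query row."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                      # (n_joints, n_joints)
    kth = np.sort(scores, axis=-1)[:, -k][:, None]     # per-row k-th largest
    scores = np.where(scores >= kth, scores, -np.inf)  # mask the rest
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # sparse softmax
    return weights @ V

rng = np.random.default_rng(7)
n_joints, d = 25, 16                                   # e.g. a 25-joint skeleton
Q, K, V = (rng.normal(size=(n_joints, d)) for _ in range(3))
out = topk_sparse_attention(Q, K, V, k=4)
print(out.shape)  # (25, 16): each joint attends to only its top-4 partners
```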
arXiv Detail & Related papers (2021-07-15T02:53:11Z)
- Knowledge as Invariance -- History and Perspectives of Knowledge-augmented Machine Learning [69.99522650448213]
Research in machine learning is at a turning point.
Research interests are shifting away from increasing the performance of highly parameterized models on exceedingly specific tasks and toward approaches that integrate prior knowledge into learning.
This white paper provides an introduction and discussion of this emerging field in machine learning research.
arXiv Detail & Related papers (2020-12-21T15:07:19Z)
- Causal Discovery in Physical Systems from Videos [123.79211190669821]
Causal discovery is at the core of human cognition.
We consider the task of causal discovery from videos in an end-to-end fashion without supervision on the ground-truth graph structure.
arXiv Detail & Related papers (2020-07-01T17:29:57Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.