Predictive Event Segmentation and Representation with Neural Networks: A
Self-Supervised Model Assessed by Psychological Experiments
- URL: http://arxiv.org/abs/2210.05710v1
- Date: Tue, 4 Oct 2022 14:14:30 GMT
- Title: Predictive Event Segmentation and Representation with Neural Networks: A
Self-Supervised Model Assessed by Psychological Experiments
- Authors: Hamit Basgol, Inci Ayhan, Emre Ugur
- Abstract summary: We introduce a self-supervised model of event segmentation.
Our model consists of neural networks that predict the sensory signal at the next time step in order to represent different events.
We show that our model, which tracks prediction error signals, can produce human-like event boundaries and event representations.
- Score: 2.223733768286313
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: People segment complex, ever-changing and continuous experience into basic,
stable and discrete spatio-temporal experience units, called events. The event
segmentation literature investigates the mechanisms that allow people to
extract events. Event segmentation theory holds that people predict ongoing
activities and monitor prediction error signals to find the event boundaries
that keep events apart. In this study, we investigated the mechanism giving
rise to this ability with a computational model and accompanying psychological
experiments. Inspired by event segmentation theory and predictive processing,
we introduced a self-supervised model of event segmentation. This model
consists of neural networks that predict the sensory signal at the next time
step in order to represent different events, and a cognitive model that
regulates these networks on the basis of their prediction errors. To verify
our model's ability to segment events, to learn them during passive
observation, and to represent them in its internal representational space, we
prepared a video depicting human behaviors shown as point-light displays. We
compared the event segmentation behaviors of participants and of our model on
this video at two hierarchical event segmentation levels. Using the
point-biserial correlation technique, we demonstrated that the event
segmentation decisions of our model correlated with participants' responses.
Moreover, by approximating participants' representation space with a
similarity-based technique, we showed that our model formed a representation
space similar to that of the participants. These results suggest that our
model, which tracks prediction error signals, can produce human-like event
boundaries and event representations. Finally, we discussed our contribution
to the literature on event cognition and to our understanding of how event
segmentation is implemented in the brain.
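The prediction-error mechanism described in the abstract can be illustrated with a short sketch. The code below is a minimal, hypothetical approximation, not the authors' implementation: tiny linear next-step predictors stand in for the paper's neural networks, a fixed error threshold stands in for the cognitive model that regulates them, and the toy data, the `human_presses` array, and all parameter values (`error_threshold`, `learning_rate`) are illustrative assumptions. The final lines show how model boundaries could, in principle, be compared against pooled human responses with SciPy's point-biserial correlation, as the abstract describes.

```python
# Hypothetical sketch of prediction-error-driven event segmentation.
# NOT the authors' model: linear predictors replace the neural networks,
# a fixed threshold replaces the cognitive model, all values illustrative.
import numpy as np
from scipy.stats import pointbiserialr


class NextStepPredictor:
    """Tiny linear predictor of the next sensory frame (placeholder for an RNN)."""

    def __init__(self, dim, learning_rate=0.01):
        self.W = np.zeros((dim, dim))
        self.lr = learning_rate

    def predict(self, x):
        return self.W @ x

    def update(self, x, target):
        # One gradient-descent step on the squared prediction error.
        error = target - self.predict(x)
        self.W += self.lr * np.outer(error, x)
        return float(np.mean(error ** 2))


def segment(frames, error_threshold=0.5):
    """Return a 0/1 boundary flag per transition and the active predictor per step."""
    dim = frames.shape[1]
    predictors = [NextStepPredictor(dim)]
    active = 0
    boundaries, active_ids = [], []
    for t in range(len(frames) - 1):
        x, target = frames[t], frames[t + 1]
        err = float(np.mean((target - predictors[active].predict(x)) ** 2))
        if err > error_threshold:
            # Prediction-error spike -> event boundary; hand control to the
            # predictor that currently explains the input best, or a new one.
            errors = [np.mean((target - p.predict(x)) ** 2) for p in predictors]
            best = int(np.argmin(errors))
            if errors[best] > error_threshold:
                predictors.append(NextStepPredictor(dim))
                best = len(predictors) - 1
            active = best
            boundaries.append(1)
        else:
            boundaries.append(0)
        predictors[active].update(x, target)
        active_ids.append(active)
    return np.array(boundaries), np.array(active_ids)


if __name__ == "__main__":
    # Toy stand-in for a point-light stream: two regimes with different dynamics.
    rng = np.random.default_rng(0)
    frames = np.concatenate([np.cumsum(rng.normal(0.0, 0.1, (100, 6)), axis=0),
                             np.cumsum(rng.normal(1.0, 0.1, (100, 6)), axis=0)])
    model_boundaries, _ = segment(frames)
    # Hypothetical human boundary responses (e.g. pooled button presses per frame).
    human_presses = rng.random(len(model_boundaries))
    r, p = pointbiserialr(model_boundaries, human_presses)
    print(f"point-biserial r = {r:.3f} (p = {p:.3f})")
```

In this sketch, an event boundary is simply a time step at which the active predictor's error exceeds the threshold; the paper's actual networks, gating mechanism, and hierarchical boundary levels are richer than this single-threshold approximation.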
Related papers
- Event Segmentation Applications in Large Language Model Enabled Automated Recall Assessments [0.0]
Event segmentation is central to how we perceive, encode, and recall experiences.
Current research methodologies rely heavily on humans for assessing segmentation patterns and recall ability.
We leverage Large Language Models (LLMs) to automate event segmentation and assess recall.
arXiv Detail & Related papers (2025-02-19T00:48:51Z)
- Interpretable Neural Temporal Point Processes for Modelling Electronic Health Records [0.0]
We propose an interpretable framework inf2vec for event sequence modelling, where the event influences are directly parameterized and can be learned end-to-end.
In experiments, we demonstrate the superiority of our model on event prediction as well as on learning type-to-type influences.
arXiv Detail & Related papers (2024-04-09T12:37:41Z)
- Improving Event Definition Following For Zero-Shot Event Detection [66.27883872707523]
Existing approaches on zero-shot event detection usually train models on datasets annotated with known event types.
We aim to improve zero-shot event detection by training models to better follow event definitions.
arXiv Detail & Related papers (2024-03-05T01:46:50Z)
- Enhancing Asynchronous Time Series Forecasting with Contrastive Relational Inference [21.51753838306655]
Temporal point processes (TPPs) are the standard method for modeling such asynchronous event sequences.
Existing TPP models focus on the conditional distribution of future events rather than explicitly modeling event interactions, which poses challenges for event prediction.
We propose a novel approach that leverages Neural Relational Inference (NRI) to learn a graph that infers interactions while simultaneously learning dynamics patterns from observational data.
arXiv Detail & Related papers (2023-09-06T09:47:03Z)
- Inverse Dynamics Pretraining Learns Good Representations for Multitask Imitation [66.86987509942607]
We evaluate how such a paradigm should be done in imitation learning.
We consider a setting where the pretraining corpus consists of multitask demonstrations.
We argue that inverse dynamics modeling is well-suited to this setting.
arXiv Detail & Related papers (2023-05-26T14:40:46Z)
- Towards Out-of-Distribution Sequential Event Prediction: A Causal Treatment [72.50906475214457]
The goal of sequential event prediction is to estimate the next event based on a sequence of historical events.
In practice, next-event prediction models are trained with sequential data collected at one time.
We propose a framework with hierarchical branching structures for learning context-specific representations.
arXiv Detail & Related papers (2022-10-24T07:54:13Z)
- Modeling Continuous Time Sequences with Intermittent Observations using Marked Temporal Point Processes [25.074394338483575]
A large fraction of data generated via human activities can be represented as a sequence of events over continuous time.
Training deep learning models over these continuous-time event sequences is a non-trivial task.
In this work, we provide a novel unsupervised model and inference method for learning MTPPs in the presence of event sequences with missing events.
arXiv Detail & Related papers (2022-06-23T18:23:20Z)
- Variational Neural Temporal Point Process [22.396329275957996]
A temporal point process is a process that predicts which type of event is likely to happen and when it will occur.
We introduce inference and generative networks, and train a distribution over a latent variable to capture the stochastic properties of event sequences with deep neural networks.
We empirically demonstrate that our model can generalize the representations of various event types.
arXiv Detail & Related papers (2022-02-17T13:34:30Z)
- Multi-level Motion Attention for Human Motion Prediction [132.29963836262394]
We study the use of different types of attention, computed at joint, body part, and full pose levels.
Our experiments on Human3.6M, AMASS and 3DPW validate the benefits of our approach for both periodical and non-periodical actions.
arXiv Detail & Related papers (2021-06-17T08:08:11Z)
- Hawkes Processes on Graphons [85.6759041284472]
We study Hawkes processes and their variants that are associated with Granger causality graphs.
We can generate the corresponding Hawkes processes and simulate event sequences.
We learn the proposed model by minimizing the hierarchical optimal transport distance between the generated event sequences and the observed ones.
arXiv Detail & Related papers (2021-02-04T17:09:50Z)
- Noisy Agents: Self-supervised Exploration by Predicting Auditory Events [127.82594819117753]
We propose a novel type of intrinsic motivation for Reinforcement Learning (RL) that encourages the agent to understand the causal effect of its actions.
We train a neural network to predict the auditory events and use the prediction errors as intrinsic rewards to guide RL exploration.
Experimental results on Atari games show that our new intrinsic motivation significantly outperforms several state-of-the-art baselines.
arXiv Detail & Related papers (2020-07-27T17:59:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.