Extracting Fast and Slow: User-Action Embedding with Inter-temporal
Information
- URL: http://arxiv.org/abs/2206.09535v1
- Date: Mon, 20 Jun 2022 02:04:04 GMT
- Title: Extracting Fast and Slow: User-Action Embedding with Inter-temporal
Information
- Authors: Akira Matsui, Emilio Ferrara
- Abstract summary: We propose a method that analyzes user actions together with inter-temporal information (time intervals).
We embed the user's action sequence and its time intervals to obtain a low-dimensional representation of the action along with intertemporal information.
This paper demonstrates that explicit modeling of action sequences and inter-temporal user behavior information enables successful, interpretable analysis.
- Score: 8.697025191437774
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With the recent development of technology, detailed data on human temporal
behavior have become available. Many methods have been proposed to mine such dynamic
behavior data, and they have revealed valuable insights for research and business.
However, most methods analyze only the sequence of actions and do not study
inter-temporal information, such as the time intervals between actions, in a holistic
manner. Although actions and the time intervals between them are interdependent,
integrating the two is challenging because they are of different natures: time and
action. To overcome this challenge, we propose a unified method that analyzes user
actions together with inter-temporal information (time intervals). We simultaneously
embed the user's action sequence and its time intervals to obtain a low-dimensional
representation of each action along with its inter-temporal context. Using three
real-world data sets, the paper demonstrates that the proposed method characterizes
user actions in terms of their temporal context, and that explicit modeling of action
sequences and inter-temporal user behavior information enables successful,
interpretable analysis.
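The abstract describes the method only at a high level. As a rough illustration of the core idea, the sketch below discretizes the gap between consecutive actions into coarse log-scale bucket tokens, interleaves them with the action tokens, and trains a standard skip-gram model (gensim's Word2Vec) so that actions and interval buckets share one low-dimensional space. The bucket boundaries, token names, toy data, and the choice of skip-gram are illustrative assumptions, not necessarily the authors' exact formulation.

```python
# Illustrative sketch only (not the authors' released implementation):
# jointly embed user actions and discretized time-interval tokens.
import math
from gensim.models import Word2Vec

# Assumed toy input: per-user lists of (action, timestamp-in-seconds) pairs.
user_logs = {
    "u1": [("login", 0), ("search", 5), ("click", 8), ("purchase", 4000)],
    "u2": [("login", 0), ("click", 3600), ("logout", 3700)],
}

def interval_token(seconds: float) -> str:
    """Map an inter-action gap to a coarse log-scale bucket token."""
    bucket = min(int(math.log10(max(seconds, 1))), 5)
    return f"<dt_{bucket}>"  # e.g. <dt_0> ~ seconds, <dt_3> ~ hours

def to_sentence(events):
    """Interleave action tokens with the interval token preceding each action."""
    tokens = [events[0][0]]
    for (_, prev_t), (act, t) in zip(events, events[1:]):
        tokens.append(interval_token(t - prev_t))
        tokens.append(act)
    return tokens

sentences = [to_sentence(events) for events in user_logs.values()]

# Skip-gram over the mixed token stream: actions and interval buckets land in
# the same low-dimensional space, so an action's neighbours reveal its typical
# temporal context (whether it tends to follow "fast" or "slow" gaps).
model = Word2Vec(sentences, vector_size=16, window=3, min_count=1, sg=1, epochs=50)
print(model.wv.most_similar("purchase", topn=3))
```

On real logs, comparing an action's vector with the <dt_*> tokens indicates whether that action typically occurs after short ("fast") or long ("slow") gaps, which is the kind of temporal characterization the abstract describes.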
Related papers
- Interactive Counterfactual Generation for Univariate Time Series [7.331969743532515]
Our approach aims to enhance the transparency and understanding of deep learning models' decision processes.
By abstracting user interactions with the projected data points, our method facilitates an intuitive generation of counterfactual explanations.
We validate this method using the ECG5000 benchmark dataset, demonstrating significant improvements in interpretability and user understanding of time series classification.
arXiv Detail & Related papers (2024-08-20T08:19:55Z) - TimeGraphs: Graph-based Temporal Reasoning [64.18083371645956]
TimeGraphs is a novel approach that characterizes dynamic interactions as a hierarchical temporal graph.
Our approach models the interactions using a compact graph-based representation, enabling adaptive reasoning across diverse time scales.
We evaluate TimeGraphs on multiple datasets with complex, dynamic agent interactions, including a football simulator, the Resistance game, and the MOMA human activity dataset.
arXiv Detail & Related papers (2024-01-06T06:26:49Z) - Spatio-Temporal Branching for Motion Prediction using Motion Increments [55.68088298632865]
Human motion prediction (HMP) has emerged as a popular research topic due to its diverse applications.
Traditional methods rely on hand-crafted features and machine learning techniques.
We propose a novel spatio-temporal branching network using incremental information for HMP.
arXiv Detail & Related papers (2023-08-02T12:04:28Z) - Learning Self-Modulating Attention in Continuous Time Space with
Applications to Sequential Recommendation [102.24108167002252]
We propose a novel attention network, named self-modulating attention, that models the complex and non-linearly evolving dynamic user preferences.
We empirically demonstrate the effectiveness of our method on top-N sequential recommendation tasks, and the results on three large-scale real-world datasets show that our model can achieve state-of-the-art performance.
arXiv Detail & Related papers (2022-03-30T03:54:11Z) - Continuous Human Action Recognition for Human-Machine Interaction: A
Review [39.593687054839265]
Recognising actions within an input video is a challenging but necessary task for applications that require real-time human-machine interaction.
We provide an overview of the feature extraction and learning strategies used in most state-of-the-art methods.
We investigate the application of such models to real-world scenarios and discuss several limitations and key research directions.
arXiv Detail & Related papers (2022-02-26T09:25:44Z) - Learning Dual Dynamic Representations on Time-Sliced User-Item
Interaction Graphs for Sequential Recommendation [62.30552176649873]
We devise a novel Dynamic Representation Learning model for Sequential Recommendation (DRL-SRe).
To better model the user-item interactions for characterizing the dynamics from both sides, the proposed model builds a global user-item interaction graph for each time slice.
To enable the model to capture fine-grained temporal information, we propose an auxiliary temporal prediction task over consecutive time slices.
arXiv Detail & Related papers (2021-09-24T07:44:27Z) - Spatio-Temporal Context for Action Detection [2.294635424666456]
This work proposes to use non-aggregated temporal information.
The main contribution is the introduction of two cross attention blocks.
Experiments on the AVA dataset show the advantages of the proposed approach.
arXiv Detail & Related papers (2021-06-29T08:33:48Z) - Exploring Temporal Context and Human Movement Dynamics for Online Action
Detection in Videos [32.88517041655816]
Temporal context and human movement dynamics can be effectively employed for online action detection.
Our approach uses various state-of-the-art architectures and appropriately combines the extracted features in order to improve action detection.
arXiv Detail & Related papers (2021-06-26T08:34:19Z) - Information Interaction Profile of Choice Adoption [2.9972063833424216]
We introduce an efficient method to infer the entities' interaction network and its evolution according to the temporal distance separating interacting entities.
The interaction profile allows characterizing the mechanisms of the interaction processes.
We show that the effect of a combination of exposures on a user is more than the sum of each exposure's independent effect; that is, there is an interaction.
arXiv Detail & Related papers (2021-04-28T10:42:25Z) - Intra- and Inter-Action Understanding via Temporal Action Parsing [118.32912239230272]
We construct a new dataset developed on sport videos with manual annotations of sub-actions, and conduct a study on temporal action parsing on top.
Our study shows that a sport activity usually consists of multiple sub-actions and that the awareness of such temporal structures is beneficial to action recognition.
We also investigate a number of temporal parsing methods, and thereon devise an improved method that is capable of mining sub-actions from training data without knowing their labels.
arXiv Detail & Related papers (2020-05-20T17:45:18Z) - Inferring Temporal Compositions of Actions Using Probabilistic Automata [61.09176771931052]
We propose to express temporal compositions of actions as semantic regular expressions and derive an inference framework using probabilistic automata.
Our approach is different from existing works that either predict long-range complex activities as unordered sets of atomic actions, or retrieve videos using natural language sentences.
arXiv Detail & Related papers (2020-04-28T00:15:26Z)
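The last related entry describes inferring temporal compositions of actions with probabilistic automata. Purely as a hypothetical illustration (not the cited paper's framework), the toy below assigns a probability to an observed action sequence by walking a small hand-written automaton; the states, actions, and transition probabilities are invented for the example.

```python
# Hypothetical toy example (not the cited paper's code): score an action
# sequence with a small hand-written probabilistic automaton.
# States, actions, and probabilities below are invented for illustration.
TRANSITIONS = {
    ("start", "login"): ("browsing", 1.0),
    ("browsing", "search"): ("browsing", 0.6),
    ("browsing", "click"): ("browsing", 0.3),
    ("browsing", "purchase"): ("done", 0.1),
}

def score(actions, state="start"):
    """Return the probability the automaton assigns to an action sequence."""
    prob = 1.0
    for act in actions:
        nxt = TRANSITIONS.get((state, act))
        if nxt is None:
            return 0.0  # sequence not accepted by this automaton
        state, p = nxt
        prob *= p
    return prob

print(score(["login", "search", "click", "purchase"]))  # 1.0 * 0.6 * 0.3 * 0.1 ≈ 0.018
```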