Analogous Process Structure Induction for Sub-event Sequence Prediction
- URL: http://arxiv.org/abs/2010.08525v1
- Date: Fri, 16 Oct 2020 17:35:40 GMT
- Title: Analogous Process Structure Induction for Sub-event Sequence Prediction
- Authors: Hongming Zhang, Muhao Chen, Haoyu Wang, Yangqiu Song, Dan Roth
- Abstract summary: We propose an Analogous Process Structure Induction (APSI) framework to predict the whole sub-event sequence of previously unseen processes.
As our experiments and analysis indicate, APSI supports the generation of meaningful sub-event sequences for unseen processes and can help predict missing events.
- Score: 111.10887596684276
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Computational and cognitive studies of event understanding suggest that
identifying, comprehending, and predicting events depend on having structured
representations of a sequence of events and on conceptualizing (abstracting)
its components into (soft) event categories. Thus, knowledge about a known
process such as "buying a car" can be used in the context of a new but
analogous process such as "buying a house". Nevertheless, most event
understanding work in NLP is still at the ground level and does not consider
abstraction. In this paper, we propose an Analogous Process Structure Induction
(APSI) framework, which leverages analogies among processes and conceptualization
of sub-event instances to predict the whole sub-event sequence of previously
unseen open-domain processes. As our experiments and analysis indicate, APSI
supports the generation of meaningful sub-event sequences for unseen processes
and can help predict missing events.
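To make the analogy idea concrete, the following is a minimal, self-contained sketch (not the authors' implementation) of how an analogy-based predictor could transfer the sub-event structure of a known process such as "buying a car" to an unseen but analogous process such as "buying a house". The tiny process library, the bag-of-words similarity, and the last-word object heuristic are all illustrative assumptions; APSI itself relies on learned conceptualization of sub-event instances.

```python
# Illustrative sketch only: a retrieve-and-instantiate baseline for analogous
# process structure induction. The process library, similarity measure, and
# object-substitution heuristic are assumptions, not the APSI method itself.
from collections import Counter
from math import sqrt

# Hypothetical library of known processes and their sub-event sequences.
KNOWN_PROCESSES = {
    "buy a car": ["research a car", "inspect a car", "negotiate the price",
                  "sign the contract", "pay for a car"],
    "plant a tree": ["dig a hole", "place a tree in the hole",
                     "cover the roots", "water a tree"],
}

def _bag_of_words(text):
    return Counter(text.lower().split())

def _cosine(a, b):
    dot = sum(a[w] * b[w] for w in a if w in b)
    norm_a = sqrt(sum(v * v for v in a.values()))
    norm_b = sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def predict_subevents(unseen_process):
    """Predict a sub-event sequence for an unseen process by analogy."""
    # 1) Retrieve the most similar known process (surface similarity here;
    #    APSI instead conceptualizes processes and sub-events into soft types).
    query = _bag_of_words(unseen_process)
    source = max(KNOWN_PROCESSES, key=lambda p: _cosine(query, _bag_of_words(p)))
    # 2) Abstract: treat the source process's object (its last word) as a slot.
    source_object = source.split()[-1]
    target_object = unseen_process.split()[-1]
    # 3) Instantiate the abstract sub-event sequence for the new object.
    return [step.replace(source_object, target_object)
            for step in KNOWN_PROCESSES[source]]

if __name__ == "__main__":
    print(predict_subevents("buy a house"))
    # ['research a house', 'inspect a house', 'negotiate the price',
    #  'sign the contract', 'pay for a house']
```

Replacing the bag-of-words retrieval with learned representations and the last-word heuristic with event conceptualization would bring this sketch closer in spirit to the paper's setup.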
Related papers
- Pretext Training Algorithms for Event Sequence Data [29.70078362944441]
This paper proposes a self-supervised pretext training framework tailored to event sequence data.
Our pretext tasks unlock foundational representations that are generalizable across different downstream tasks.
arXiv Detail & Related papers (2024-02-16T01:25:21Z)
- Distilling Event Sequence Knowledge From Large Language Models [17.105913216452738]
Event sequence models have been found to be highly effective in the analysis and prediction of events.
We use Large Language Models to generate event sequences that can effectively be used for probabilistic event model construction.
We show that our approach can generate high-quality event sequences, filling a knowledge gap in the input KG.
arXiv Detail & Related papers (2024-01-14T09:34:42Z)
- A Reversible Perspective on Petri Nets and Event Structures [0.0]
Event structures have emerged as a foundational model for concurrent computation.
Event structures have been extended to address reversibility, where processes can undo previous computations.
We introduce a subset of contextual Petri nets, dubbed reversible causal nets, that precisely correspond to reversible prime event structures.
arXiv Detail & Related papers (2023-12-27T20:47:48Z)
- Accessing and Interpreting OPC UA Event Traces based on Semantic Process Descriptions [69.9674326582747]
This paper proposes an approach to access a production system's event data based on the event data's context.
The approach extracts filtered event logs from a database system by combining: 1) a semantic model of the production system's hierarchical structure, 2) a formalized process description, and 3) an OPC UA information model.
arXiv Detail & Related papers (2022-07-25T15:13:44Z)
- FineDiving: A Fine-grained Dataset for Procedure-aware Action Quality Assessment [93.09267863425492]
We argue that understanding both high-level semantics and internal temporal structures of actions in competitive sports videos is the key to making predictions accurate and interpretable.
We construct a new fine-grained dataset, called FineDiving, developed on diverse diving events with detailed annotations on action procedures.
arXiv Detail & Related papers (2022-04-07T17:59:32Z)
- Learning Constraints and Descriptive Segmentation for Subevent Detection [74.48201657623218]
We propose an approach to learning and enforcing constraints that capture dependencies between subevent detection and EventSeg prediction.
We adopt Rectifier Networks for constraint learning and then convert the learned constraints to a regularization term in the loss function of the neural model.
arXiv Detail & Related papers (2021-09-13T20:50:37Z)
- Interpreting Process Predictions using a Milestone-Aware Counterfactual Approach [0.0]
We explore the use of a popular model-agnostic counterfactual algorithm, DiCE, in the context of predictive process analytics.
The analysis reveals that the algorithm is limited when applied to derive explanations of process predictions.
We propose an approach that supports deriving milestone-aware counterfactuals at different stages of a trace to promote interpretability.
arXiv Detail & Related papers (2021-07-19T09:14:16Z)
- Online Learning Probabilistic Event Calculus Theories in Answer Set Programming [70.06301658267125]
Complex Event Recognition (CER) systems detect event occurrences in streaming, time-stamped datasets using predefined event patterns.
We present a system based on Answer Set Programming (ASP), capable of probabilistic reasoning with complex event patterns in the form of rules weighted in the Event Calculus.
Our results demonstrate the superiority of this novel approach in terms of both efficiency and predictive accuracy.
arXiv Detail & Related papers (2021-03-31T23:16:29Z)
- "What Are You Trying to Do?" Semantic Typing of Event Processes [94.3499255880101]
This paper studies a new cognitively motivated semantic typing task, multi-axis event process typing.
We develop a large dataset containing over 60k event processes, featuring ultra fine-grained typing on both the action and object type axes.
We propose a hybrid learning framework, P2GT, which addresses the challenging typing problem with indirect supervision from glosses and a joint learning-to-rank framework.
arXiv Detail & Related papers (2020-10-13T22:37:29Z)
- Using Sampling Strategy to Assist Consensus Sequence Analysis [3.983901161231557]
We propose a novel sampling strategy to determine the number of traces necessary to produce a representative consensus sequence.
We show how to estimate the difference between the predefined Expert Model and the real processes carried out.
arXiv Detail & Related papers (2020-08-19T07:12:09Z)