Representation and Processing of Instantaneous and Durative Temporal
Phenomena
- URL: http://arxiv.org/abs/2108.13365v1
- Date: Fri, 27 Aug 2021 11:28:06 GMT
- Title: Representation and Processing of Instantaneous and Durative Temporal
Phenomena
- Authors: Manolis Pitsikalis, Alexei Lisitsa and Shan Luo
- Abstract summary: Event definitions in Complex Event Processing systems are constrained by the expressiveness of each system's language.
We propose a new logic-based temporal phenomena definition language, specifically tailored for Complex Event Processing.
- Score: 14.501997665147234
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Event definitions in Complex Event Processing systems are constrained by the
expressiveness of each system's language. Some systems allow the definition of
instantaneous complex events, while others allow the definition of durative
complex events. While there are exceptions that offer both options, they often
lack interval relations such as those specified by Allen's interval algebra. In
this paper, we propose a new logic-based temporal phenomena definition
language, specifically tailored for Complex Event Processing, that
allows the representation of both instantaneous and durative phenomena and the
temporal relations between them. Moreover, we demonstrate the expressiveness of
our proposed language by employing a maritime use case where we define maritime
events of interest. Finally, we analyse the execution semantics of our proposed
language for stream processing and introduce the `Phenesthe' implementation
prototype.
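To make the distinction between point-based and interval-based phenomena concrete, the sketch below checks a few Allen-style relations between instantaneous events (time points) and durative events (time intervals) in plain Python. It is only an illustrative example under assumed definitions; the class names, predicate names and maritime events are hypothetical and do not reflect the actual syntax or execution semantics of the Phenesthe language.

```python
# Illustrative sketch only: plain-Python checks of a few Allen-style interval
# relations between instantaneous events (time points) and durative events
# (time intervals). Names and events are hypothetical, not Phenesthe syntax.
from dataclasses import dataclass

@dataclass(frozen=True)
class Instant:
    t: float          # a single time point, e.g. seconds since stream start

@dataclass(frozen=True)
class Interval:
    start: float
    end: float        # assume start < end for a proper durative phenomenon

def before(i: Interval, j: Interval) -> bool:
    """Allen's 'before': i ends strictly before j starts."""
    return i.end < j.start

def meets(i: Interval, j: Interval) -> bool:
    """Allen's 'meets': i ends exactly where j starts."""
    return i.end == j.start

def overlaps(i: Interval, j: Interval) -> bool:
    """Allen's 'overlaps': i starts first and ends inside j."""
    return i.start < j.start < i.end < j.end

def during(i: Interval, j: Interval) -> bool:
    """Allen's 'during': i lies strictly inside j."""
    return j.start < i.start and i.end < j.end

def holds_at(p: Instant, i: Interval) -> bool:
    """An instantaneous event falling inside a durative one (a point-interval
    relation, not one of Allen's 13 interval-interval relations)."""
    return i.start <= p.t <= i.end

# Hypothetical maritime-flavoured events, in the spirit of the paper's use case.
stopped = Interval(100.0, 460.0)      # vessel reported as stopped
in_port = Interval(90.0, 500.0)       # vessel inside a port area
gap_start = Instant(250.0)            # a communication gap begins

print(during(stopped, in_port))       # True: the stop happens while in port
print(holds_at(gap_start, stopped))   # True: the gap starts while stopped
```

In the paper's setting such relations are expressed declaratively in the proposed language and evaluated over streams; the sketch only illustrates the underlying point/interval distinction.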
Related papers
- An Interleaving Semantics of the Timed Concurrent Language for
Argumentation to Model Debates and Dialogue Games [0.0]
We propose a language for modelling concurrent interaction between agents.
Such a language exploits a shared memory used by the agents to communicate and reason on the acceptability of their beliefs.
We show how it can be used to model interactions such as debates and dialogue games taking place between intelligent agents.
arXiv Detail & Related papers (2023-06-13T10:41:28Z)
- Fuzzy Temporal Protoforms for the Quantitative Description of Processes in Natural Language [0.0]
The model includes temporal and causal information from processes and attributes, quantifies attributes in time during the process life-span and recalls causal relations and temporal distances between events.
A real use-case in the cardiology domain is presented, showing the potential of our model for providing natural language explanations addressed to domain experts.
arXiv Detail & Related papers (2023-05-16T14:59:38Z)
- Plurality and Quantification in Graph Representation of Meaning [4.82512586077023]
Our graph language covers the essentials of natural language semantics using only monadic second-order variables.
We present a unification-based mechanism for constructing semantic graphs at a simple syntax-semantics interface.
The present graph formalism is applied to linguistic issues in distributive predication, cross-categorial conjunction, and scope permutation of quantificational expressions.
arXiv Detail & Related papers (2021-12-13T07:04:41Z)
- Context-Dependent Semantic Parsing for Temporal Relation Extraction [2.5807659587068534]
We propose SMARTER, a neural semantic representation, to extract temporal information in text effectively.
In the inference phase, SMARTER generates a temporal relation graph by executing the logical form.
The accurate logical form representations of an event given context ensure the correctness of the extracted relations.
arXiv Detail & Related papers (2021-12-02T00:29:21Z)
- Extracting Event Temporal Relations via Hyperbolic Geometry [18.068466562913923]
We introduce two approaches to encode events and their temporal relations in hyperbolic spaces.
One approach leverages hyperbolic embeddings to directly infer event relations through simple geometrical operations.
In the second one, we devise an end-to-end architecture composed of hyperbolic neural units tailored for the temporal relation extraction task.
arXiv Detail & Related papers (2021-09-12T14:40:13Z)
- Temporal and Object Quantification Networks [95.64650820186706]
We present a new class of neuro-symbolic networks with a structural bias that enables them to learn to recognize complex relational-temporal events.
We demonstrate that TOQ-Nets can generalize from small amounts of data to scenarios containing more objects than were present during training and to temporal warpings of input sequences.
arXiv Detail & Related papers (2021-06-10T16:18:21Z)
- Infusing Finetuning with Semantic Dependencies [62.37697048781823]
We show that, unlike syntax, semantics is not brought to the surface by today's pretrained models.
We then use convolutional graph encoders to explicitly incorporate semantic parses into task-specific finetuning.
arXiv Detail & Related papers (2020-12-10T01:27:24Z)
- Pairwise Representation Learning for Event Coreference [73.10563168692667]
We develop a Pairwise Representation Learning (PairwiseRL) scheme for the event mention pairs.
Our representation supports a finer, structured representation of the text snippet to facilitate encoding events and their arguments.
We show that PairwiseRL, despite its simplicity, outperforms the prior state-of-the-art event coreference systems on both cross-document and within-document event coreference benchmarks.
arXiv Detail & Related papers (2020-10-24T06:55:52Z)
- Joint Constrained Learning for Event-Event Relation Extraction [94.3499255880101]
We propose a joint constrained learning framework for modeling event-event relations.
Specifically, the framework enforces logical constraints within and across multiple temporal and subevent relations.
We show that our joint constrained learning approach effectively compensates for the lack of jointly labeled data.
arXiv Detail & Related papers (2020-10-13T22:45:28Z)
- Language Guided Networks for Cross-modal Moment Retrieval [66.49445903955777]
Cross-modal moment retrieval aims to localize a temporal segment from an untrimmed video described by a natural language query.
Existing methods independently extract the features of videos and sentences.
We present Language Guided Networks (LGN), a new framework that leverages the sentence embedding to guide the whole process of moment retrieval.
arXiv Detail & Related papers (2020-06-18T12:08:40Z)
- Inferring Temporal Compositions of Actions Using Probabilistic Automata [61.09176771931052]
We propose to express temporal compositions of actions as semantic regular expressions and derive an inference framework using probabilistic automata.
Our approach is different from existing works that either predict long-range complex activities as unordered sets of atomic actions, or retrieve videos using natural language sentences.
arXiv Detail & Related papers (2020-04-28T00:15:26Z)
This list is automatically generated from the titles and abstracts of the papers on this site.