Online Learning Probabilistic Event Calculus Theories in Answer Set
Programming
- URL: http://arxiv.org/abs/2104.00158v1
- Date: Wed, 31 Mar 2021 23:16:29 GMT
- Title: Online Learning Probabilistic Event Calculus Theories in Answer Set
Programming
- Authors: Nikos Katzouris, Alexander Artikis and Georgios Paliouras
- Abstract summary: Complex Event Recognition (CER) systems detect event occurrences in streaming time-stamped datasets using predefined event patterns.
We present a system based on Answer Set Programming (ASP), capable of probabilistic reasoning with complex event patterns in the form of rules weighted in the Event Calculus.
Our results demonstrate the superiority of our novel approach, both in terms of efficiency and predictive performance.
- Score: 70.06301658267125
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Complex Event Recognition (CER) systems detect event occurrences in streaming
time-stamped input using predefined event patterns. Logic-based approaches are
of special interest in CER, since, via Statistical Relational AI, they combine
uncertainty-resilient reasoning with time and change, with machine learning,
thus alleviating the cost of manual event pattern authoring. We present a
system based on Answer Set Programming (ASP), capable of probabilistic
reasoning with complex event patterns in the form of weighted rules in the
Event Calculus, whose structure and weights are learnt online. We compare our
ASP-based implementation with a Markov Logic-based one and with a number of
state-of-the-art batch learning algorithms on CER datasets for activity
recognition, maritime surveillance and fleet management. Our results
demonstrate the superiority of our novel approach, both in terms of efficiency
and predictive performance. This paper is under consideration for publication
in Theory and Practice of Logic Programming (TPLP).
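As an illustration of the underlying formalism (a minimal sketch, not the paper's learner), the Event Calculus law of inertia states that a fluent, once initiated by an event, persists over time until terminated. The function below is a hypothetical discrete-time rendering of that semantics; `initiated` and `terminated` are assumed event sets supplied by the caller:

```python
def holds_over_time(horizon, initiated, terminated):
    """Return the set of time points at which a fluent holds.

    initiated / terminated: sets of time points at which initiating /
    terminating events occur (hypothetical inputs for this sketch).
    Follows the usual convention that an event at time t affects the
    fluent from time t+1 onward (the law of inertia).
    """
    holds = set()
    holding = False
    for t in range(horizon):
        if holding:
            holds.add(t)
        if t in initiated:
            holding = True
        if t in terminated:
            holding = False
    return holds

# Example: a fluent initiated at t=2 and terminated at t=5
print(holds_over_time(8, initiated={2}, terminated={5}))  # → {3, 4, 5}
```

The paper's weighted ASP rules attach learnable weights to initiation and termination conditions; this sketch only shows the deterministic inertia semantics they build on.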
Related papers
- A Unified Framework for Neural Computation and Learning Over Time [56.44910327178975]
Hamiltonian Learning is a novel unified framework for learning with neural networks "over time".
It is based on differential equations that: (i) can be integrated without the need of external software solvers; (ii) generalize the well-established notion of gradient-based learning in feed-forward and recurrent networks; (iii) open up novel perspectives.
arXiv Detail & Related papers (2024-09-18T14:57:13Z) - Learning Logic Specifications for Policy Guidance in POMDPs: an
Inductive Logic Programming Approach [57.788675205519986]
We learn high-quality traces from POMDP executions generated by any solver.
We exploit data- and time-efficient Inductive Logic Programming (ILP) to generate interpretable belief-based policy specifications.
We show that the learned specifications, expressed in Answer Set Programming (ASP), yield performance superior to neural networks and similar to optimal handcrafted task-specific policies, at lower computational cost.
arXiv Detail & Related papers (2024-02-29T15:36:01Z) - Can Learning Deteriorate Control? Analyzing Computational Delays in
Gaussian Process-Based Event-Triggered Online Learning [7.697964930378468]
We propose a novel event trigger for GP-based online learning with computational delays.
We show that it offers advantages over offline-trained GP models for sufficiently small computation times.
arXiv Detail & Related papers (2023-05-14T14:37:33Z) - ETLP: Event-based Three-factor Local Plasticity for online learning with
neuromorphic hardware [105.54048699217668]
We show that Event-Based Three-factor Local Plasticity (ETLP) achieves competitive accuracy with a clear advantage in computational complexity.
We also show that when using local plasticity, threshold adaptation in spiking neurons and a recurrent topology are necessary to learn temporal patterns with a rich temporal structure.
arXiv Detail & Related papers (2023-01-19T19:45:42Z) - Alignment-based conformance checking over probabilistic events [4.060731229044571]
We introduce a weighted trace model, a weighted alignment cost function, and a custom threshold parameter that controls the level of confidence in the event data.
The resulting algorithm considers activities of lower but sufficiently high probability that better align with the process model.
arXiv Detail & Related papers (2022-09-09T14:07:37Z) - Learning Automata-Based Complex Event Patterns in Answer Set Programming [0.30458514384586405]
We propose a family of automata where the transition-enabling conditions are defined by Answer Set Programming (ASP) rules.
We present such a learning approach in ASP and an incremental version thereof that trades optimality for efficiency and is capable of scaling to large datasets.
We evaluate our approach on two CER datasets and compare it to state-of-the-art automata learning techniques, demonstrating empirically a superior performance.
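To make the idea concrete, here is a hypothetical sketch (in Python rather than ASP) of a symbolic automaton whose transitions fire when a guard predicate holds on the incoming event, loosely mirroring ASP-defined transition-enabling conditions; the event names and pattern below are invented for illustration:

```python
def run_automaton(events, transitions, start, accepting):
    """transitions: dict mapping state -> list of (guard, next_state).

    Consumes the event stream, taking the first transition whose guard
    accepts the current event; returns whether the run is accepting.
    """
    state = start
    for ev in events:
        for guard, next_state in transitions.get(state, []):
            if guard(ev):
                state = next_state
                break
    return state in accepting

# Toy pattern: accept once we see "approach" followed by "contact"
transitions = {
    0: [(lambda e: e == "approach", 1)],
    1: [(lambda e: e == "contact", 2)],
}
print(run_automaton(["idle", "approach", "contact"], transitions, 0, {2}))  # → True
```

In the paper the guards are ASP rules over event attributes and are themselves learned, rather than the hand-written lambdas used here.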
arXiv Detail & Related papers (2022-08-31T12:40:44Z) - pRSL: Interpretable Multi-label Stacking by Learning Probabilistic Rules [0.0]
We present probabilistic rule stacking (pRSL), which uses probabilistic propositional logic rules and belief propagation to combine the predictions of several underlying classifiers.
We derive algorithms for exact and approximate inference and learning, and show that pRSL reaches state-of-the-art performance on various benchmark datasets.
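As a toy illustration of combining classifier outputs under a logical rule (a brute-force enumeration sketch, not the pRSL algorithm itself), one can enumerate joint label assignments, zero out those violating a hard rule such as "A implies B", and renormalize:

```python
from itertools import product

def combine_with_rule(p_a, p_b):
    """Combine independent classifier probabilities for binary labels A, B
    under the hard propositional rule A -> B, by exact enumeration.

    p_a, p_b: classifier probabilities that labels A and B are true
    (hypothetical inputs for this sketch).
    """
    joint = {}
    for a, b in product([0, 1], repeat=2):
        w = (p_a if a else 1 - p_a) * (p_b if b else 1 - p_b)
        if a == 1 and b == 0:  # assignment violates A -> B
            w = 0.0
        joint[(a, b)] = w
    z = sum(joint.values())
    return {assignment: w / z for assignment, w in joint.items()}

dist = combine_with_rule(0.5, 0.5)
print(round(dist[(1, 1)], 3))  # → 0.333
```

pRSL replaces this exponential enumeration with belief propagation and supports soft (probabilistic) rules; the sketch only shows the effect of conditioning predictions on a rule.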
arXiv Detail & Related papers (2021-05-28T14:06:21Z) - Online Reinforcement Learning Control by Direct Heuristic Dynamic
Programming: from Time-Driven to Event-Driven [80.94390916562179]
Time-driven learning refers to the machine learning method that updates parameters in a prediction model continuously as new data arrives.
It is desirable to prevent the time-driven dHDP from updating due to insignificant system events such as noise.
We show how the event-driven dHDP algorithm works in comparison to the original time-driven dHDP.
arXiv Detail & Related papers (2020-06-16T05:51:25Z) - Learned Factor Graphs for Inference from Stationary Time Sequences [107.63351413549992]
We propose a framework that combines model-based algorithms and data-driven ML tools for stationary time sequences.
Neural networks are developed to separately learn specific components of a factor graph describing the distribution of the time sequence.
We present an inference algorithm based on learned stationary factor graphs, which learns to implement the sum-product scheme from labeled data.
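For intuition, the sum-product scheme on a simple chain factor graph can be sketched as forward-backward message passing (an illustrative example, unrelated to the paper's learned neural factors); `unaries` and `pairwise` are assumed non-negative factors:

```python
import numpy as np

def chain_marginals(unaries, pairwise):
    """Exact per-step marginals on a chain factor graph via sum-product.

    unaries: (T, K) non-negative evidence factors, one per time step.
    pairwise: (K, K) transition factor shared across the chain
    (the stationarity assumption).
    """
    T = len(unaries)
    fwd = np.zeros_like(unaries, dtype=float)
    bwd = np.ones_like(unaries, dtype=float)
    fwd[0] = unaries[0]
    for t in range(1, T):                 # forward messages
        fwd[t] = unaries[t] * (pairwise.T @ fwd[t - 1])
    for t in range(T - 2, -1, -1):        # backward messages
        bwd[t] = pairwise @ (unaries[t + 1] * bwd[t + 1])
    marg = fwd * bwd
    return marg / marg.sum(axis=1, keepdims=True)
```

With a uniform pairwise factor the marginals reduce to the normalized unaries, a quick sanity check on the message recursions.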
arXiv Detail & Related papers (2020-06-05T07:06:19Z) - Relational Neural Machines [19.569025323453257]
This paper presents a novel framework allowing one to jointly train the parameters of the learners and of a First-Order Logic based reasoner.
A Relational Neural Machine is able to recover both classical learning results in the case of pure sub-symbolic learning, and Markov Logic Networks.
Proper algorithmic solutions are devised to make learning and inference tractable in large-scale problems.
arXiv Detail & Related papers (2020-02-06T10:53:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.