Faster than LASER -- Towards Stream Reasoning with Deep Neural Networks
- URL: http://arxiv.org/abs/2106.08457v1
- Date: Tue, 15 Jun 2021 22:06:12 GMT
- Title: Faster than LASER -- Towards Stream Reasoning with Deep Neural Networks
- Authors: João Ferreira, Diogo Lavado, Ricardo Gonçalves, Matthias Knorr, Ludwig Krippahl, and João Leite
- Abstract summary: Stream Reasoners aim at bridging this gap between reasoning and stream processing.
LASER is a stream reasoner designed to analyse and perform complex reasoning over streams of data.
We study whether Convolutional and Recurrent Neural Networks, which have been shown to be particularly well suited for time series forecasting and classification, can be trained to approximate reasoning with LASER.
- Score: 0.6649973446180738
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: With the constant increase of available data in various domains, such as the
Internet of Things, Social Networks or Smart Cities, it has become fundamental
that agents are able to process and reason with such data in real time.
Reasoning over time-annotated data with background knowledge can be challenging
due to the volume and velocity at which such data is produced, yet such complex
reasoning is necessary in scenarios where agents need to discover potential
problems that cannot be detected with simple stream processing techniques.
Stream Reasoners aim at bridging this gap between reasoning and stream
processing, and LASER is one such stream reasoner, designed to analyse and
perform complex reasoning over streams of data. It is based on LARS, a
rule-based logical language extending Answer Set Programming, and it has shown
better runtime results than other state-of-the-art stream reasoning systems.
Nevertheless, for high levels of data throughput, even LASER may be unable to
compute answers in a timely fashion. In this paper, we study whether
Convolutional and Recurrent Neural Networks, which have been shown to be
particularly well suited for time series forecasting and classification, can be
trained to approximate reasoning with LASER, so that agents can benefit from
their high processing speed.
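The approximation setup described in the abstract can be sketched as supervised learning: a symbolic reasoner labels windows of a time-annotated stream, and a network is trained to reproduce those labels directly from the raw window. The sketch below is illustrative only; the window rule (a LARS-style box operator, "alert iff atom a held at every time point in the last W steps"), the window size, and the plain logistic model standing in for the paper's CNNs/RNNs are all assumptions, not the paper's actual experimental setup.

```python
# Minimal sketch: label windows of a boolean-atom stream with a simple
# LARS-style window rule, then fit a model to imitate the rule's output.
# The logistic model is a stand-in for the CNN/RNN used in the paper.
import numpy as np

rng = np.random.default_rng(0)
W = 3      # window size (time points)
T = 4000   # stream length

# Stream of two boolean atoms (a, b); one row per time point.
stream = rng.integers(0, 2, size=(T, 2))

def reasoner(window):
    # Ground-truth "reasoner": alert(t) holds iff atom a (column 0)
    # was true at every time point in the window (box operator).
    return int(window[:, 0].all())

# Training pairs: flattened window -> rule output at the window's end.
X = np.array([stream[t - W + 1 : t + 1].ravel() for t in range(W - 1, T)])
y = np.array([reasoner(stream[t - W + 1 : t + 1]) for t in range(W - 1, T)])

# Tiny logistic model trained by full-batch gradient descent on log loss.
w = np.zeros(X.shape[1])
b = 0.0
for _ in range(5000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid prediction
    g = p - y                               # gradient of the log loss
    w -= 0.5 * (X.T @ g) / len(y)
    b -= 0.5 * g.mean()

pred = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(int)
accuracy = (pred == y).mean()
print(f"window-rule approximation accuracy: {accuracy:.3f}")
```

Once trained, the model answers each window with a handful of arithmetic operations instead of a reasoning step, which is the speed advantage the paper investigates; the open question it studies is how closely such a network can track the reasoner's outputs on realistic LARS programs.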
Related papers
- State-Space Modeling in Long Sequence Processing: A Survey on Recurrence in the Transformer Era [59.279784235147254]
This survey provides an in-depth summary of the latest approaches that are based on recurrent models for sequential data processing.
The emerging picture suggests that there is room for thinking of novel routes, constituted by learning algorithms which depart from the standard Backpropagation Through Time.
arXiv Detail & Related papers (2024-06-13T12:51:22Z)
- Temporal Spiking Neural Networks with Synaptic Delay for Graph Reasoning [91.29876772547348]
Spiking neural networks (SNNs) are investigated as biologically inspired models of neural computation.
This paper reveals that SNNs, when amalgamated with synaptic delay and temporal coding, are proficient in executing (knowledge) graph reasoning.
arXiv Detail & Related papers (2024-05-27T05:53:30Z)
- Root Cause Analysis In Microservice Using Neural Granger Causal Discovery [12.35924469567586]
We propose RUN, a novel approach for root cause analysis using neural Granger causal discovery with contrastive learning.
RUN enhances the backbone encoder by integrating contextual information from time series, and leverages a time series forecasting model to conduct neural Granger causal discovery.
In addition, RUN incorporates Pagerank with a vector to efficiently recommend the top-k root causes.
arXiv Detail & Related papers (2024-02-02T04:43:06Z)
- Harnessing Scalable Transactional Stream Processing for Managing Large Language Models [Vision] [4.553891255178496]
Large Language Models (LLMs) have demonstrated extraordinary performance across a broad array of applications.
This paper introduces TStreamLLM, a revolutionary framework integrating Transactional Stream Processing (TSP) with LLM management.
We showcase its potential through practical use cases like real-time patient monitoring and intelligent traffic management.
arXiv Detail & Related papers (2023-07-17T04:01:02Z)
- Solving Large-scale Spatial Problems with Convolutional Neural Networks [88.31876586547848]
We employ transfer learning to improve training efficiency for large-scale spatial problems.
We propose that a convolutional neural network (CNN) can be trained on small windows of signals, but evaluated on arbitrarily large signals with little to no performance degradation.
arXiv Detail & Related papers (2023-06-14T01:24:42Z)
- PARTIME: Scalable and Parallel Processing Over Time with Deep Neural Networks [68.96484488899901]
We present PARTIME, a library designed to speed up neural networks whenever data is continuously streamed over time.
PARTIME starts processing each data sample at the time in which it becomes available from the stream.
Experiments are performed in order to empirically compare PARTIME with classic non-parallel neural computations in online learning.
arXiv Detail & Related papers (2022-10-17T14:49:14Z)
- OFedQIT: Communication-Efficient Online Federated Learning via Quantization and Intermittent Transmission [7.6058140480517356]
Online federated learning (OFL) is a promising framework to collaboratively learn a sequence of non-linear functions (or models) from distributed streaming data.
We propose a communication-efficient OFL algorithm (named OFedQIT) by means of quantization and intermittent transmission.
Our analysis reveals that OFedQIT successfully addresses the drawbacks of OFedAvg while maintaining superior learning accuracy.
arXiv Detail & Related papers (2022-05-13T07:46:43Z)
- Mitigating Performance Saturation in Neural Marked Point Processes: Architectures and Loss Functions [50.674773358075015]
We propose a simple graph-based network structure called GCHP, which utilizes only graph convolutional layers.
We show that GCHP can significantly reduce training time, and that the likelihood ratio loss with interarrival time probability assumptions can greatly improve model performance.
arXiv Detail & Related papers (2021-07-07T16:59:14Z)
- Deep Neural Networks for Approximating Stream Reasoning with C-SPARQL [0.8677532138573983]
C-SPARQL is a language for continuous queries over streams of RDF data.
We investigate whether reasoning with C-SPARQL can be approximated using Recurrent Neural Networks and Convolutional Neural Networks.
arXiv Detail & Related papers (2021-06-15T21:51:47Z)
- Deep Learning for Ultra-Reliable and Low-Latency Communications in 6G Networks [84.2155885234293]
We first summarize how to apply data-driven supervised deep learning and deep reinforcement learning in URLLC.
To address these open problems, we develop a multi-level architecture that enables device intelligence, edge intelligence, and cloud intelligence for URLLC.
arXiv Detail & Related papers (2020-02-22T14:38:11Z)
- LUNAR: Cellular Automata for Drifting Data Streams [19.98517714325424]
We propose LUNAR, a streamified version of cellular automata.
It is able to act as a real incremental learner while adapting to drifting conditions.
arXiv Detail & Related papers (2020-02-06T09:10:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.