Neural RELAGGS
- URL: http://arxiv.org/abs/2211.02363v1
- Date: Fri, 4 Nov 2022 10:42:21 GMT
- Title: Neural RELAGGS
- Authors: Lukas Pensel and Stefan Kramer
- Abstract summary: Multi-relational databases are the basis of most consolidated data collections in science and industry today.
Propositionalization algorithms transform multi-relational databases into propositional data sets.
We propose a new neural network based algorithm in the spirit of RELAGGS that employs trainable composite aggregate functions.
- Score: 7.690774882108066
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Multi-relational databases are the basis of most consolidated data
collections in science and industry today. Most learning and mining algorithms,
however, require data to be represented in a propositional form. While there is
a variety of specialized machine learning algorithms that can operate directly
on multi-relational data sets, propositionalization algorithms transform
multi-relational databases into propositional data sets, thereby allowing the
application of traditional machine learning and data mining algorithms without
their modification. One prominent propositionalization algorithm is RELAGGS by
Krogel and Wrobel, which transforms the data by nested aggregations. We propose
a new neural network based algorithm in the spirit of RELAGGS that employs
trainable composite aggregate functions instead of the static aggregate
functions used in the original approach. In this way, we can jointly train the
propositionalization with the prediction model, or, alternatively, use the
learned aggregations as embeddings in other algorithms. We demonstrate the
increased predictive performance by comparing N-RELAGGS with RELAGGS and
multiple other state-of-the-art algorithms.
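To make the mechanism concrete, here is a minimal, hypothetical sketch of a trainable composite aggregate in PyTorch: a small MLP encodes the rows joined to one entity, and an attention-weighted pooling replaces static aggregates such as count, average, or maximum. All module and variable names are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class TrainableAggregate(nn.Module):
    """Learned replacement for a static aggregate over one related table."""
    def __init__(self, in_dim: int, emb_dim: int):
        super().__init__()
        self.row_encoder = nn.Sequential(nn.Linear(in_dim, emb_dim), nn.ReLU())
        self.attn = nn.Linear(emb_dim, 1)  # scores each encoded row

    def forward(self, rows: torch.Tensor) -> torch.Tensor:
        # rows: (n_rows, in_dim), all tuples joined to a single entity
        h = self.row_encoder(rows)              # (n_rows, emb_dim)
        w = torch.softmax(self.attn(h), dim=0)  # (n_rows, 1) attention weights
        return (w * h).sum(dim=0)               # (emb_dim,) learned aggregate

class Propositionalizer(nn.Module):
    """Aggregates each related table and feeds the concatenation to a
    predictor, so propositionalization and prediction are trained jointly."""
    def __init__(self, table_dims, emb_dim, n_classes):
        super().__init__()
        self.aggs = nn.ModuleList([TrainableAggregate(d, emb_dim) for d in table_dims])
        self.clf = nn.Linear(emb_dim * len(table_dims), n_classes)

    def forward(self, tables):
        # tables: one (n_rows_i, table_dims[i]) tensor per related table
        embs = [agg(t) for agg, t in zip(self.aggs, tables)]
        return self.clf(torch.cat(embs))
```

As in the abstract, the concatenated per-table embeddings could also be exported as a propositional feature vector and reused by other learning algorithms.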
Related papers
- Relation-aware Ensemble Learning for Knowledge Graph Embedding [68.94900786314666]
We propose to learn an ensemble by leveraging existing methods in a relation-aware manner.
Exploring these semantics with a relation-aware ensemble leads to a much larger search space than for general ensemble methods.
We propose a divide-search-combine algorithm RelEns-DSC that searches the relation-wise ensemble weights independently.
arXiv Detail & Related papers (2023-10-13T07:40:12Z)
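A hedged sketch of the relation-wise idea: because the ensemble weights are independent across relations, they can be searched one relation at a time. The random search and the 0/1 triple labels below are illustrative stand-ins, not the actual RelEns-DSC procedure.

```python
import numpy as np

def fit_relation_weights(scores, labels, n_candidates=200, seed=0):
    # scores: (n_models, n_triples) base KGE model scores for ONE relation,
    #         assumed to be calibrated to [0, 1]
    # labels: (n_triples,) 1 for true validation triples, 0 for corrupted ones
    rng = np.random.default_rng(seed)
    best_w, best_acc = None, -1.0
    for w in rng.dirichlet(np.ones(scores.shape[0]), size=n_candidates):
        pred = (w @ scores > 0.5).astype(int)  # weighted vote per triple
        acc = (pred == labels).mean()
        if acc > best_acc:
            best_w, best_acc = w, acc
    return best_w

# Divide (by relation) -> search (weights per relation) -> combine:
# ensemble_score(h, r, t) = sum_m w[r][m] * score_m(h, r, t)
```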
- Personalized Decentralized Multi-Task Learning Over Dynamic Communication Graphs [59.96266198512243]
We propose a decentralized and federated learning algorithm for tasks that are positively and negatively correlated.
Our algorithm uses gradients to calculate the correlations among tasks automatically, and dynamically adjusts the communication graph to connect mutually beneficial tasks and isolate those that may negatively impact each other.
We conduct experiments on a synthetic Gaussian dataset and a large-scale celebrity attributes (CelebA) dataset.
arXiv Detail & Related papers (2022-12-21T18:58:24Z)
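The gradient-based graph adjustment might look roughly like the following sketch, which keeps only edges between nodes whose current local gradients are positively correlated; this is an illustration under simplifying assumptions, not the paper's algorithm.

```python
import numpy as np

def update_communication_graph(grads, threshold=0.0):
    # grads: (n_nodes, dim) flattened local gradients, one row per node/task
    g = grads / (np.linalg.norm(grads, axis=1, keepdims=True) + 1e-12)
    sim = g @ g.T                          # pairwise cosine similarity of tasks
    adj = (sim > threshold).astype(float)  # keep mutually beneficial edges
    np.fill_diagonal(adj, 0.0)             # no self-loops
    return adj  # nodes then average models only with positively correlated peers
```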
- Context-Aware Ensemble Learning for Time Series [11.716677452529114]
We introduce a new approach in which a meta learner combines the base models using a superset of features, namely the union of the base models' feature vectors, rather than their predictions.
Our model does not feed the base models' predictions into a machine learning algorithm, but chooses the best possible combination at each time step based on the state of the problem.
arXiv Detail & Related papers (2022-11-30T10:36:13Z)
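A minimal sketch of such a meta learner, assuming the "state of the problem" is summarized by the union of the base models' feature vectors; the network shape and names are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ContextCombiner(nn.Module):
    """Maps the union of base-model features to per-model mixture weights."""
    def __init__(self, feat_dim: int, n_models: int):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(feat_dim, 32), nn.ReLU(),
                                  nn.Linear(32, n_models))

    def forward(self, features: torch.Tensor, base_preds: torch.Tensor):
        # features: (batch, feat_dim) union of the base models' feature vectors
        # base_preds: (batch, n_models) base model forecasts at this time step
        w = torch.softmax(self.gate(features), dim=-1)
        return (w * base_preds).sum(dim=-1)  # context-dependent combination
```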
- A Recursively Recurrent Neural Network (R2N2) Architecture for Learning Iterative Algorithms [64.3064050603721]
We generalize the Runge-Kutta neural network to a recursively recurrent neural network (R2N2) superstructure for the design of customized iterative algorithms.
We demonstrate that regular training of the weight parameters inside the proposed superstructure on input/output data of various computational problem classes yields similar iterations to Krylov solvers for linear equation systems, Newton-Krylov solvers for nonlinear equation systems, and Runge-Kutta solvers for ordinary differential equations.
arXiv Detail & Related papers (2022-11-22T16:30:33Z)
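One way to picture the superstructure is an explicit Runge-Kutta step whose coefficients are trainable parameters; the stage count, parameterization, and names below are assumptions for illustration.

```python
import torch
import torch.nn as nn

class R2N2Step(nn.Module):
    """One recurrent outer step; inner stages evaluate f like RK stages."""
    def __init__(self, n_stages: int):
        super().__init__()
        self.A = nn.Parameter(torch.zeros(n_stages, n_stages))  # stage coupling
        self.b = nn.Parameter(torch.ones(n_stages) / n_stages)  # output weights

    def forward(self, f, y, h):
        # f: right-hand side or residual, y: current iterate, h: step size
        ks = []
        for i in range(self.A.shape[0]):
            # only previously computed stages enter, as in explicit RK schemes
            yi = y + h * sum(self.A[i, j] * ks[j] for j in range(len(ks)))
            ks.append(f(yi))
        return y + h * sum(bi * ki for bi, ki in zip(self.b, ks))
```

Trained on input/output pairs from a problem class, such a step could in principle recover classical Runge-Kutta coefficients, consistent with the behavior reported above.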
- Distributed Estimation of Sparse Inverse Covariance Matrices [0.7832189413179361]
We propose a distributed sparse inverse covariance algorithm to learn the network structure in real time from data collected by distributed agents.
Our approach is built on an online graphical alternating minimization algorithm, augmented with a consensus term that allows agents to learn the desired structure cooperatively.
arXiv Detail & Related papers (2021-09-24T15:26:41Z)
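A rough sketch of the alternating local-update/consensus pattern, assuming Gaussian likelihoods and a soft-thresholding step for sparsity; the paper's exact updates differ.

```python
import numpy as np

def soft_threshold(X, lam):
    return np.sign(X) * np.maximum(np.abs(X) - lam, 0.0)

def consensus_step(thetas, adjacency, covs, lr=0.1, lam=0.05, rho=0.5):
    # thetas: list of (p, p) local precision-matrix estimates
    # adjacency: (n, n) 0/1 communication graph
    # covs: list of (p, p) local sample covariances
    new = []
    for i, theta in enumerate(thetas):
        grad = covs[i] - np.linalg.inv(theta)                # grad of -loglik
        local = soft_threshold(theta - lr * grad, lr * lam)  # sparsity step
        nbrs = [thetas[j] for j in np.nonzero(adjacency[i])[0]]
        mean_nbr = np.mean(nbrs, axis=0) if nbrs else theta
        new.append((1 - rho) * local + rho * mean_nbr)       # consensus term
    return new
```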
- Clustered Federated Learning via Generalized Total Variation Minimization [83.26141667853057]
We study optimization methods to train local (or personalized) models for local datasets with a decentralized network structure.
Our main conceptual contribution is to formulate federated learning as generalized total variation (GTV) minimization.
Our main algorithmic contribution is a fully decentralized federated learning algorithm.
arXiv Detail & Related papers (2021-05-26T18:07:19Z)
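The GTV formulation can be written as a per-node loss plus an edge-wise coupling penalty. The toy objective below (squared loss, linear per-node models) only illustrates that structure; it is not the paper's formulation verbatim.

```python
import numpy as np

def gtv_objective(W, X, y, edges, lam):
    # W: (n_nodes, dim) one linear model per node
    # X[i]: (m_i, dim) local features; y[i]: (m_i,) local targets
    # edges: iterable of (i, j) pairs in the empirical graph
    local = sum(np.mean((X[i] @ W[i] - y[i]) ** 2) for i in range(len(W)))
    tv = sum(np.linalg.norm(W[i] - W[j]) for i, j in edges)
    # small lam -> fully personalized models, large lam -> models cluster/merge
    return local + lam * tv
```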
- A Novel Surrogate-assisted Evolutionary Algorithm Applied to Partition-based Ensemble Learning [0.0]
We propose a novel surrogate-assisted algorithm for solving expensive optimization problems.
We integrate a surrogate model, which is used for fitness value estimation, into a state-of-the-art P3-like variant of the Gene-pool Optimal Mixing Evolutionary Algorithm (GOMEA).
We test the proposed algorithm on an ensemble learning problem.
arXiv Detail & Related papers (2021-04-16T11:51:18Z)
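A generic surrogate-assisted loop illustrating the division of labor: a cheap regressor screens offspring so that only promising candidates reach the expensive fitness function. This is a hypothetical stand-in, not the paper's P3-like GOMEA variant.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def surrogate_assisted_step(pop, true_fitness, archive_X, archive_y, k=2, rng=None):
    rng = rng or np.random.default_rng()
    surrogate = RandomForestRegressor().fit(archive_X, archive_y)
    children = pop + rng.normal(0.0, 0.1, pop.shape)  # toy variation operator
    est = surrogate.predict(children)                 # cheap fitness estimates
    best = np.argsort(est)[-k:]                       # screen; assumes maximization
    for i in best:                                    # expensive calls only here
        archive_X = np.vstack([archive_X, children[i]])
        archive_y = np.append(archive_y, true_fitness(children[i]))
    return children[best], archive_X, archive_y
```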
- Few-Shot Named Entity Recognition: A Comprehensive Study [92.40991050806544]
We investigate three schemes to improve the model generalization ability for few-shot settings.
We perform empirical comparisons on 10 public NER datasets with various proportions of labeled data.
We create new state-of-the-art results on both few-shot and training-free settings.
arXiv Detail & Related papers (2020-12-29T23:43:16Z)
- Learning ODE Models with Qualitative Structure Using Gaussian Processes [0.6882042556551611]
In many contexts explicit data collection is expensive and learning algorithms must be data-efficient to be feasible.
We propose an approach to learning a vector field of differential equations using sparse Gaussian Processes.
We show that this combination improves extrapolation performance and long-term behaviour significantly, while also reducing the computational cost.
arXiv Detail & Related papers (2020-11-10T19:34:07Z)
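The core recipe, in deliberately simplified form: regress the vector field dx/dt = f(x) with a Gaussian process from (state, derivative) samples and integrate the learned mean. The paper's sparse GPs and qualitative constraints are omitted in this toy version.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# toy training data from the ODE dx/dt = -x
X = np.linspace(-2.0, 2.0, 25).reshape(-1, 1)
dX = -X.ravel()
gp = GaussianProcessRegressor(kernel=RBF(), alpha=1e-4).fit(X, dX)

def integrate(x0, h=0.05, steps=100):
    xs = [float(x0)]
    for _ in range(steps):  # explicit Euler on the learned mean field
        xs.append(xs[-1] + h * gp.predict(np.array([[xs[-1]]]))[0])
    return np.array(xs)
```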
- Fast Reinforcement Learning with Incremental Gaussian Mixture Models [0.0]
An online and incremental algorithm capable of learning from a single pass through the data, the Incremental Gaussian Mixture Network (IGMN), is employed as a sample-efficient function approximator for the joint state and Q-values space.
The results are analyzed to explain the properties of the obtained algorithm, and it is observed that the IGMN function approximator brings important advantages to reinforcement learning compared with conventional neural networks trained by gradient descent.
arXiv Detail & Related papers (2020-11-02T03:18:15Z)
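A toy stand-in for the IGMN idea (the real update rules are considerably richer): maintain Gaussian components over the joint (state, action, Q) space with single-pass incremental updates, and read Q-values off by conditioning on (state, action).

```python
import numpy as np

class TinyIGMN:
    """Simplified incremental mixture; components are mean/count only."""
    def __init__(self, tau=1.5):
        self.mu, self.n, self.tau = [], [], tau

    def update(self, x):  # x = [state, action, q]; one pass, no replay
        x = np.asarray(x, dtype=float)
        if self.mu:
            d = [np.linalg.norm(x - m) for m in self.mu]
            i = int(np.argmin(d))
            if d[i] < self.tau:               # assimilate into nearest component
                self.n[i] += 1
                self.mu[i] += (x - self.mu[i]) / self.n[i]
                return
        self.mu.append(x.copy())              # otherwise spawn a new component
        self.n.append(1)

    def q_value(self, state, action):
        # crude conditioning: q coordinate of the nearest (state, action) match
        query = np.array([state, action], dtype=float)
        d = [np.linalg.norm(query - m[:2]) for m in self.mu]
        return self.mu[int(np.argmin(d))][2]
```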
- Model Fusion with Kullback--Leibler Divergence [58.20269014662046]
We propose a method to fuse posterior distributions learned from heterogeneous datasets.
Our algorithm relies on a mean field assumption for both the fused model and the individual dataset posteriors.
arXiv Detail & Related papers (2020-07-13T03:27:45Z)
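Under a mean-field Gaussian assumption, fusing independent posteriors reduces to precision-weighted averaging, as in this small sketch; the paper's variational procedure is more general.

```python
import numpy as np

def fuse_gaussians(means, variances):
    # means, variances: (n_datasets, dim) per-dataset posterior marginals
    prec = 1.0 / np.asarray(variances)                 # precisions add up
    fused_var = 1.0 / prec.sum(axis=0)
    fused_mean = fused_var * (prec * np.asarray(means)).sum(axis=0)
    return fused_mean, fused_var
```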
This list is automatically generated from the titles and abstracts of the papers on this site.