SPARE: A Single-Pass Neural Model for Relational Databases
- URL: http://arxiv.org/abs/2310.13581v1
- Date: Fri, 20 Oct 2023 15:23:17 GMT
- Title: SPARE: A Single-Pass Neural Model for Relational Databases
- Authors: Benjamin Hilprecht, Kristian Kersting and Carsten Binnig
- Abstract summary: We propose SPARE, a new class of neural models that can be trained efficiently on RDBs while providing accuracies similar to those of GNNs.
To enable efficient training, SPARE, unlike GNNs, makes use of the fact that data in RDBs has a regular structure, which allows these models to be trained in a single pass while exploiting symmetries at the same time.
- Score: 36.55513135391452
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While there has been extensive work on deep neural networks for images and
text, deep learning for relational databases (RDBs) is still a rather
unexplored field.
One direction that recently gained traction is to apply Graph Neural Networks
(GNNs) to RDBs. However, training GNNs on large relational databases (i.e.,
data stored in multiple database tables) is rather inefficient due to multiple
rounds of training and potentially large and inefficient representations.
Hence, in this paper we propose SPARE (Single-Pass Relational models), a new
class of neural models that can be trained efficiently on RDBs while providing
accuracies similar to those of GNNs. To enable efficient training, SPARE, unlike
GNNs, makes use of the fact that data in RDBs has a regular structure, which
allows these models to be trained in a single pass while exploiting symmetries
at the same time. Our extensive empirical evaluation demonstrates that SPARE
can significantly speed up both training and inference while offering
competitive predictive performance compared to numerous baselines.
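To illustrate the general idea, below is a minimal, hypothetical sketch (not the authors' implementation) of single-pass training over relational data: each root tuple and its joined child rows are pooled with a permutation-invariant aggregate into a fixed-size vector, so a standard feed-forward model can be trained in one pass over the data instead of repeated GNN message-passing rounds. All names (SPARELikeModel, the toy tables, dimensions) are illustrative assumptions.

```python
# Hypothetical sketch of single-pass learning on relational data (not the paper's code).
# Each root tuple (e.g., a customer) is combined with an order-invariant pooling of its
# child rows (e.g., orders), so one forward pass per root tuple suffices for training.
import torch
import torch.nn as nn

class SPARELikeModel(nn.Module):
    """Feed-forward model over a pooled representation of root + child tuples."""
    def __init__(self, root_dim: int, child_dim: int, hidden: int = 64):
        super().__init__()
        self.child_enc = nn.Sequential(nn.Linear(child_dim, hidden), nn.ReLU())
        self.head = nn.Sequential(
            nn.Linear(root_dim + hidden, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, root_feats, child_feats, child_mask):
        # child_feats: (batch, max_children, child_dim); child_mask marks real rows.
        h = self.child_enc(child_feats) * child_mask.unsqueeze(-1)
        # Masked mean pooling is permutation-invariant: reordering child rows
        # (a symmetry of relational data) does not change the prediction.
        pooled = h.sum(dim=1) / child_mask.sum(dim=1, keepdim=True).clamp(min=1)
        return self.head(torch.cat([root_feats, pooled], dim=-1))

# Usage with random data standing in for a customer table joined to an orders table.
model = SPARELikeModel(root_dim=8, child_dim=4)
roots = torch.randn(32, 8)                  # 32 customers, 8 attributes each
children = torch.randn(32, 10, 4)           # up to 10 orders per customer
mask = (torch.rand(32, 10) > 0.3).float()   # which order slots hold real rows
pred = model(roots, children, mask)         # one forward pass per root tuple
print(pred.shape)                           # torch.Size([32, 1])
```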
Related papers
- Parallel Multi-path Feed Forward Neural Networks (PMFFNN) for Long Columnar Datasets: A Novel Approach to Complexity Reduction [0.0]
We introduce a novel architecture called Parallel Multi-path Feed Forward Neural Networks (PMFFNN).
The architecture ensures that each subset of features receives focused attention, which is often neglected in traditional models.
PMFFNN outperforms traditional FFNNs and 1D CNNs, providing an optimized solution for managing large-scale data.
arXiv Detail & Related papers (2024-11-09T00:48:32Z)
- Training Better Deep Learning Models Using Human Saliency [11.295653130022156]
This work explores how human judgement about salient regions of an image can be introduced into deep convolutional neural network (DCNN) training.
We propose a new component of the loss function that ConveYs Brain Oversight to Raise Generalization (CYBORG) and penalizes the model for using non-salient regions.
arXiv Detail & Related papers (2024-10-21T16:52:44Z)
- Decouple Graph Neural Networks: Train Multiple Simple GNNs Simultaneously Instead of One [60.5818387068983]
Graph neural networks (GNNs) suffer from severe inefficiency.
We propose to decouple a multi-layer GNN into multiple simple modules for more efficient training.
We show that the proposed framework is highly efficient with reasonable performance.
arXiv Detail & Related papers (2023-04-20T07:21:32Z)
- Poster: Link between Bias, Node Sensitivity and Long-Tail Distribution in trained DNNs [12.404169549562523]
Training datasets with a long-tail distribution pose a challenge for deep neural networks (DNNs).
This work identifies the node bias that leads to varying sensitivity of the nodes to different output classes.
We support our reasoning with an empirical case study of networks trained on a real-world dataset.
arXiv Detail & Related papers (2023-03-29T10:49:31Z)
- Online Evolutionary Neural Architecture Search for Multivariate Non-Stationary Time Series Forecasting [72.89994745876086]
This work presents the Online Neuro-Evolution-based Neural Architecture Search (ONE-NAS) algorithm.
ONE-NAS is a novel neural architecture search method capable of automatically designing and dynamically training recurrent neural networks (RNNs) for online forecasting tasks.
Results demonstrate that ONE-NAS outperforms traditional statistical time series forecasting methods.
arXiv Detail & Related papers (2023-02-20T22:25:47Z)
- Recurrent Bilinear Optimization for Binary Neural Networks [58.972212365275595]
BNNs neglect the intrinsic bilinear relationship between real-valued weights and scale factors.
Our work is the first attempt to optimize BNNs from the bilinear perspective.
We obtain robust RBONNs, which show impressive performance over state-of-the-art BNNs on various models and datasets.
arXiv Detail & Related papers (2022-09-04T06:45:33Z)
- Simple Recurrent Neural Networks is all we need for clinical events predictions using EHR data [22.81278657120305]
Recurrent neural networks (RNNs) are a common architecture for EHR-based clinical event prediction models.
We used two prediction tasks: the risk of developing heart failure and the risk of early readmission for inpatient hospitalization.
We found that simple gated RNN models, including GRUs and LSTMs, often offer competitive results when properly tuned with Bayesian Optimization.
arXiv Detail & Related papers (2021-10-03T13:07:23Z)
- Shift-Robust GNNs: Overcoming the Limitations of Localized Graph Training Data [52.771780951404565]
Shift-Robust GNN (SR-GNN) is designed to account for distributional differences between biased training data and the graph's true inference distribution.
We show that SR-GNN outperforms other GNN baselines in accuracy, eliminating at least 40% of the negative effects introduced by biased training data.
arXiv Detail & Related papers (2021-08-02T18:00:38Z)
- Deep Time Delay Neural Network for Speech Enhancement with Full Data Learning [60.20150317299749]
This paper proposes a deep time delay neural network (TDNN) for speech enhancement with full data learning.
To make full use of the training data, we propose a full data learning method for speech enhancement.
arXiv Detail & Related papers (2020-11-11T06:32:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.