Relate and Predict: Structure-Aware Prediction with Jointly Optimized
Neural DAG
- URL: http://arxiv.org/abs/2103.02405v1
- Date: Wed, 3 Mar 2021 13:55:12 GMT
- Title: Relate and Predict: Structure-Aware Prediction with Jointly Optimized
Neural DAG
- Authors: Arshdeep Sekhon, Zhe Wang, Yanjun Qi
- Abstract summary: We propose a deep neural network framework, dGAP, to learn a neural dependency Graph and optimize structure-Aware target Prediction.
dGAP trains towards a structure self-supervision loss and a target prediction loss jointly.
We empirically evaluate dGAP on multiple simulated and real datasets.
- Score: 13.636680313054631
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Understanding relationships between feature variables is one important way
humans use to make decisions. However, state-of-the-art deep learning studies
either focus on task-agnostic statistical dependency learning or do not model
explicit feature dependencies during prediction. We propose a deep neural
network framework, dGAP, to learn a neural dependency Graph and optimize
structure-Aware target Prediction simultaneously. dGAP trains towards a
structure self-supervision loss and a target prediction loss jointly. Our
method leads to an interpretable model that can disentangle sparse feature
relationships, informing the user how relevant dependencies impact the target
task. We empirically evaluate dGAP on multiple simulated and real datasets.
dGAP is not only more accurate, but can also recover the correct dependency
structure.
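The joint objective described in the abstract can be illustrated with a toy numpy sketch. Everything below is an assumed simplification for illustration: dGAP's actual modules are neural networks, and the soft adjacency matrix `A`, linear predictor `w`, and trade-off weight `0.1` are all hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_features = 8, 4
X = rng.normal(size=(n_samples, n_features))
y = rng.normal(size=n_samples)

# Hypothetical parameters: a soft dependency graph A between features
# and a linear predictor w.
A = 1.0 / (1.0 + np.exp(-rng.normal(size=(n_features, n_features))))
np.fill_diagonal(A, 0.0)  # a feature should not explain itself
w = rng.normal(size=n_features)

# Structure self-supervision loss: reconstruct each feature from the
# others through the learned graph.
structure_loss = np.mean((X @ A - X) ** 2)

# Target prediction loss on a structure-aware input (features
# reweighted by how relevant the graph says they are).
y_pred = (X * A.mean(axis=0)) @ w
prediction_loss = np.mean((y_pred - y) ** 2)

# The two losses are optimized jointly.
joint_loss = prediction_loss + 0.1 * structure_loss
```

In the actual framework both the graph and the predictor would be trained by backpropagating through this combined loss; a sparse learned `A` is what makes the model's feature relationships interpretable.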
Related papers
- Learning Latent Graph Structures and their Uncertainty [63.95971478893842]
Graph Neural Networks (GNNs) use relational information as an inductive bias to enhance the model's accuracy.
As task-relevant relations might be unknown, graph structure learning approaches have been proposed to learn them while solving the downstream prediction task.
arXiv Detail & Related papers (2024-05-30T10:49:22Z)
- IGANN Sparse: Bridging Sparsity and Interpretability with Non-linear Insight [4.010646933005848]
IGANN Sparse is a novel machine learning model from the family of generalized additive models.
It promotes sparsity through a non-linear feature selection process during training.
This ensures interpretability through improved model sparsity without sacrificing predictive performance.
arXiv Detail & Related papers (2024-03-17T22:44:36Z) - Adaptive Dependency Learning Graph Neural Networks [5.653058780958551]
We propose a hybrid approach combining neural networks and statistical structure learning models to self-learn dependencies.
We demonstrate significantly improved performance using our proposed approach on real-world benchmark datasets without a pre-defined dependency graph.
arXiv Detail & Related papers (2023-12-06T20:56:23Z) - RDGCN: Reinforced Dependency Graph Convolutional Network for
Aspect-based Sentiment Analysis [43.715099882489376]
We propose a new reinforced dependency graph convolutional network (RDGCN) that improves the importance calculation of dependencies in both distance and type views.
Under the criterion, we design a distance-importance function that leverages reinforcement learning for weight distribution search and dissimilarity control.
Comprehensive experiments on three popular datasets demonstrate the effectiveness of the criterion and importance functions.
arXiv Detail & Related papers (2023-11-08T05:37:49Z) - Revisiting Link Prediction: A Data Perspective [59.296773787387224]
Link prediction, a fundamental task on graphs, has proven indispensable in various applications, e.g., friend recommendation, protein analysis, and drug interaction prediction.
Evidence in existing literature underscores the absence of a universally best algorithm suitable for all datasets.
We recognize three fundamental factors critical to link prediction: local structural proximity, global structural proximity, and feature proximity.
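The three factors can be made concrete on a toy graph. The heuristics below (common-neighbor counts for local proximity, the Katz index for global proximity, cosine similarity for feature proximity) are common choices assumed for illustration; the paper's exact formulations may differ.

```python
import numpy as np

# Toy undirected graph: a 4-node path 0-1-2-3.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

# Local structural proximity: common-neighbor counts.
local = A @ A  # local[i, j] counts length-2 paths between i and j

# Global structural proximity: Katz index, a decayed sum over all walks.
beta = 0.1  # decay; must keep beta * spectral_radius(A) < 1
katz = np.linalg.inv(np.eye(4) - beta * A) - np.eye(4)

# Feature proximity: cosine similarity of node feature vectors.
F = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
Fn = F / np.linalg.norm(F, axis=1, keepdims=True)
feat = Fn @ Fn.T
```

A link predictor that scores candidate edges with all three signals can adapt to datasets where any one of them dominates, which is the paper's motivation for treating them as separate factors.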
arXiv Detail & Related papers (2023-10-01T21:09:59Z) - Graph-enabled Reinforcement Learning for Time Series Forecasting with
Adaptive Intelligence [11.249626785206003]
We propose a novel approach for predicting time-series data using graph neural networks (GNNs) and monitoring with Reinforcement Learning (RL).
GNNs are able to explicitly incorporate the graph structure of the data into the model, allowing them to capture temporal dependencies in a more natural way.
This approach allows for more accurate predictions in complex temporal structures, such as those found in healthcare, traffic and weather forecasting.
arXiv Detail & Related papers (2023-09-18T22:25:12Z) - GIF: A General Graph Unlearning Strategy via Influence Function [63.52038638220563]
Graph Influence Function (GIF) is a model-agnostic unlearning method that can efficiently and accurately estimate parameter changes in response to a $\epsilon$-mass perturbation in deleted data.
We conduct extensive experiments on four representative GNN models and three benchmark datasets to justify GIF's superiority in terms of unlearning efficacy, model utility, and unlearning efficiency.
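The core influence-function idea behind such unlearning methods can be sketched on a ridge-regression stand-in, where everything is in closed form. This is not GIF itself (which targets GNNs and graph-structured deletions); it only illustrates the "estimate the parameter change instead of retraining" mechanism.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(50, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.01 * rng.normal(size=50)

# Fit ridge regression on all data.
lam = 1e-2
theta = np.linalg.solve(X.T @ X + lam * np.eye(3), X.T @ y)

# Influence-function estimate of the parameter change when the first
# 5 samples are deleted: delta ≈ H^{-1} @ (sum of their loss gradients).
Xd, yd = X[:5], y[:5]
H = X.T @ X + lam * np.eye(3)             # Hessian of the ridge loss
grad_deleted = Xd.T @ (Xd @ theta - yd)   # gradient of the deleted points
theta_unlearned = theta + np.linalg.solve(H, grad_deleted)

# Ground truth: retrain from scratch on the remaining data.
theta_retrain = np.linalg.solve(
    X[5:].T @ X[5:] + lam * np.eye(3), X[5:].T @ y[5:])
```

The estimate costs one Hessian solve rather than a full retrain, which is where the efficiency claim comes from; GIF's contribution is making this estimate accurate when deletions also remove edges and neighborhoods in a graph.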
arXiv Detail & Related papers (2023-04-06T03:02:54Z) - Amortized Inference for Causal Structure Learning [72.84105256353801]
Learning causal structure poses a search problem that typically involves evaluating structures using a score or independence test.
We train a variational inference model to predict the causal structure from observational/interventional data.
Our models exhibit robust generalization capabilities under substantial distribution shift.
arXiv Detail & Related papers (2022-05-25T17:37:08Z) - ARM-Net: Adaptive Relation Modeling Network for Structured Data [29.94433633729326]
ARM-Net is an adaptive relation modeling network tailored for structured data, accompanied by ARMOR, a lightweight framework for relational data built on ARM-Net.
We show that ARM-Net consistently outperforms existing models and provides more interpretable predictions for datasets.
arXiv Detail & Related papers (2021-07-05T07:37:24Z) - Connecting the Dots: Multivariate Time Series Forecasting with Graph
Neural Networks [91.65637773358347]
We propose a general graph neural network framework designed specifically for multivariate time series data.
Our approach automatically extracts the uni-directed relations among variables through a graph learning module.
Our proposed model outperforms the state-of-the-art baseline methods on 3 of 4 benchmark datasets.
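Extracting uni-directed relations from data can be sketched with an antisymmetric scoring of learnable node embeddings followed by a ReLU, so at most one direction of each candidate edge survives. The parametrization below (two embedding matrices, a saturation constant `alpha`, top-k sparsification) is an assumed form for illustration, not necessarily the paper's exact module.

```python
import numpy as np

def learn_unidirected_adjacency(E1, E2, alpha=3.0, k=2):
    """Derive a sparse, uni-directed adjacency from node embeddings."""
    # Antisymmetric scores: scores[i, j] == -scores[j, i].
    scores = np.tanh(alpha * (E1 @ E2.T - E2 @ E1.T))
    # ReLU keeps only one direction of each potential edge.
    A = np.maximum(scores, 0.0)
    # Sparsify: keep only the top-k strongest neighbors per node.
    for i in range(A.shape[0]):
        weakest = np.argsort(A[i])[:-k]
        A[i, weakest] = 0.0
    return A

rng = np.random.default_rng(1)
E1 = rng.normal(size=(5, 3))  # hypothetical learned embeddings
E2 = rng.normal(size=(5, 3))
A = learn_unidirected_adjacency(E1, E2)
```

In training, the embeddings would be optimized end-to-end with the forecasting loss, so the extracted graph reflects dependencies useful for prediction.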
arXiv Detail & Related papers (2020-05-24T04:02:18Z) - Learning What Makes a Difference from Counterfactual Examples and
Gradient Supervision [57.14468881854616]
We propose an auxiliary training objective that improves the generalization capabilities of neural networks.
We use pairs of minimally-different examples with different labels, a.k.a. counterfactual or contrasting examples, which provide a signal indicative of the underlying causal structure of the task.
Models trained with this technique demonstrate improved performance on out-of-distribution test sets.
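One way such a counterfactual pair can supervise gradients is by encouraging the model's input gradient to align with the minimal edit that flips the label. The linear scorer and cosine-alignment loss below are an assumed simplification, not the paper's exact objective.

```python
import numpy as np

def gradient_supervision_loss(w, x, x_cf):
    """Penalize misalignment between the input gradient of a linear
    scorer f(x) = w @ x and the direction toward the counterfactual."""
    direction = x_cf - x  # minimal edit that changes the label
    grad = w              # input gradient of a linear model is w itself
    cos = grad @ direction / (
        np.linalg.norm(grad) * np.linalg.norm(direction))
    return 1.0 - cos      # 0 when the gradient points along the edit

x = np.array([1.0, 0.0])
x_cf = np.array([1.0, 1.0])  # counterfactual differs in one feature
aligned = gradient_supervision_loss(np.array([0.0, 2.0]), x, x_cf)
misaligned = gradient_supervision_loss(np.array([2.0, 0.0]), x, x_cf)
```

Added as an auxiliary term next to the usual task loss, this pushes the model to rely on the feature that actually causes the label change, which is the intuition behind the improved out-of-distribution performance.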
arXiv Detail & Related papers (2020-04-20T02:47:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.