Neural-Symbolic Inference for Robust Autoregressive Graph Parsing via
Compositional Uncertainty Quantification
- URL: http://arxiv.org/abs/2301.11459v1
- Date: Thu, 26 Jan 2023 23:11:03 GMT
- Title: Neural-Symbolic Inference for Robust Autoregressive Graph Parsing via
Compositional Uncertainty Quantification
- Authors: Zi Lin, Jeremiah Liu, Jingbo Shang
- Abstract summary: We study a compositionality-aware approach to neural-symbolic inference informed by model confidence.
We empirically investigate the approach on the English Resource Grammar (ERG) parsing problem over a diverse suite of standard in-domain and seven OOD corpora.
Our approach leads to 35.26% and 35.60% error reduction in aggregated Smatch score over neural and symbolic approaches respectively, and a 14% absolute accuracy gain in key tail linguistic categories over the neural model.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Pre-trained seq2seq models excel at graph semantic parsing with rich
annotated data, but generalize worse to out-of-distribution (OOD) and long-tail
examples. In comparison, symbolic parsers under-perform on population-level
metrics, but exhibit unique strength in OOD and tail generalization. In this
work, we study a compositionality-aware approach to neural-symbolic inference
informed by model confidence, performing fine-grained neural-symbolic reasoning
at subgraph level (i.e., nodes and edges) and precisely targeting subgraph
components with high uncertainty in the neural parser. As a result, the method
combines the distinct strength of the neural and symbolic approaches in
capturing different aspects of the graph prediction, leading to well-rounded
generalization performance both across domains and in the tail. We empirically
investigate the approach in the English Resource Grammar (ERG) parsing problem
on a diverse suite of standard in-domain and seven OOD corpora. Our approach
leads to 35.26% and 35.60% error reduction in aggregated Smatch score over
neural and symbolic approaches respectively, and 14% absolute accuracy gain in
key tail linguistic categories over the neural model, outperforming prior
state-of-the-art methods that do not account for compositionality or uncertainty.
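The subgraph-level inference described above can be sketched as a simple confidence gate: keep the neural parser's prediction for each node or edge when its confidence is high, and fall back to the symbolic parser otherwise. This is a minimal illustrative sketch, not the paper's implementation; the data structures and the `threshold` parameter are assumptions.

```python
# Hypothetical sketch of confidence-gated neural-symbolic combination at
# the subgraph level (nodes and edges), not the paper's actual code.

def combine_predictions(neural, symbolic, confidence, threshold=0.9):
    """neural/symbolic: dicts mapping a subgraph component id (node id or
    edge tuple) to a predicted label. confidence: neural model confidence
    per component. Components below `threshold` fall back to the symbolic
    parser's label when one is available."""
    combined = {}
    for comp, label in neural.items():
        if confidence.get(comp, 0.0) >= threshold or comp not in symbolic:
            combined[comp] = label  # trust the neural parser
        else:
            combined[comp] = symbolic[comp]  # defer to the symbolic parser
    return combined

# Toy ERG-style example: the neural parser is unsure about node "n1".
neural = {"n1": "unknown", "n2": "_dog_n_1", ("n1", "n2"): "ARG1"}
symbolic = {"n1": "_cat_n_1", "n2": "_dog_n_1", ("n1", "n2"): "ARG1"}
conf = {"n1": 0.42, "n2": 0.99, ("n1", "n2"): 0.95}
print(combine_predictions(neural, symbolic, conf))
```

Only the low-confidence component ("n1") is replaced by the symbolic output; high-confidence components keep the neural prediction.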
Related papers
- Exact Subgraph Isomorphism Network for Predictive Graph Mining [6.926467730065948]
We propose the Exact subgraph Isomorphism Network (EIN), which combines exact subgraph enumeration, a neural network, and sparse regularization.
EIN has sufficiently high prediction performance compared with standard graph neural network models.
arXiv Detail & Related papers (2025-09-25T23:49:26Z) - SaVe-TAG: Semantic-aware Vicinal Risk Minimization for Long-Tailed Text-Attributed Graphs [16.24571541782205]
Real-world graph data often follows long-tailed distributions, making it difficult for Graph Neural Networks (GNNs) to generalize well across both head and tail classes.
Recent advances in Vicinal Risk Minimization (VRM) have shown promise in mitigating class imbalance with numeric semantics.
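The core VRM idea the snippet above refers to can be illustrated with mixup-style interpolation: training examples are synthesized in the vicinity of real ones by linearly blending features and labels. This is a generic sketch of VRM, not the SaVe-TAG method; the function name and setup are illustrative.

```python
# Generic mixup-style VRM sketch (illustrative, not from SaVe-TAG):
# synthesize a vicinal sample by interpolating two feature vectors
# and their one-hot labels with mixing weight lam in [0, 1].

def mixup(x_a, x_b, y_a, y_b, lam):
    """Return a convex combination of two (features, label) pairs."""
    x = [lam * a + (1 - lam) * b for a, b in zip(x_a, x_b)]
    y = [lam * a + (1 - lam) * b for a, b in zip(y_a, y_b)]
    return x, y

# Blend a head-class and a tail-class example 70/30.
x, y = mixup([1.0, 0.0], [0.0, 1.0], [1, 0], [0, 1], lam=0.7)
print(x, y)
```

Training on such vicinal samples smooths the decision boundary around scarce tail-class examples.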
arXiv Detail & Related papers (2024-10-22T10:36:15Z) - CombAlign: Enhancing Model Expressiveness in Unsupervised Graph Alignment [19.502687203792547]
Unsupervised graph alignment finds the node correspondence between a pair of attributed graphs by exploiting only graph structure and node features.
One category of recent studies first computes the node representation and then matches nodes with the largest embedding-based similarity.
The other category reduces the problem to optimal transport (OT) via Gromov-Wasserstein learning.
We investigate the model's discriminative power in distinguishing matched and unmatched node pairs across two graphs.
Motivated by our theoretical analysis, we put forward a hybrid approach named CombAlign with stronger expressive power.
arXiv Detail & Related papers (2024-06-19T04:57:35Z) - Graph Out-of-Distribution Generalization via Causal Intervention [69.70137479660113]
We introduce a conceptually simple yet principled approach for training robust graph neural networks (GNNs) under node-level distribution shifts.
Our method resorts to a new learning objective derived from causal inference that coordinates an environment estimator and a mixture-of-expert GNN predictor.
Our model can effectively enhance generalization with various types of distribution shifts and yield up to 27.4% accuracy improvement over state-of-the-arts on graph OOD generalization benchmarks.
arXiv Detail & Related papers (2024-02-18T07:49:22Z) - Neural Network-Based Score Estimation in Diffusion Models: Optimization
and Generalization [12.812942188697326]
Diffusion models have emerged as a powerful tool rivaling GANs in generating high-quality samples with improved fidelity, flexibility, and robustness.
A key component of these models is to learn the score function through score matching.
Despite empirical success on various tasks, it remains unclear whether gradient-based algorithms can learn the score function with a provable accuracy.
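To make the snippet above concrete: the score function a diffusion model learns is the gradient of the log-density, which a network trained by score matching approximates. A minimal sanity check for the standard normal, where the score has the closed form -x (a textbook fact, not code from the paper):

```python
import math

def log_p(x):
    """Log-density of the standard normal N(0, 1)."""
    return -0.5 * x * x - 0.5 * math.log(2 * math.pi)

def score(x):
    """Analytic score d/dx log p(x) = -x for the standard normal;
    this is the target a score-matching network would approximate."""
    return -x

# Central finite difference of log_p agrees with the analytic score.
h, x = 1e-5, 0.7
numerical = (log_p(x + h) - log_p(x - h)) / (2 * h)
print(numerical, score(x))
```

The open question the paper studies is whether gradient-based training provably recovers such a score function from samples alone.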
arXiv Detail & Related papers (2024-01-28T08:13:56Z) - Semantic Strengthening of Neuro-Symbolic Learning [85.6195120593625]
Neuro-symbolic approaches typically resort to fuzzy approximations of a probabilistic objective.
We show how to compute this efficiently for tractable circuits.
We test our approach on three tasks: predicting a minimum-cost path in Warcraft, predicting a minimum-cost perfect matching, and solving Sudoku puzzles.
arXiv Detail & Related papers (2023-02-28T00:04:22Z) - Joint Edge-Model Sparse Learning is Provably Efficient for Graph Neural
Networks [89.28881869440433]
This paper provides the first theoretical characterization of joint edge-model sparse learning for graph neural networks (GNNs).
It proves analytically that both sampling important nodes and pruning the lowest-magnitude neurons can reduce the sample complexity and improve convergence without compromising the test accuracy.
arXiv Detail & Related papers (2023-02-06T16:54:20Z) - Predicting the generalization gap in neural networks using topological
data analysis [33.511371257571504]
We study the generalization gap of neural networks using methods from topological data analysis.
We compute homological persistence diagrams of weighted graphs constructed from neuron activation correlations after a training phase.
We compare the usefulness of different numerical summaries from persistence diagrams and show that a combination of some of them can accurately predict and partially explain the generalization gap without the need of a test set.
arXiv Detail & Related papers (2022-03-23T11:15:36Z) - Bayesian Graph Contrastive Learning [55.36652660268726]
We propose a novel perspective on graph contrastive learning methods, showing that random augmentations lead to stochastic encoders.
Our proposed method represents each node by a distribution in the latent space in contrast to existing techniques which embed each node to a deterministic vector.
We show a considerable improvement in performance compared to existing state-of-the-art methods on several benchmark datasets.
arXiv Detail & Related papers (2021-12-15T01:45:32Z) - Does BERT look at sentiment lexicon? [0.0]
We study the attention weights matrices of the Russian-language RuBERT model.
We fine-tune RuBERT on sentiment text corpora and compare the distributions of attention weights for sentiment and neutral lexicons.
arXiv Detail & Related papers (2021-11-19T08:50:48Z) - Correlation Analysis between the Robustness of Sparse Neural Networks
and their Random Hidden Structural Priors [0.0]
We aim to investigate any existing correlations between graph theoretic properties and the robustness of Sparse Neural Networks.
Our hypothesis is that graph theoretic properties, as a prior of neural network structures, are related to their robustness.
arXiv Detail & Related papers (2021-07-13T15:13:39Z) - Maximum Spanning Trees Are Invariant to Temperature Scaling in
Graph-based Dependency Parsing [0.0]
Modern graph-based syntactic dependency parsers operate by predicting, for each token within a sentence, a probability distribution over its possible syntactic heads.
We prove that temperature scaling, a popular technique for post-hoc calibration of neural networks, cannot change the output of the parsing procedure.
We conclude that other techniques are needed to tackle miscalibration in graph-based dependency parsers in a way that improves accuracy.
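The invariance result above can be demonstrated with a toy sketch: temperature scaling divides logits by a positive constant before the softmax, which never changes the per-token ranking of head scores. This sketch uses greedy per-token head selection rather than full maximum-spanning-tree decoding (the paper proves the result for MST); the setup is illustrative.

```python
import math

def softmax(logits, temperature=1.0):
    """Softmax over head logits with post-hoc temperature scaling."""
    z = [l / temperature for l in logits]
    m = max(z)  # subtract max for numerical stability
    exps = [math.exp(v - m) for v in z]
    s = sum(exps)
    return [e / s for e in exps]

def greedy_heads(logit_rows, temperature=1.0):
    """Pick the most probable head for each token. A simplification of
    MST decoding, but it shows the same invariance: dividing logits by a
    positive temperature is monotone, so the argmax never changes."""
    return [max(range(len(row)), key=lambda j: softmax(row, temperature)[j])
            for row in logit_rows]

# Two tokens, three candidate heads each: the decoded heads are
# identical at any positive temperature.
logits = [[2.0, -1.0, 0.5], [0.1, 3.0, -2.0]]
print(greedy_heads(logits, 1.0), greedy_heads(logits, 10.0))
```

Because the decoded tree is unchanged, temperature scaling can fix probability calibration but cannot fix parsing accuracy, which is the paper's point.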
arXiv Detail & Related papers (2021-06-15T13:57:24Z) - Learning compositional structures for semantic graph parsing [81.41592892863979]
We show how AM dependency parsing can be trained directly on a neural latent-variable model.
Our model picks up on several linguistic phenomena on its own and achieves comparable accuracy to supervised training.
arXiv Detail & Related papers (2021-06-08T14:20:07Z) - Closed Loop Neural-Symbolic Learning via Integrating Neural Perception,
Grammar Parsing, and Symbolic Reasoning [134.77207192945053]
Prior methods learn the neural-symbolic models using reinforcement learning approaches.
We introduce the grammar model as a symbolic prior to bridge neural perception and symbolic reasoning.
We propose a novel back-search algorithm which mimics the top-down human-like learning procedure to propagate the error.
arXiv Detail & Related papers (2020-06-11T17:42:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.