Learning Representations for Reasoning: Generalizing Across Diverse Structures
- URL: http://arxiv.org/abs/2410.13018v1
- Date: Wed, 16 Oct 2024 20:23:37 GMT
- Title: Learning Representations for Reasoning: Generalizing Across Diverse Structures
- Authors: Zhaocheng Zhu
- Abstract summary: We aim to push the boundary of reasoning models by devising algorithms that generalize across knowledge and query structures.
Our library treats structured data as first-class citizens and removes the barrier for developing algorithms on structured data.
- Score: 5.031093893882575
- Abstract: Reasoning, the ability to logically draw conclusions from existing knowledge, is a hallmark of humans. Together with perception, it constitutes one of the two major themes of artificial intelligence. While deep learning has pushed the limits of perception beyond human-level performance, progress in reasoning lags far behind. One fundamental reason is that reasoning problems usually have flexible structures for both knowledge and queries, and many existing models only perform well on structures seen during training. Here we aim to push the boundary of reasoning models by devising algorithms that generalize across knowledge and query structures, as well as systems that accelerate development on structured data. This thesis consists of three parts. In Part I, we study models that can inductively generalize to unseen knowledge graphs with new entity and relation vocabularies. For new entities, we propose a framework that learns neural operators in a dynamic programming algorithm computing path representations. For new relations, we construct a relation graph to capture the interactions between relations, thereby converting new relations into new entities. In Part II, we propose two solutions for generalizing across multi-step queries on knowledge graphs and text, respectively. For knowledge graphs, we show that multi-step queries can be solved by multiple calls of graph neural networks and fuzzy logic operations. For text, we devise an algorithm to learn explicit knowledge as textual rules to improve large language models on multi-step queries. In Part III, we propose two systems to facilitate machine learning development on structured data. Our library treats structured data as first-class citizens and removes the barrier to developing algorithms on structured data. Our node embedding system solves the GPU memory bottleneck of embedding matrices and scales to graphs with billions of nodes.
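The Part I framework is only described at a high level above. As a rough illustration, a minimal sketch of a Bellman-Ford-style dynamic program that computes path representations with learned operators might look like the following; all names and architectural choices here are illustrative assumptions, not the thesis's actual code.

```python
# A minimal sketch (not the thesis implementation) of learning neural operators
# inside a Bellman-Ford-style dynamic program over path representations.
# The learned message and aggregate steps play the roles of the semiring
# multiplication and summation in the classical shortest-path recurrence.
import torch
import torch.nn as nn

class PathRepresentationDP(nn.Module):
    def __init__(self, num_relations: int, dim: int, num_steps: int = 3):
        super().__init__()
        self.relation = nn.Embedding(num_relations, dim)  # learned edge operators
        self.update = nn.ModuleList(
            nn.Linear(2 * dim, dim) for _ in range(num_steps))
        self.num_steps = num_steps

    def forward(self, edges: torch.Tensor, rels: torch.Tensor,
                num_nodes: int, source: int) -> torch.Tensor:
        # edges: (E, 2) (head, tail) index pairs; rels: (E,) relation ids
        h = torch.zeros(num_nodes, self.relation.embedding_dim)
        h[source] = 1.0  # boundary condition: mark the source, no entity lookup
        for step in range(self.num_steps):
            msg = h[edges[:, 0]] * self.relation(rels)          # message per edge
            agg = torch.zeros_like(h).index_add_(0, edges[:, 1], msg)  # aggregate
            h = torch.relu(self.update[step](torch.cat([h, agg], dim=-1)))
        return h  # h[v] summarizes the paths from the source to v

edges = torch.tensor([[0, 1], [1, 2], [0, 2]])
rels = torch.tensor([0, 1, 1])
reps = PathRepresentationDP(num_relations=2, dim=8)(edges, rels, 3, source=0)
```

Because the source is encoded as a boundary condition rather than looked up in a trained entity table, the same parameters can in principle be applied to a graph with entities never seen during training, which is the inductive property the abstract claims.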
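For Part II, the abstract states that multi-step queries reduce to repeated graph neural network calls interleaved with fuzzy logic operations. The sketch below uses the product t-norm and a toy adjacency-based projection as stand-ins; the abstract does not prescribe these exact choices.

```python
# Minimal sketch of the fuzzy-logic side of Part II: multi-step queries are
# executed by alternating relation projections (each a GNN call in the thesis;
# a matrix product over a toy adjacency tensor here) with fuzzy set operations.
import torch

def fuzzy_and(x, y): return x * y          # conjunction of two entity sets
def fuzzy_or(x, y):  return x + y - x * y  # disjunction (inclusion-exclusion)
def fuzzy_not(x):    return 1.0 - x        # negation (set complement)

# adjacency[r][h][t] = 1 if triple (h, r, t) holds; 3 relations, 4 entities
adjacency = torch.zeros(3, 4, 4)
adjacency[0, 0, 1] = adjacency[0, 0, 2] = 1   # r0: entity 0 -> {1, 2}
adjacency[1, 3, 2] = 1                        # r1: entity 3 -> {2}

def project(entity_set: torch.Tensor, relation: int) -> torch.Tensor:
    # stand-in for one GNN call: push fuzzy membership along relation edges
    return (entity_set @ adjacency[relation]).clamp(max=1.0)

anchor_a = torch.tensor([1.0, 0, 0, 0])       # fuzzy set {entity 0}
anchor_b = torch.tensor([0, 0, 0, 1.0])       # fuzzy set {entity 3}
# query: entities reachable from anchor_a via r0 AND from anchor_b via r1
print(fuzzy_and(project(anchor_a, 0), project(anchor_b, 1)))  # -> entity 2
```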
Related papers
- Konstruktor: A Strong Baseline for Simple Knowledge Graph Question Answering [60.6042489577575]
We introduce Konstruktor, an efficient and robust approach that breaks the problem down into three steps.
Our approach integrates language models and knowledge graphs, exploiting the power of the former and the interpretability of the latter.
We show that for relation detection, the most challenging step of the workflow, a combination of relation classification/generation and ranking outperforms other methods.
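As a rough illustration of such a three-step decomposition, here is a toy, self-contained sketch; the step boundaries follow the abstract, while every data structure and heuristic below is a stand-in rather than Konstruktor's actual method.

```python
# A toy, end-to-end illustration of a three-step simple-KGQA pipeline in the
# spirit of Konstruktor; the KG, linker, and ranker below are all stand-ins.
TOY_KG = {("Douglas Adams", "author_of"): ["The Hitchhiker's Guide to the Galaxy"]}
RELATIONS = ["author_of", "born_in", "capital_of"]

def link_entity(question: str) -> str:
    # Step 1: entity linking; stand-in: longest KG entity found verbatim
    entities = {head for head, _ in TOY_KG}
    found = [e for e in entities if e.lower() in question.lower()]
    return max(found, key=len)

def detect_relation(question: str) -> str:
    # Step 2: relation detection, which the abstract calls the most challenging
    # step; stand-in: rank relations by keyword overlap with the question
    scores = {r: sum(w in question.lower() for w in r.split("_")) for r in RELATIONS}
    return max(scores, key=scores.get)

def query_kg(entity: str, relation: str) -> list[str]:
    # Step 3: read the answer off the knowledge graph
    return TOY_KG.get((entity, relation), [])

question = "What books is Douglas Adams the author of?"
print(query_kg(link_entity(question), detect_relation(question)))
```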
arXiv Detail & Related papers (2024-09-24T09:19:11Z)
- G-SAP: Graph-based Structure-Aware Prompt Learning over Heterogeneous Knowledge for Commonsense Reasoning [8.02547453169677]
We propose a novel Graph-based Structure-Aware Prompt Learning Model for commonsense reasoning, named G-SAP.
In particular, an evidence graph is constructed by integrating multiple knowledge sources, i.e., ConceptNet, Wikipedia, and the Cambridge Dictionary.
The results reveal a significant advance over existing models, notably a 6.12% improvement over the SoTA LM+GNN model on the OpenBookQA dataset.
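A minimal sketch of integrating several knowledge sources into one evidence graph, as the summary describes, might look as follows; the triples and source tags here are dummy placeholders, not G-SAP's data.

```python
# Illustrative sketch: merge triples from several knowledge sources into one
# heterogeneous evidence graph, keeping provenance on every edge so later
# stages can treat evidence from different sources differently.
import networkx as nx

sources = {
    "conceptnet": [("bird", "capable_of", "fly")],
    "wikipedia": [("bird", "is_a", "warm-blooded vertebrate")],
    "cambridge_dictionary": [("bird", "definition", "a creature with feathers")],
}

evidence_graph = nx.MultiDiGraph()
for source, triples in sources.items():
    for head, relation, tail in triples:
        evidence_graph.add_edge(head, tail, relation=relation, source=source)

print(evidence_graph.number_of_edges())  # 3 edges sharing the "bird" node
```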
arXiv Detail & Related papers (2024-05-09T08:28:12Z)
- Improving Complex Reasoning over Knowledge Graph with Logic-Aware Curriculum Tuning [89.89857766491475]
We propose a complex reasoning schema over KGs built upon large language models (LLMs).
We augment arbitrary first-order logical queries via binary tree decomposition to stimulate the reasoning capability of LLMs.
Experiments across widely used datasets demonstrate that LACT yields substantial improvements (an average +5.5% MRR) over advanced methods.
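A small sketch of what binary tree decomposition of a first-order logical query can look like; the tuple encoding of queries is our own illustrative convention, not LACT's format.

```python
# Sketch of binary tree decomposition: an n-ary logical operation is rewritten
# as nested two-way operations, so each reasoning step is a single binary op.
from functools import reduce

def binarize(query):
    """Recursively rewrite ('AND', q1, q2, ..., qn) into nested binary ANDs."""
    if isinstance(query, str):          # a leaf: anchor entity or relation atom
        return query
    op, *args = query
    args = [binarize(a) for a in args]
    if len(args) <= 2:
        return (op, *args)
    # left-deep decomposition: AND(a, b, c) -> AND(AND(a, b), c)
    return reduce(lambda left, right: (op, left, right), args)

query = ("AND", "directed_by(x, Nolan)", "won(x, Oscar)", "genre(x, SciFi)")
print(binarize(query))
# ('AND', ('AND', 'directed_by(x, Nolan)', 'won(x, Oscar)'), 'genre(x, SciFi)')
```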
arXiv Detail & Related papers (2024-05-02T18:12:08Z)
- Breaking the Curse of Dimensionality in Deep Neural Networks by Learning Invariant Representations [1.9580473532948401]
This thesis explores the theoretical foundations of deep learning by studying the relationship between the architecture of these models and the inherent structures found within the data they process.
We ask what drives the efficacy of deep learning algorithms and allows them to beat the so-called curse of dimensionality.
Our methodology takes an empirical approach to deep learning, combining experimental studies with physics-inspired toy models.
arXiv Detail & Related papers (2023-10-24T19:50:41Z)
- Joint Language Semantic and Structure Embedding for Knowledge Graph Completion [66.15933600765835]
We propose to jointly embed the semantics in the natural language description of the knowledge triplets with their structure information.
Our method embeds knowledge graphs for the completion task via fine-tuning pre-trained language models.
Our experiments on a variety of knowledge graph benchmarks have demonstrated the state-of-the-art performance of our method.
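As an illustration of jointly scoring a triplet from its verbalized description and its structure, consider the following sketch; the TransE-style distance and the linear semantic head are stand-ins for the paper's actual components, and the sentence embedding would come from the fine-tuned language model.

```python
# Illustrative sketch (not the paper's model): a triplet is scored by adding a
# structural term over entity/relation embeddings to a semantic term computed
# from an encoding of the triplet's natural-language description.
import torch
import torch.nn as nn

def verbalize(head: str, relation: str, tail: str) -> str:
    # the natural-language description consumed by the language-model side
    return f"{head} {relation.replace('_', ' ')} {tail}."

class JointScorer(nn.Module):
    def __init__(self, num_entities: int, num_relations: int, dim: int = 32):
        super().__init__()
        self.ent = nn.Embedding(num_entities, dim)   # structure side
        self.rel = nn.Embedding(num_relations, dim)
        self.text_head = nn.Linear(dim, 1)           # semantic side, fed by a PLM

    def forward(self, sentence_emb: torch.Tensor, h, r, t) -> torch.Tensor:
        # structural score: TransE-style translational distance (a stand-in)
        structure = -(self.ent(h) + self.rel(r) - self.ent(t)).norm(dim=-1)
        # semantic score: plausibility of the verbalized triplet's encoding
        semantics = self.text_head(sentence_emb).squeeze(-1)
        return structure + semantics  # both signals are fine-tuned jointly
```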
arXiv Detail & Related papers (2022-09-19T02:41:02Z)
- GreaseLM: Graph REASoning Enhanced Language Models for Question Answering [159.9645181522436]
GreaseLM is a new model that fuses encoded representations from pretrained LMs and graph neural networks over multiple layers of modality interaction operations.
We show that GreaseLM can more reliably answer questions that require reasoning over both situational constraints and structured knowledge, even outperforming models 8x larger.
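A minimal sketch of one such modality-interaction layer: a transformer block for the text stream, a stand-in linear layer for the GNN stream, and a shared bottleneck that exchanges information between the two. The dimensions and details are our assumptions, not GreaseLM's code.

```python
# One sketched modality-interaction layer: each stream keeps its own encoder
# block, and a shared bottleneck mixes a special text token with a context
# node so information flows between modalities at every layer.
import torch
import torch.nn as nn

class InteractionLayer(nn.Module):
    def __init__(self, dim: int, nhead: int = 4):  # dim divisible by nhead
        super().__init__()
        self.text_block = nn.TransformerEncoderLayer(dim, nhead, batch_first=True)
        self.graph_block = nn.Linear(dim, dim)   # stand-in for one GNN layer
        self.mix = nn.Linear(2 * dim, 2 * dim)   # the fusion bottleneck

    def forward(self, text: torch.Tensor, nodes: torch.Tensor):
        # text: (B, T, d) token states; nodes: (B, N, d) KG node states
        text = self.text_block(text)
        nodes = torch.relu(self.graph_block(nodes))
        # exchange information through one special token (text[:, 0]) and one
        # context node (nodes[:, 0]), then write the mixed states back
        fused = self.mix(torch.cat([text[:, 0], nodes[:, 0]], dim=-1))
        t0, n0 = fused.chunk(2, dim=-1)
        text = torch.cat([t0.unsqueeze(1), text[:, 1:]], dim=1)
        nodes = torch.cat([n0.unsqueeze(1), nodes[:, 1:]], dim=1)
        return text, nodes
```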
arXiv Detail & Related papers (2022-01-21T19:00:05Z)
- GraphFormers: GNN-nested Transformers for Representation Learning on Textual Graph [53.70520466556453]
We propose GraphFormers, where layerwise GNN components are nested alongside the transformer blocks of language models.
With the proposed architecture, the text encoding and the graph aggregation are fused into an iterative workflow.
In addition, a progressive learning strategy is introduced, where the model is successively trained on manipulated data and original data to reinforce its capability of integrating information on the graph.
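A rough sketch of the nesting idea: at every layer, a graph aggregation step runs over the per-node [CLS] states before the next transformer block, so text encoding and graph aggregation interleave. The mean-over-neighbors aggregation below is a simplification of the paper's GNN component.

```python
# Sketch of one GNN-nested transformer layer over a textual graph: N linked
# texts are encoded jointly, with neighbor information injected into each
# text's [CLS] state before every transformer block.
import torch
import torch.nn as nn

class GNNNestedLayer(nn.Module):
    def __init__(self, dim: int, nhead: int = 4):
        super().__init__()
        self.aggregate = nn.Linear(dim, dim)   # stand-in GNN component
        self.block = nn.TransformerEncoderLayer(dim, nhead, batch_first=True)

    def forward(self, tokens: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # tokens: (N, T, d) token states for N texts; adj: (N, N) float adjacency
        cls = tokens[:, 0]                                   # per-node summary
        neighbor = adj @ cls / adj.sum(-1, keepdim=True).clamp(min=1)
        cls = cls + torch.relu(self.aggregate(neighbor))     # graph aggregation
        tokens = torch.cat([cls.unsqueeze(1), tokens[:, 1:]], dim=1)
        return self.block(tokens)                            # text encoding
```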
arXiv Detail & Related papers (2021-05-06T12:20:41Z)
- Entity Context Graph: Learning Entity Representations from Semi-Structured Textual Sources on the Web [44.92858943475407]
We propose an approach that processes entity centric textual knowledge sources to learn entity embeddings.
We show that the embeddings learned by our approach are high quality, comparable to known knowledge graph-based embeddings, and can be used to improve them further.
arXiv Detail & Related papers (2021-03-29T20:52:14Z)
- Exploiting Contextual Information with Deep Neural Networks [5.787117733071416]
In this thesis, we show that contextual information can be exploited in two fundamentally different ways: implicitly and explicitly.
arXiv Detail & Related papers (2020-06-21T03:40:30Z)
- Exploiting Structured Knowledge in Text via Graph-Guided Representation Learning [73.0598186896953]
We present two self-supervised tasks learning over raw text with the guidance from knowledge graphs.
Building upon entity-level masked language models, our first contribution is an entity masking scheme.
In contrast to existing paradigms, our approach uses knowledge graphs implicitly, only during pre-training.
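As a toy illustration of an entity masking scheme, the sketch below masks whole entity mentions (which in the paper's setting would be located with guidance from a knowledge graph) rather than random subwords; whitespace tokenization and the helper itself are simplifications.

```python
# Toy entity masking: whole entity mentions are masked as spans, so the masked
# language model must recover entity-level knowledge, not just local subwords.
import random

def entity_mask(text: str, entities: set[str], mask_token: str = "[MASK]",
                p: float = 0.5, seed: int = 0) -> str:
    rng = random.Random(seed)
    out = text
    for entity in sorted(entities, key=len, reverse=True):
        if entity in out and rng.random() < p:
            # one mask per word keeps the sequence length roughly aligned
            out = out.replace(entity, " ".join([mask_token] * len(entity.split())))
    return out

text = "Marie Curie won the Nobel Prize in Physics."
print(entity_mask(text, {"Marie Curie", "Nobel Prize in Physics"}, p=1.0))
# [MASK] [MASK] won the [MASK] [MASK] [MASK] [MASK].
```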
arXiv Detail & Related papers (2020-04-29T14:22:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences of its use.