Complex Reasoning over Logical Queries on Commonsense Knowledge Graphs
- URL: http://arxiv.org/abs/2403.07398v2
- Date: Sat, 22 Jun 2024 17:32:05 GMT
- Title: Complex Reasoning over Logical Queries on Commonsense Knowledge Graphs
- Authors: Tianqing Fang, Zeming Chen, Yangqiu Song, Antoine Bosselut
- Abstract summary: We present COM2 (COMplex COMmonsense), a new dataset created by sampling multi-hop logical queries from a commonsense knowledge graph.
We verbalize them using handcrafted rules and large language models into multiple-choice and text generation questions.
Experiments show that language models trained on COM2 exhibit significant improvements in complex reasoning ability.
- Score: 61.796960984541464
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Event commonsense reasoning requires the ability to reason about the relationship between events, as well as infer implicit context underlying that relationship. However, data scarcity makes it challenging for language models to learn to generate commonsense inferences for contexts and questions involving interactions between complex events. To address this demand, we present COM2 (COMplex COMmonsense), a new dataset created by sampling multi-hop logical queries (e.g., the joint effect or cause of both event A and B, or the effect of the effect of event C) from an existing commonsense knowledge graph (CSKG), and verbalizing them using handcrafted rules and large language models into multiple-choice and text generation questions. Our experiments show that language models trained on COM2 exhibit significant improvements in complex reasoning ability, resulting in enhanced zero-shot performance in both in-domain and out-of-domain tasks for question answering and generative commonsense reasoning, without expensive human annotations. Code and data are available at https://github.com/tqfang/complex-commonsense-reasoning.
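The dataset construction described above (sampling a multi-hop logical query such as "the effect of the effect of event C" from a CSKG, then verbalizing it) can be illustrated with a minimal sketch. The toy graph, relation names, and helper function below are illustrative assumptions, not the COM2 code:

```python
import random

# Toy commonsense knowledge graph: (head_event, relation, tail_event) triples.
# Events and relations here are placeholders, not drawn from the COM2 dataset.
TRIPLES = [
    ("X studies all night", "xEffect", "X feels tired"),
    ("X feels tired", "xEffect", "X takes a nap"),
    ("X drinks coffee", "xEffect", "X feels alert"),
    ("X misses the bus", "xEffect", "X is late for work"),
    ("X is late for work", "xEffect", "X apologizes to the boss"),
]

def sample_2p_query(triples, rng=random):
    """Sample a 2-hop ('2p') logical query: the effect of the effect of an event."""
    by_head = {}
    for h, r, t in triples:
        by_head.setdefault(h, []).append((r, t))
    # Keep only first hops whose tail also has outgoing edges, so a second hop exists.
    candidates = [(h, r, t) for h, r, t in triples if t in by_head]
    h, r1, mid = rng.choice(candidates)
    r2, answer = rng.choice(by_head[mid])
    # Rule-based verbalization into a question (the paper also uses LLMs for this step).
    query = f"What is a likely {r2} of the {r1} of '{h}'?"
    return query, answer

random.seed(0)
question, answer = sample_2p_query(TRIPLES)
print(question)
print(answer)
```

The answer to the verbalized question is the tail entity reached after two hops; in the multiple-choice setting, distractors would be sampled from unrelated graph regions.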
Related papers
- Advancing Event Causality Identification via Heuristic Semantic Dependency Inquiry Network [11.726799701525131]
Event Causality Identification (ECI) focuses on extracting causal relations between events in texts.
We propose SemDI - a simple and effective Semantic Dependency Inquiry Network for ECI.
arXiv Detail & Related papers (2024-09-20T16:32:54Z) - Leveraging Inter-Chunk Interactions for Enhanced Retrieval in Large Language Model-Based Question Answering [12.60063463163226]
IIER captures the internal connections between document chunks by considering three types of interactions: structural, keyword, and semantic.
It identifies multiple seed nodes based on the target question and iteratively searches for relevant chunks to gather supporting evidence.
It refines the context and reasoning chain, aiding the large language model in reasoning and answer generation.
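The seed-then-expand retrieval loop summarized above can be sketched schematically. This is a toy illustration of graph-based iterative chunk retrieval, not the IIER implementation; the keyword scorer and the collapsed edge graph are assumptions:

```python
# Schematic sketch: pick seed chunks by relevance to the question, then expand
# along inter-chunk edges (structural/keyword/semantic links collapsed into one graph).

def keyword_score(question, chunk):
    """Crude relevance: fraction of question words that also appear in the chunk."""
    q_words = set(question.lower().split())
    c_words = set(chunk.lower().split())
    return len(q_words & c_words) / max(len(q_words), 1)

def retrieve(question, chunks, edges, n_seeds=1, hops=2):
    """Select seed chunks, then iteratively gather neighbors as supporting evidence."""
    ranked = sorted(range(len(chunks)),
                    key=lambda i: keyword_score(question, chunks[i]),
                    reverse=True)
    selected = set(ranked[:n_seeds])
    frontier = set(selected)
    for _ in range(hops):
        nxt = {j for i in frontier for j in edges.get(i, []) if j not in selected}
        selected |= nxt
        frontier = nxt
    return [chunks[i] for i in sorted(selected)]

chunks = [
    "The Eiffel Tower is in Paris.",
    "Paris is the capital of France.",
    "France borders Spain and Germany.",
]
edges = {0: [1], 1: [0, 2], 2: [1]}  # chunks sharing entities are linked
print(retrieve("Where is the Eiffel Tower?", chunks, edges, hops=1))
```

The gathered chunks form the refined context handed to the language model for answer generation; the real system scores edges by type rather than treating them uniformly.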
arXiv Detail & Related papers (2024-08-06T02:39:55Z) - Retrieval-Augmented Language Model for Extreme Multi-Label Knowledge Graph Link Prediction [2.6749568255705656]
Extrapolation in large language models (LLMs) for open-ended inquiry encounters two pivotal issues.
Existing works attempt to tackle the problem by augmenting the input of a smaller language model with information from a knowledge graph.
We propose a new task, the extreme multi-label KG link prediction task, to enable a model to perform extrapolation with multiple responses.
arXiv Detail & Related papers (2024-05-21T10:10:56Z) - Asking and Answering Questions to Extract Event-Argument Structures [7.997025284201876]
This paper presents a question-answering approach to extract document-level event-argument structures.
We automatically ask and answer questions for each argument type an event may have.
We use a simple span-swapping technique, coreference resolution, and large language models to augment the training instances.
arXiv Detail & Related papers (2024-04-25T08:43:06Z) - UniKGQA: Unified Retrieval and Reasoning for Solving Multi-hop Question Answering Over Knowledge Graph [89.98762327725112]
Multi-hop Question Answering over Knowledge Graph (KGQA) aims to find the answer entities that are multiple hops away from the topic entities mentioned in a natural language question.
We propose UniKGQA, a novel approach for multi-hop KGQA task, by unifying retrieval and reasoning in both model architecture and parameter learning.
arXiv Detail & Related papers (2022-12-02T04:08:09Z) - Cross-Modal Causal Relational Reasoning for Event-Level Visual Question Answering [134.91774666260338]
Existing visual question answering methods often suffer from cross-modal spurious correlations and oversimplified event-level reasoning processes.
We propose a framework for cross-modal causal relational reasoning to address the task of event-level visual question answering.
arXiv Detail & Related papers (2022-07-26T04:25:54Z) - EA$^2$E: Improving Consistency with Event Awareness for Document-Level Argument Extraction [52.43978926985928]
We introduce the Event-Aware Argument Extraction (EA$^2$E) model with augmented context for training and inference.
Experiment results on WIKIEVENTS and ACE2005 datasets demonstrate the effectiveness of EA$2$E.
arXiv Detail & Related papers (2022-05-30T04:33:51Z) - elBERto: Self-supervised Commonsense Learning for Question Answering [131.51059870970616]
We propose a Self-supervised Bidirectional Representation Learning of Commonsense framework, which is compatible with off-the-shelf QA model architectures.
The framework comprises five self-supervised tasks to force the model to fully exploit the additional training signals from contexts containing rich commonsense.
elBERto achieves substantial improvements on out-of-paragraph and no-effect questions where simple lexical similarity comparison does not help.
arXiv Detail & Related papers (2022-03-17T16:23:45Z) - CIDER: Commonsense Inference for Dialogue Explanation and Reasoning [31.354769524093125]
CIDER -- a manually curated dataset -- contains dyadic dialogue explanations in the form of implicit and explicit knowledge triplets inferred using commonsense inference.
We set up three different tasks conditioned on the dataset: Dialogue-level Natural Language Inference, Span Extraction, and Multi-choice Span Selection.
Results obtained with transformer-based models reveal that the tasks are difficult, paving the way for promising future research.
arXiv Detail & Related papers (2021-06-01T14:14:46Z) - GATE: Graph Attention Transformer Encoder for Cross-lingual Relation and Event Extraction [107.8262586956778]
We introduce graph convolutional networks (GCNs) with universal dependency parses to learn language-agnostic sentence representations.
GCNs struggle to model words that have long-range dependencies or are not directly connected in the dependency tree.
We propose to utilize the self-attention mechanism to learn the dependencies between words with different syntactic distances.
arXiv Detail & Related papers (2020-10-06T20:30:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.