Social Commonsense Reasoning with Multi-Head Knowledge Attention
- URL: http://arxiv.org/abs/2010.05587v1
- Date: Mon, 12 Oct 2020 10:24:40 GMT
- Title: Social Commonsense Reasoning with Multi-Head Knowledge Attention
- Authors: Debjit Paul and Anette Frank
- Abstract summary: Social Commonsense Reasoning requires understanding of text, knowledge about social events and their pragmatic implications, as well as commonsense reasoning skills.
We propose a novel multi-head knowledge attention model that encodes semi-structured commonsense inference rules and learns to incorporate them in a transformer-based reasoning cell.
- Score: 24.70946979449572
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Social Commonsense Reasoning requires understanding of text, knowledge about
social events and their pragmatic implications, as well as commonsense
reasoning skills. In this work we propose a novel multi-head knowledge
attention model that encodes semi-structured commonsense inference rules and
learns to incorporate them in a transformer-based reasoning cell. We assess the
model's performance on two tasks that require different reasoning skills:
Abductive Natural Language Inference and Counterfactual Invariance Prediction
as a new task. We show that our proposed model improves performance over strong
state-of-the-art models (i.e., RoBERTa) across both reasoning tasks. Notably, we
are, to the best of our knowledge, the first to demonstrate that a model that
learns to perform counterfactual reasoning helps predict the best
explanation in an abductive reasoning task. We validate the robustness of the
model's reasoning capabilities by perturbing the knowledge and provide
qualitative analysis on the model's knowledge incorporation capabilities.
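The core mechanism described above (queries drawn from the transformer-based reasoning cell attending, per head, over encoded knowledge rules) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the function name, shapes, and the omission of learned query/key/value projection matrices are simplifying assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_knowledge_attention(text_repr, knowledge, n_heads=4):
    """Attend from text token representations (queries) over encoded
    commonsense inference rules (keys/values), one attention map per head.

    text_repr: (t, d) token representations from the reasoning cell
    knowledge: (k, d) encoded semi-structured inference rules
    """
    t, d = text_repr.shape
    assert d % n_heads == 0
    dh = d // n_heads
    # Split the model dimension into heads: (heads, seq, dh).
    # A real model would first apply learned W_q, W_k, W_v projections.
    q = text_repr.reshape(t, n_heads, dh).transpose(1, 0, 2)
    kv = knowledge.reshape(knowledge.shape[0], n_heads, dh).transpose(1, 0, 2)
    # Scaled dot-product attention per head: (heads, t, k)
    scores = q @ kv.transpose(0, 2, 1) / np.sqrt(dh)
    weights = softmax(scores, axis=-1)
    fused = weights @ kv                # (heads, t, dh)
    # Merge heads back into the model dimension: (t, d)
    return fused.transpose(1, 0, 2).reshape(t, d)

rng = np.random.default_rng(0)
text = rng.normal(size=(5, 8))    # 5 tokens, model dim 8
rules = rng.normal(size=(3, 8))   # 3 encoded inference rules
out = multi_head_knowledge_attention(text, rules, n_heads=2)
print(out.shape)  # (5, 8)
```

In the paper's setting the fused output would feed back into the transformer reasoning cell; here it is simply returned so the shapes are easy to follow.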
Related papers
- Conceptual and Unbiased Reasoning in Language Models [98.90677711523645]
We propose a novel conceptualization framework that forces models to perform conceptual reasoning on abstract questions.
We show that existing large language models fall short on conceptual reasoning, dropping 9% to 28% on various benchmarks.
We then discuss how models can improve since high-level abstract reasoning is key to unbiased and generalizable decision-making.
arXiv Detail & Related papers (2024-03-30T00:53:53Z)
- UNcommonsense Reasoning: Abductive Reasoning about Uncommon Situations [62.71847873326847]
We investigate the ability to model unusual, unexpected, and unlikely situations.
Given a piece of context with an unexpected outcome, this task requires reasoning abductively to generate an explanation.
We release a new English language corpus called UNcommonsense.
arXiv Detail & Related papers (2023-11-14T19:00:55Z)
- Crystal: Introspective Reasoners Reinforced with Self-Feedback [118.53428015478957]
We propose a novel method to develop an introspective commonsense reasoner, Crystal.
To tackle commonsense problems, it first introspects for knowledge statements related to the given question, and subsequently makes an informed prediction that is grounded in the previously introspected knowledge.
Experiments show that Crystal significantly outperforms both the standard supervised finetuning and chain-of-thought distilled methods, and enhances the transparency of the commonsense reasoning process.
arXiv Detail & Related papers (2023-10-07T21:23:58Z)
- CommonsenseVIS: Visualizing and Understanding Commonsense Reasoning Capabilities of Natural Language Models [30.63276809199399]
We present CommonsenseVIS, a visual explanatory system that utilizes external commonsense knowledge bases to contextualize model behavior for commonsense question-answering.
Our system features multi-level visualization and interactive model probing and editing for different concepts and their underlying relations.
arXiv Detail & Related papers (2023-07-23T17:16:13Z)
- elBERto: Self-supervised Commonsense Learning for Question Answering [131.51059870970616]
We propose a Self-supervised Bidirectional Representation Learning of Commonsense framework, which is compatible with off-the-shelf QA model architectures.
The framework comprises five self-supervised tasks to force the model to fully exploit the additional training signals from contexts containing rich commonsense.
elBERto achieves substantial improvements on out-of-paragraph and no-effect questions where simple lexical similarity comparison does not help.
arXiv Detail & Related papers (2022-03-17T16:23:45Z)
- Causality in Neural Networks -- An Extended Abstract [0.0]
Causal reasoning is the main learning and explanation tool used by humans.
Introducing the ideas of causality to machine learning helps in providing better learning and explainable models.
arXiv Detail & Related papers (2021-06-03T09:52:36Z)
- AR-LSAT: Investigating Analytical Reasoning of Text [57.1542673852013]
We study the challenge of analytical reasoning of text and introduce a new dataset consisting of questions from the Law School Admission Test from 1991 to 2016.
We analyze what knowledge understanding and reasoning abilities are required to do well on this task.
arXiv Detail & Related papers (2021-04-14T02:53:32Z)
- Towards Interpretable Reasoning over Paragraph Effects in Situation [126.65672196760345]
We focus on the task of reasoning over paragraph effects in situations, which requires a model to understand cause and effect.
We propose a sequential approach for this task which explicitly models each step of the reasoning process with neural network modules.
In particular, five reasoning modules are designed and learned in an end-to-end manner, which leads to a more interpretable model.
arXiv Detail & Related papers (2020-10-03T04:03:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.