ConstraintChecker: A Plugin for Large Language Models to Reason on
Commonsense Knowledge Bases
- URL: http://arxiv.org/abs/2401.14003v1
- Date: Thu, 25 Jan 2024 08:03:38 GMT
- Title: ConstraintChecker: A Plugin for Large Language Models to Reason on
Commonsense Knowledge Bases
- Authors: Quyet V. Do, Tianqing Fang, Shizhe Diao, Zhaowei Wang, Yangqiu Song
- Abstract summary: Reasoning over Commonsense Knowledge Bases (CSKB) has been explored as a way to acquire new commonsense knowledge.
We propose **ConstraintChecker**, a plugin over prompting techniques to provide and check explicit constraints.
- Score: 53.29427395419317
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Reasoning over Commonsense Knowledge Bases (CSKB), i.e. CSKB reasoning, has
been explored as a way to acquire new commonsense knowledge based on reference
knowledge in the original CSKBs and external prior knowledge. Despite the
advancement of Large Language Models (LLMs) and prompt engineering techniques on
various reasoning tasks, they still struggle with CSKB reasoning. One
of the problems is that it is hard for them to acquire explicit relational
constraints in CSKBs from only in-context exemplars, due to a lack of symbolic
reasoning capabilities (Bengio et al., 2021). To this end, we propose
**ConstraintChecker**, a plugin over prompting techniques that provides and checks
explicit constraints. When considering a new knowledge instance,
ConstraintChecker employs a rule-based module to produce a list of constraints,
then it uses a zero-shot learning module to check whether this knowledge
instance satisfies all constraints. The acquired constraint-checking result is
then aggregated with the output of the main prompting technique to produce the
final output. Experimental results on CSKB Reasoning benchmarks demonstrate the
effectiveness of our method by bringing consistent improvements over all
prompting methods. Code and data are available at
\url{https://github.com/HKUST-KnowComp/ConstraintChecker}.
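The pipeline described in the abstract (a rule-based module that produces explicit constraints, a zero-shot module that checks each one, and aggregation with the main prompting technique's verdict) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the relation names, constraint strings, and the stubbed zero-shot checker are all assumptions; a real system would query an LLM in the checking step.

```python
# Hypothetical sketch of the ConstraintChecker pipeline: rule-based
# constraint generation, per-constraint zero-shot checking, and
# aggregation with the main prompting technique's output.

# Rule-based module: map each CSKB relation to explicit constraints that
# a candidate (head, relation, tail) triple must satisfy. Illustrative only.
RELATION_CONSTRAINTS = {
    "xWant": ["the tail describes an event or state desired by the subject"],
    "isBefore": ["the head event temporally precedes the tail event"],
}

def generate_constraints(relation):
    """Rule-based module: produce the constraint list for a relation."""
    return RELATION_CONSTRAINTS.get(relation, [])

def zero_shot_check(triple, constraint):
    """Placeholder for the zero-shot checking module. A real system would
    prompt an LLM with a yes/no question about this constraint."""
    return True  # toy stand-in for the LLM's judgment

def constraint_checker(triple, main_prompt_verdict):
    """Aggregation: accept the knowledge instance only if the main
    prompting technique accepts it AND every constraint check passes."""
    _head, relation, _tail = triple
    checks = [zero_shot_check(triple, c) for c in generate_constraints(relation)]
    return main_prompt_verdict and all(checks)

triple = ("PersonX buys a ticket", "xWant", "to see the movie")
print(constraint_checker(triple, main_prompt_verdict=True))   # True
print(constraint_checker(triple, main_prompt_verdict=False))  # False
```

The key design point the abstract highlights is that the constraints are produced symbolically (here, a lookup table) rather than inferred from in-context exemplars, and only the per-constraint yes/no judgment is delegated to the model.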
Related papers
- A Knowledge-Injected Curriculum Pretraining Framework for Question Answering [70.13026036388794]
We propose a general Knowledge-Injected Curriculum Pretraining framework (KICP) to achieve comprehensive KG learning and exploitation for knowledge-based question answering tasks.
The KI module first injects knowledge into the LM by generating KG-centered pretraining corpus, and generalizes the process into three key steps.
The KA module learns knowledge from the generated corpus with LM equipped with an adapter as well as keeps its original natural language understanding ability.
The CR module follows human reasoning patterns to construct three corpora with increasing difficulties of reasoning, and further trains the LM from easy to hard in a curriculum manner.
arXiv Detail & Related papers (2024-03-11T03:42:03Z)
- From Instructions to Constraints: Language Model Alignment with Automatic Constraint Verification [70.08146540745877]
We investigate common constraints in NLP tasks and categorize them into three classes based on the types of their arguments.
We propose a unified framework, ACT (Aligning to ConsTraints), to automatically produce supervision signals for user alignment with constraints.
arXiv Detail & Related papers (2024-03-10T22:14:54Z)
- DeepEdit: Knowledge Editing as Decoding with Constraints [118.78008395850888]
How to edit the knowledge used in multi-step reasoning has become a major challenge in the knowledge editing (KE) of large language models (LLMs).
We propose a new KE framework, DEEPEDIT, which enhances LLMs' ability to generate coherent reasoning chains with new knowledge through depth-first search.
In addition to DEEPEDIT, we propose two new KE benchmarks: MQUAKE-2002 and MQUAKE-HARD, which provide more precise and challenging assessments of KE approaches.
arXiv Detail & Related papers (2024-01-19T03:48:27Z)
- Learning to Learn in Interactive Constraint Acquisition [7.741303298648302]
In Constraint Acquisition (CA), the goal is to assist the user by automatically learning the constraint model.
In (inter)active CA, this is done by interactively posting queries to the user.
We propose to use probabilistic classification models to guide interactive CA to generate more promising queries.
arXiv Detail & Related papers (2023-12-17T19:12:33Z)
- CAR: Conceptualization-Augmented Reasoner for Zero-Shot Commonsense Question Answering [56.592385613002584]
We propose Conceptualization-Augmented Reasoner (CAR) to tackle the task of zero-shot commonsense question answering.
CAR abstracts a commonsense knowledge triple into many higher-level instances, which increases the coverage of Commonsense Knowledge Bases.
CAR more robustly generalizes to answering questions about zero-shot commonsense scenarios than existing methods.
arXiv Detail & Related papers (2023-05-24T08:21:31Z)
- Bounding the Capabilities of Large Language Models in Open Text Generation with Prompt Constraints [38.69469206527995]
We take a prompt-centric approach to analyzing and bounding the abilities of open-ended generative models.
We present a generic methodology of analysis with two challenging prompt constraint types: structural and stylistic.
Our results and our in-context mitigation strategies reveal open challenges for future research.
arXiv Detail & Related papers (2023-02-17T23:30:28Z)
- TIARA: Multi-grained Retrieval for Robust Question Answering over Large Knowledge Bases [20.751369684593985]
TIARA outperforms previous SOTA, including those using PLMs or oracle entity annotations, by at least 4.1 and 1.1 F1 points on GrailQA and WebQuestionsSP.
arXiv Detail & Related papers (2022-10-24T02:41:10Z)
- BoxE: A Box Embedding Model for Knowledge Base Completion [53.57588201197374]
Knowledge base completion (KBC) aims to automatically infer missing facts by exploiting information already present in a knowledge base (KB).
Existing embedding models are each subject to at least one of several limitations.
BoxE embeds entities as points and relations as sets of hyper-rectangles (or boxes).
arXiv Detail & Related papers (2020-07-13T09:40:49Z)
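The point-in-box idea behind BoxE can be illustrated with a toy sketch: entities are points, a binary relation carries one box per argument position, and a fact is scored by whether the entity points fall inside the relation's boxes. All coordinates, names, and boxes below are invented for illustration, and the real model is richer (it learns the embeddings and uses a graded distance rather than a hard membership test):

```python
# Toy illustration of box-membership scoring in the spirit of BoxE:
# a fact r(h, t) holds when the point for h lies in r's head box and
# the point for t lies in r's tail box. Values are hand-picked, not trained.

def in_box(point, low, high):
    """True if every coordinate of `point` lies within [low, high]."""
    return all(l <= p <= h for p, l, h in zip(point, low, high))

# Hand-picked 2-D entity embeddings (illustrative).
entity = {"Paris": (0.2, 0.7), "France": (0.6, 0.3)}

# One box per argument position of the binary relation capital_of.
capital_of = {
    "head_box": ((0.0, 0.5), (0.4, 1.0)),  # region for capital cities
    "tail_box": ((0.5, 0.0), (1.0, 0.5)),  # region for countries
}

def fact_holds(relation, head, tail):
    """Hard membership test standing in for BoxE's graded scoring."""
    return (in_box(entity[head], *relation["head_box"])
            and in_box(entity[tail], *relation["tail_box"]))

print(fact_holds(capital_of, "Paris", "France"))  # True
print(fact_holds(capital_of, "France", "Paris"))  # False
```

Because each argument position gets its own box, the relation is naturally directional: swapping head and tail lands the points in the wrong boxes, as the second call shows.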
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.