Do Children Texts Hold The Key To Commonsense Knowledge?
- URL: http://arxiv.org/abs/2210.04530v1
- Date: Mon, 10 Oct 2022 09:56:08 GMT
- Title: Do Children Texts Hold The Key To Commonsense Knowledge?
- Authors: Julien Romero and Simon Razniewski
- Abstract summary: This paper explores whether children's texts hold the key to commonsense knowledge compilation.
An analysis with several corpora shows that children's texts indeed contain much more, and more typical, commonsense assertions.
Experiments show that this advantage can be leveraged in popular language-model-based commonsense knowledge extraction settings.
- Score: 14.678465723838599
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Compiling comprehensive repositories of commonsense knowledge is a
long-standing problem in AI. Many concerns revolve around the issue of
reporting bias, i.e., that frequency in text sources is not a good proxy for
relevance or truth. This paper explores whether children's texts hold the key
to commonsense knowledge compilation, based on the hypothesis that such content
makes fewer assumptions on the reader's knowledge, and therefore spells out
commonsense more explicitly. An analysis with several corpora shows that
children's texts indeed contain much more, and more typical, commonsense
assertions. Moreover, experiments show that this advantage can be leveraged in
popular language-model-based commonsense knowledge extraction settings, where
task-unspecific fine-tuning on small amounts of children's texts (childBERT)
already yields significant improvements. This provides a refreshing perspective
different from the common trend of deriving progress from ever larger models
and corpora.
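The core recipe in the abstract, task-unspecific masked-language-model fine-tuning of a vanilla BERT on a small children's-text corpus before reusing it for commonsense knowledge extraction, can be sketched roughly as below. This is a minimal sketch assuming the Hugging Face transformers and datasets libraries; the corpus file children_corpus.txt, the base model, and all hyperparameters are illustrative placeholders rather than the authors' exact childBERT setup.

```python
# Minimal sketch (assumptions): continue BERT's masked-language-model pre-training
# on a small children's-text corpus, then reuse the checkpoint for LM-based
# commonsense knowledge extraction. Corpus path and hyperparameters are placeholders.
from datasets import load_dataset
from transformers import (
    AutoModelForMaskedLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base_model = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForMaskedLM.from_pretrained(base_model)

# One sentence or paragraph per line; "children_corpus.txt" is a hypothetical file.
corpus = load_dataset("text", data_files={"train": "children_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

train_set = corpus["train"].map(tokenize, batched=True, remove_columns=["text"])

# Standard 15% random masking for the MLM objective.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="childbert",
        num_train_epochs=3,
        per_device_train_batch_size=16,
        learning_rate=5e-5,
    ),
    train_dataset=train_set,
    data_collator=collator,
)
trainer.train()
model.save_pretrained("childbert")       # reuse later, e.g. for cloze-style probing
tokenizer.save_pretrained("childbert")
```

The resulting checkpoint can then be plugged into the usual LM-based extraction settings, for example by probing it with cloze queries ("An elephant has a [MASK].") through a fill-mask pipeline, or by fine-tuning it further on the downstream extraction task.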
Related papers
- What Really is Commonsense Knowledge? [58.5342212738895]
We survey existing definitions of commonsense knowledge, ground them in three frameworks for defining concepts, and consolidate them into a unified definition of commonsense knowledge.
We then use the consolidated definition for annotations and experiments on the CommonsenseQA and CommonsenseQA 2.0 datasets.
Our study shows that a large portion of instances in the two datasets do not involve commonsense knowledge, and that there is a large performance gap between the two subsets.
arXiv Detail & Related papers (2024-11-06T14:54:19Z)
- ClaimVer: Explainable Claim-Level Verification and Evidence Attribution of Text Through Knowledge Graphs [13.608282497568108]
ClaimVer is a human-centric framework tailored to meet users' informational and verification needs.
It highlights each claim, verifies it against a trusted knowledge graph, and provides succinct, clear explanations for each claim prediction.
arXiv Detail & Related papers (2024-03-12T17:07:53Z)
- Rule or Story, Which is a Better Commonsense Expression for Talking with Large Language Models? [49.83570853386928]
Humans convey and pass down commonsense implicitly through stories.
This paper investigates the inherent commonsense ability of large language models (LLMs) expressed through stories.
arXiv Detail & Related papers (2024-02-22T07:55:26Z)
- MORE: Multi-mOdal REtrieval Augmented Generative Commonsense Reasoning [66.06254418551737]
We propose a novel Multi-mOdal REtrieval framework to leverage both text and images to enhance the commonsense ability of language models.
Experiments on the Common-Gen task have demonstrated the efficacy of MORE based on the pre-trained models of both single and multiple modalities.
arXiv Detail & Related papers (2024-02-21T08:54:47Z)
- Visually Grounded Commonsense Knowledge Acquisition [132.42003872906062]
Large-scale commonsense knowledge bases empower a broad range of AI applications.
Visual perception contains rich commonsense knowledge about real-world entities.
We present CLEVER, which formulates commonsense knowledge extraction (CKE) as a distantly supervised multi-instance learning problem.
arXiv Detail & Related papers (2022-11-22T07:00:16Z)
- ComFact: A Benchmark for Linking Contextual Commonsense Knowledge [31.19689856957576]
We propose the new task of commonsense fact linking, where models are given contexts and trained to identify situationally relevant commonsense knowledge from KGs.
Our novel benchmark, ComFact, contains 293k in-context relevance annotations for commonsense across four stylistically diverse datasets.
arXiv Detail & Related papers (2022-10-23T09:30:39Z)
- Dimensions of Commonsense Knowledge [60.49243784752026]
We survey a wide range of popular commonsense sources with a special focus on their relations.
We consolidate these relations into 13 knowledge dimensions, each abstracting over more specific relations found in sources.
arXiv Detail & Related papers (2021-01-12T17:52:39Z)
- Improving Machine Reading Comprehension with Contextualized Commonsense Knowledge [62.46091695615262]
We aim to extract commonsense knowledge to improve machine reading comprehension.
We propose to represent relations implicitly by situating structured knowledge in a context.
We employ a teacher-student paradigm to inject multiple types of contextualized knowledge into a student machine reader.
arXiv Detail & Related papers (2020-09-12T17:20:01Z)
This list is automatically generated from the titles and abstracts of the papers on this site.