Materialized Knowledge Bases from Commonsense Transformers
- URL: http://arxiv.org/abs/2112.14815v1
- Date: Wed, 29 Dec 2021 20:22:05 GMT
- Title: Materialized Knowledge Bases from Commonsense Transformers
- Authors: Tuan-Phong Nguyen, Simon Razniewski
- Abstract summary: Although generating commonsense knowledge directly from pre-trained language models has received significant attention, no materialized resource of knowledge generated this way is publicly available.
This paper fills this gap, and uses the materialized resources to perform a detailed analysis of the potential of this approach in terms of precision and recall.
We identify common problem cases, and outline use cases enabled by materialized resources.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Starting from the COMET methodology by Bosselut et al. (2019), generating
commonsense knowledge directly from pre-trained language models has recently
received significant attention. Surprisingly, up to now no materialized
resource of commonsense knowledge generated this way is publicly available.
This paper fills this gap, and uses the materialized resources to perform a
detailed analysis of the potential of this approach in terms of precision and
recall. Furthermore, we identify common problem cases, and outline use cases
enabled by materialized resources. We posit that the availability of these
resources is important for the advancement of the field, as it enables an
off-the-shelf use of the resulting knowledge, as well as further analyses of
its strengths and weaknesses.
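To make the materialization step concrete: a COMET-style model is queried with (head, relation) pairs, and its generated completions become the tails of new knowledge triples. Below is a minimal sketch of such a query loop, assuming a COMET-style sequence-to-sequence checkpoint is available under the hypothetical name `comet-bart`; the `{head} {relation} [GEN]` prompt format follows the public COMET-ATOMIC-2020 code, and the decoding settings are illustrative, not the paper's exact configuration.

```python
# Minimal sketch of COMET-style knowledge materialization.
# "comet-bart" is a hypothetical local checkpoint path, not a published
# model name; any seq2seq model fine-tuned in the COMET fashion would do.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("comet-bart")
model = AutoModelForSeq2SeqLM.from_pretrained("comet-bart")

def generate_tails(head: str, relation: str, k: int = 5) -> list[str]:
    """Generate k candidate tail phrases for one (head, relation) query."""
    # COMET-ATOMIC-2020 formats queries as "<head> <relation> [GEN]".
    prompt = f"{head} {relation} [GEN]"
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(
        **inputs,
        num_beams=k,
        num_return_sequences=k,
        max_new_tokens=24,
    )
    return [tokenizer.decode(o, skip_special_tokens=True).strip()
            for o in outputs]

# Materializing a resource means running such queries over many heads and
# relations and storing the resulting (head, relation, tail) triples.
for tail in generate_tails("PersonX goes to the store", "xIntent"):
    print(("PersonX goes to the store", "xIntent", tail))
```

Generating several beams per query mirrors how a materialized resource collects multiple candidate tails per (head, relation) pair, which is what the paper's precision and recall analysis then evaluates.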
Related papers
- A Knowledge Plug-and-Play Test Bed for Open-domain Dialogue Generation [51.31429493814664]
We present a benchmark named multi-source Wizard of Wikipedia for evaluating multi-source dialogue knowledge selection and response generation.
We propose a new challenge, dialogue knowledge plug-and-play, which aims to test an already trained dialogue model on using new support knowledge from previously unseen sources.
arXiv Detail & Related papers (2024-03-06T06:54:02Z) - Informed Meta-Learning [55.2480439325792]
Meta-learning and informed ML stand out as two approaches for incorporating prior knowledge into ML pipelines.
We formalise a hybrid paradigm, informed meta-learning, facilitating the incorporation of priors from unstructured knowledge representations.
We demonstrate the potential benefits of informed meta-learning in improving data efficiency, robustness to observational noise and task distribution shifts.
arXiv Detail & Related papers (2024-02-25T15:08:37Z) - Joint Reasoning on Hybrid-knowledge sources for Task-Oriented Dialog [12.081212540168055]
We present a modified version of the MultiWOZ-based dataset prepared by SeKnow to demonstrate how current methods suffer a significant degradation in performance.
In line with recent work exploiting pre-trained language models, we fine-tune a BART-based model using prompts for the tasks of querying knowledge sources.
We demonstrate that our model is robust to perturbations to knowledge modality (source of information) and that it can fuse information from structured as well as unstructured knowledge to generate responses.
arXiv Detail & Related papers (2022-10-13T18:49:59Z) - A Unified End-to-End Retriever-Reader Framework for Knowledge-based VQA [67.75989848202343]
This paper presents a unified end-to-end retriever-reader framework towards knowledge-based VQA.
We shed light on the multi-modal implicit knowledge from vision-language pre-training models to mine its potential in knowledge reasoning.
Our scheme not only provides guidance for knowledge retrieval, but also drops instances that are potentially error-prone for question answering.
arXiv Detail & Related papers (2022-06-30T02:35:04Z) - Embedding Knowledge for Document Summarization: A Survey [66.76415502727802]
Previous works proved that knowledge-embedded document summarizers excel at generating superior digests.
We propose novel taxonomies to recapitulate knowledge and knowledge embeddings under the document summarization view.
arXiv Detail & Related papers (2022-04-24T04:36:07Z) - KAT: A Knowledge Augmented Transformer for Vision-and-Language [56.716531169609915]
We propose a novel model - Knowledge Augmented Transformer (KAT) - which achieves a strong state-of-the-art result on the open-domain multimodal task of OK-VQA.
Our approach integrates implicit and explicit knowledge in an end-to-end encoder-decoder architecture, while still jointly reasoning over both knowledge sources during answer generation.
An additional benefit of explicit knowledge integration is seen in improved interpretability of model predictions in our analysis.
arXiv Detail & Related papers (2021-12-16T04:37:10Z) - Dimensions of Commonsense Knowledge [60.49243784752026]
We survey a wide range of popular commonsense sources with a special focus on their relations.
We consolidate these relations into 13 knowledge dimensions, each abstracting over more specific relations found in sources.
arXiv Detail & Related papers (2021-01-12T17:52:39Z) - Improving Commonsense Question Answering by Graph-based Iterative Retrieval over Multiple Knowledge Sources [26.256653692882715]
How to engage commonsense effectively in question answering systems is still under exploration.
We propose a novel question-answering method by integrating ConceptNet, Wikipedia, and the Cambridge Dictionary.
We use a pre-trained language model to encode the question, retrieved knowledge and choices, and propose an answer choice-aware attention mechanism.
arXiv Detail & Related papers (2020-11-05T08:50:43Z)
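As a generic illustration of the "answer choice-aware attention" mentioned in the last entry, the sketch below lets each encoded answer choice attend over encoded knowledge sentences via plain scaled dot-product attention; the tensor shapes and scoring function are assumptions for illustration, not the cited paper's exact mechanism.

```python
# Generic sketch of answer choice-aware attention: each answer choice
# attends over encoded knowledge sentences to build a choice-specific
# knowledge summary. Shapes and scoring are illustrative assumptions,
# not the cited paper's exact design.
import math
import torch
import torch.nn.functional as F

def choice_aware_attention(
    choices: torch.Tensor,    # (num_choices, dim) encoded answer choices
    knowledge: torch.Tensor,  # (num_sents, dim) encoded knowledge sentences
) -> torch.Tensor:
    # Scaled dot-product scores between every choice and knowledge sentence.
    scores = choices @ knowledge.T / math.sqrt(choices.size(-1))
    weights = F.softmax(scores, dim=-1)  # (num_choices, num_sents)
    return weights @ knowledge           # (num_choices, dim)

# Example: 5 candidate answers attending over 8 retrieved sentences.
summary = choice_aware_attention(torch.randn(5, 768), torch.randn(8, 768))
print(summary.shape)  # torch.Size([5, 768])
```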
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.