Commonsense Knowledge Base Construction in the Age of Big Data
- URL: http://arxiv.org/abs/2105.01925v1
- Date: Wed, 5 May 2021 08:27:36 GMT
- Title: Commonsense Knowledge Base Construction in the Age of Big Data
- Authors: Simon Razniewski
- Abstract summary: We will showcase three systems for automated commonsense knowledge base construction.
We use Quasimodo to illustrate knowledge extraction systems engineering, Dice to illustrate the role that schema constraints play in cleaning fuzzy commonsense knowledge, and Ascent to illustrate the relevance of conceptual modelling.
- Score: 8.678138390075077
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Compiling commonsense knowledge is traditionally an AI topic approached by
manual labor. Recent advances in web data processing have enabled automated
approaches. In this demonstration we will showcase three systems for automated
commonsense knowledge base construction, highlighting each time one aspect of
specific interest to the data management community. (i) We use Quasimodo to
illustrate knowledge extraction systems engineering, (ii) Dice to illustrate
the role that schema constraints play in cleaning fuzzy commonsense knowledge,
and (iii) Ascent to illustrate the relevance of conceptual modelling. The demos
are available online at https://quasimodo.r2.enst.fr,
https://dice.mpi-inf.mpg.de and https://ascent.mpi-inf.mpg.de.
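To make aspect (ii) concrete, here is a minimal sketch of how schema constraints can clean fuzzy commonsense triples. The predicate schema, type lexicon, triples, and threshold below are invented for illustration and do not reproduce Dice's actual rules:

```python
# Sketch: using schema constraints to clean fuzzy commonsense triples.
# The constraints and triples are invented; they do not reproduce Dice's schema.
from typing import NamedTuple

class Triple(NamedTuple):
    subject: str
    predicate: str
    obj: str
    confidence: float  # extractor score in [0, 1]

# Hypothetical schema: each predicate constrains the semantic type of its object.
SCHEMA = {
    "has_part": {"physical_object"},
    "capable_of": {"activity"},
    "located_at": {"location"},
}

# Hypothetical type lexicon (in a real system this would come from a taxonomy).
TYPES = {
    "trunk": "physical_object",
    "swim": "activity",
    "zoo": "location",
    "happiness": "abstract",
}

def satisfies_schema(t: Triple) -> bool:
    """Keep a triple only if its object type is allowed for its predicate."""
    allowed = SCHEMA.get(t.predicate)
    return allowed is not None and TYPES.get(t.obj) in allowed

raw = [
    Triple("elephant", "has_part", "trunk", 0.9),
    Triple("elephant", "has_part", "happiness", 0.4),  # violates the schema
    Triple("fish", "capable_of", "swim", 0.8),
]

clean = [t for t in raw if satisfies_schema(t) and t.confidence >= 0.5]
print(clean)  # only schema-consistent, high-confidence triples survive
```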
Related papers
- Understanding Generative AI Content with Embedding Models [4.662332573448995]
This work views the internal representations of modern deep neural networks (DNNs) as an automated form of traditional feature engineering.
We show that these embeddings can reveal interpretable, high-level concepts in unstructured sample data.
We find empirical evidence of inherent separability between embeddings of real data and of data generated by AI models (see the sketch after this entry).
arXiv Detail & Related papers (2024-08-19T22:07:05Z) - Knowledge Plugins: Enhancing Large Language Models for Domain-Specific
- Knowledge Plugins: Enhancing Large Language Models for Domain-Specific Recommendations [50.81844184210381]
We propose a general paradigm that augments large language models with DOmain-specific KnowledgE to enhance their performance on practical applications, namely DOKE.
This paradigm relies on a domain knowledge extractor working in three steps: 1) preparing effective knowledge for the task; 2) selecting the knowledge for each specific sample; and 3) expressing the knowledge in an LLM-understandable way (see the sketch after this entry).
arXiv Detail & Related papers (2023-11-16T07:09:38Z) - Scene-Driven Multimodal Knowledge Graph Construction for Embodied AI [2.380943129168748]
- Scene-Driven Multimodal Knowledge Graph Construction for Embodied AI [2.380943129168748]
Embodied AI is one of the most popular studies in artificial intelligence and robotics.
Scene knowledge is important for an agent to understand the surroundings and make correct decisions.
The proposed Scene-MMKG construction method combines conventional knowledge engineering with large language models.
arXiv Detail & Related papers (2023-11-07T08:06:27Z) - Pathway toward prior knowledge-integrated machine learning in
engineering [1.3091722164946331]
This study emphasizes efforts to integrate multidisciplinary domain expertise into machine-readable, data-driven processes.
This approach balances holistic and reductionist perspectives in the engineering domain.
arXiv Detail & Related papers (2023-07-10T13:06:55Z) - Recognizing Unseen Objects via Multimodal Intensive Knowledge Graph
Propagation [68.13453771001522]
We propose a multimodal intensive ZSL framework that matches regions of images with corresponding semantic embeddings (see the sketch after this entry).
We conduct extensive experiments and evaluate our model on large-scale real-world data.
arXiv Detail & Related papers (2023-06-14T13:07:48Z) - Scaling Knowledge Graphs for Automating AI of Digital Twins [2.8693907332286996]
- Scaling Knowledge Graphs for Automating AI of Digital Twins [2.8693907332286996]
Digital Twins are digital representations of systems in the Internet of Things (IoT), often based on AI models trained on data from those systems.
We will discuss the unique requirements of applying semantic graphs to automate Digital Twins in different practical use cases.
arXiv Detail & Related papers (2022-10-26T10:12:10Z) - Towards a Universal Continuous Knowledge Base [49.95342223987143]
We propose a method for building a continuous knowledge base that can store knowledge imported from multiple neural networks.
The knowledge from multiple models is imported into the knowledge base, from which the fused knowledge is exported back to a single model (see the sketch after this entry).
Experiments on text classification show promising results.
arXiv Detail & Related papers (2020-12-25T12:27:44Z) - KRISP: Integrating Implicit and Symbolic Knowledge for Open-Domain
- KRISP: Integrating Implicit and Symbolic Knowledge for Open-Domain Knowledge-Based VQA [107.7091094498848]
One of the most challenging question types in VQA is when answering the question requires outside knowledge not present in the image.
In this work we study open-domain knowledge: the setting where the knowledge required to answer a question is not given or annotated at either training or test time.
We tap into two types of knowledge representations and reasoning: first, implicit knowledge, which can be learned effectively from unsupervised language pre-training and supervised training data with transformer-based models; and second, explicit symbolic knowledge encoded in knowledge bases.
arXiv Detail & Related papers (2020-12-20T20:13:02Z) - Model-Based Deep Learning [155.063817656602]
Signal processing, communications, and control have traditionally relied on classical statistical modeling techniques.
Deep neural networks (DNNs) use generic architectures which learn to operate from data, and demonstrate excellent performance.
We are interested in hybrid techniques that combine principled mathematical models with data-driven systems to benefit from the advantages of both approaches (see the sketch after this entry).
arXiv Detail & Related papers (2020-12-15T16:29:49Z) - Machine Knowledge: Creation and Curation of Comprehensive Knowledge
- Machine Knowledge: Creation and Curation of Comprehensive Knowledge Bases [28.856786775318486]
Large-scale knowledge bases, also known as knowledge graphs, have been automatically constructed from web contents and text sources.
This article surveys fundamental concepts and practical methods for creating and curating large knowledge bases.
arXiv Detail & Related papers (2020-09-24T09:28:13Z) - Common Sense or World Knowledge? Investigating Adapter-Based Knowledge
Injection into Pretrained Transformers [54.417299589288184]
We investigate models for complementing the distributional knowledge of BERT with conceptual knowledge from ConceptNet and its corresponding Open Mind Common Sense (OMCS) corpus.
Our adapter-based models substantially outperform BERT on inference tasks that require the type of conceptual knowledge explicitly present in ConceptNet and OMCS (see the sketch after this entry).
arXiv Detail & Related papers (2020-05-24T15:49:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.