Common-Knowledge Concept Recognition for SEVA
- URL: http://arxiv.org/abs/2003.11687v1
- Date: Thu, 26 Mar 2020 00:30:36 GMT
- Title: Common-Knowledge Concept Recognition for SEVA
- Authors: Jitin Krishnan, Patrick Coronado, Hemant Purohit, and Huzefa Rangwala
- Abstract summary: We build a common-knowledge concept recognition system for a Systems Engineer's Virtual Assistant (SEVA).
The problem is formulated as a token classification task similar to named entity extraction.
We construct a dataset annotated at the word-level by carefully defining a labelling scheme to train a sequence model to recognize systems engineering concepts.
- Score: 15.124939896007472
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We build a common-knowledge concept recognition system for a Systems
Engineer's Virtual Assistant (SEVA) which can be used for downstream tasks such
as relation extraction, knowledge graph construction, and question-answering.
The problem is formulated as a token classification task similar to named
entity extraction. With the help of a domain expert and text processing
methods, we construct a dataset annotated at the word-level by carefully
defining a labelling scheme to train a sequence model to recognize systems
engineering concepts. We use a pre-trained language model and fine-tune it with
the labeled dataset of concepts. In addition, we also create some essential
datasets for information such as abbreviations and definitions from the systems
engineering domain. Finally, we construct a simple knowledge graph using these
extracted concepts along with some hyponym relations.
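The abstract frames concept recognition as an NER-style token classification task solved by fine-tuning a pre-trained language model on a word-level labeled dataset. Below is a minimal sketch of that setup using Hugging Face Transformers; the BIO-style label set, the example sentence, and the bert-base-cased checkpoint are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal token-classification sketch (assumed setup, not the paper's exact one).
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

# Hypothetical BIO-style labels for systems engineering concepts;
# the paper defines its own labelling scheme.
LABELS = ["O", "B-CONCEPT", "I-CONCEPT"]
label2id = {label: i for i, label in enumerate(LABELS)}

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-cased", num_labels=len(LABELS)
)

# One toy word-level annotated sentence.
words = ["The", "flight", "software", "monitors", "battery", "voltage", "."]
word_labels = ["O", "B-CONCEPT", "I-CONCEPT", "O", "B-CONCEPT", "I-CONCEPT", "O"]

# Tokenize into subwords and align word-level labels to subword tokens,
# masking special tokens and subword continuations with -100 (ignored by the loss).
enc = tokenizer(words, is_split_into_words=True, return_tensors="pt")
aligned, prev = [], None
for wid in enc.word_ids():
    if wid is None or wid == prev:
        aligned.append(-100)
    else:
        aligned.append(label2id[word_labels[wid]])
    prev = wid

labels = torch.tensor([aligned])
outputs = model(**enc, labels=labels)
print(float(outputs.loss))  # a fine-tuning step would back-propagate this loss
```

In practice this forward pass would sit inside a standard fine-tuning loop (or a Trainer) over the annotated concept dataset.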
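The final step in the abstract is a simple knowledge graph assembled from the extracted concepts and hyponym relations. The toy sketch below, using networkx, shows one way such a graph could be built and queried; the concepts and is-a pairs are invented for illustration and are not taken from the paper's dataset.

```python
# Toy sketch: extracted concepts become nodes, hyponym ("is-a") relations
# become directed edges. Example data is hypothetical.
import networkx as nx

concepts = ["component", "battery", "solar array",
            "requirement", "functional requirement"]
hyponym_pairs = [                 # (hyponym, hypernym)
    ("battery", "component"),
    ("solar array", "component"),
    ("functional requirement", "requirement"),
]

kg = nx.DiGraph()
kg.add_nodes_from(concepts)
kg.add_edges_from(hyponym_pairs, relation="is_a")

# Query the graph: list the hyponyms recorded for "component".
print(sorted(h for h, _ in kg.in_edges("component")))
# ['battery', 'solar array']
```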
Related papers
- Customized Information and Domain-centric Knowledge Graph Construction with Large Language Models [0.0]
We propose a novel approach based on knowledge graphs to provide timely access to structured information.
Our framework encompasses a text mining process, which includes information retrieval, keyphrase extraction, semantic network creation, and topic map visualization.
We apply our methodology to the domain of automotive electrical systems to demonstrate the approach, which is scalable.
arXiv Detail & Related papers (2024-09-30T07:08:28Z)
- Discover-then-Name: Task-Agnostic Concept Bottlenecks via Automated Concept Discovery [52.498055901649025]
Concept Bottleneck Models (CBMs) have been proposed to address the 'black-box' problem of deep neural networks.
We propose a novel CBM approach -- called Discover-then-Name-CBM (DN-CBM) -- that inverts the typical paradigm.
Our concept extraction strategy is efficient, since it is agnostic to the downstream task, and uses concepts already known to the model.
arXiv Detail & Related papers (2024-07-19T17:50:11Z)
- Knowledge graphs for empirical concept retrieval [1.06378109904813]
Concept-based explainable AI is promising as a tool to improve the understanding of complex models at the premises of a given user.
Here, we present a workflow for user-driven data collection in both text and image domains.
We test the retrieved concept datasets on two concept-based explainability methods, namely concept activation vectors (CAVs) and concept activation regions (CARs).
arXiv Detail & Related papers (2024-04-10T13:47:22Z)
- Model-Driven Engineering Method to Support the Formalization of Machine Learning using SysML [0.0]
This work introduces a method supporting the collaborative definition of machine learning tasks by leveraging model-based engineering.
The method supports the identification and integration of various data sources, the required definition of semantic connections between data attributes, and the definition of data processing steps.
arXiv Detail & Related papers (2023-07-10T11:33:46Z)
- UNTER: A Unified Knowledge Interface for Enhancing Pre-trained Language Models [100.4659557650775]
We propose a UNified knowledge inTERface, UNTER, to provide a unified perspective to exploit both structured knowledge and unstructured knowledge.
With both forms of knowledge injected, UNTER gains continuous improvements on a series of knowledge-driven NLP tasks.
arXiv Detail & Related papers (2023-05-02T17:33:28Z)
- DetCLIP: Dictionary-Enriched Visual-Concept Paralleled Pre-training for Open-world Detection [118.36746273425354]
This paper presents a paralleled visual-concept pre-training method for open-world detection by resorting to knowledge enrichment from a designed concept dictionary.
By enriching the concepts with their descriptions, we explicitly build relationships among the concepts to facilitate open-domain learning.
The proposed framework demonstrates strong zero-shot detection performances, e.g., on the LVIS dataset, our DetCLIP-T outperforms GLIP-T by 9.9% mAP and obtains a 13.5% improvement on rare categories.
arXiv Detail & Related papers (2022-09-20T02:01:01Z)
- Computing Rule-Based Explanations of Machine Learning Classifiers using Knowledge Graphs [62.997667081978825]
We use knowledge graphs as the underlying framework providing the terminology for representing explanations for the operation of a machine learning classifier.
In particular, we introduce a novel method for extracting and representing black-box explanations of its operation, in the form of first-order logic rules expressed in the terminology of the knowledge graph.
arXiv Detail & Related papers (2022-02-08T16:21:49Z)
- Extracting Semantics from Maintenance Records [0.2578242050187029]
We develop three approaches to extracting named entities from maintenance records.
These include a syntactic rule-based and semantics-based approach and an approach leveraging a pre-trained language model.
Our evaluations on a real-world aviation maintenance records dataset show promising results.
arXiv Detail & Related papers (2021-08-11T21:23:10Z)
- KnowPrompt: Knowledge-aware Prompt-tuning with Synergistic Optimization for Relation Extraction [111.74812895391672]
We propose a Knowledge-aware Prompt-tuning approach with synergistic optimization (KnowPrompt).
We inject latent knowledge contained in relation labels into prompt construction with learnable virtual type words and answer words.
arXiv Detail & Related papers (2021-04-15T17:57:43Z)
- Exploiting Structured Knowledge in Text via Graph-Guided Representation Learning [73.0598186896953]
We present two self-supervised tasks learning over raw text with the guidance from knowledge graphs.
Building upon entity-level masked language models, our first contribution is an entity masking scheme.
In contrast to existing paradigms, our approach uses knowledge graphs implicitly, only during pre-training.
arXiv Detail & Related papers (2020-04-29T14:22:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.