Doing Natural Language Processing in A Natural Way: An NLP toolkit based
on object-oriented knowledge base and multi-level grammar base
- URL: http://arxiv.org/abs/2105.05227v1
- Date: Tue, 11 May 2021 17:43:06 GMT
- Title: Doing Natural Language Processing in A Natural Way: An NLP toolkit based
on object-oriented knowledge base and multi-level grammar base
- Authors: Yu Guo
- Abstract summary: This toolkit focuses on semantic parsing; it can also discover new knowledge and grammar automatically.
Newly discovered knowledge and grammar are verified by humans and then used to update the knowledge base and grammar base.
- Score: 2.963359628667052
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce an NLP toolkit based on an object-oriented knowledge base
and a multi-level grammar base. The toolkit focuses on semantic parsing; it can
also discover new knowledge and grammar automatically. Newly discovered
knowledge and grammar are verified by humans and then used to update the
knowledge base and grammar base. This process can be iterated many times to
improve the toolkit continuously.
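The iterative cycle described in the abstract (parse, discover new items, have a human verify them, update the bases) might be sketched as below. This is a hypothetical illustration only; every function, field, and data-structure name here is an assumption, not taken from the toolkit itself:

```python
# Sketch of the parse -> discover -> verify -> update loop from the abstract.
# `parse` and `verify` are placeholders for the toolkit's semantic parser and
# a human reviewer; the bases are modeled as plain sets for simplicity.

def refine_bases(corpus, knowledge_base, grammar_base, parse, verify, iterations=3):
    """Run the discover/verify/update cycle `iterations` times."""
    for _ in range(iterations):
        discovered = []
        for sentence in corpus:
            result = parse(sentence, knowledge_base, grammar_base)
            discovered.extend(result.get("new_items", []))
        # Only human-verified discoveries are merged back into the bases.
        for item in discovered:
            if verify(item):
                target = knowledge_base if item["kind"] == "knowledge" else grammar_base
                target.add(item["value"])
    return knowledge_base, grammar_base
```

Because verified items are merged before the next pass, later iterations parse with a richer base, which is what allows the toolkit to improve continuously.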
Related papers
- Building Tamil Treebanks [0.0]
Treebanks are important linguistic resources, which are structured and annotated corpora with rich linguistic annotations.
This paper discusses the creation of Tamil treebanks using three distinct approaches: manual annotation, computational grammars, and machine learning techniques.
arXiv Detail & Related papers (2024-09-23T01:58:50Z)
- Cross-Lingual Multi-Hop Knowledge Editing -- Benchmarks, Analysis and a Simple Contrastive Learning based Approach [53.028586843468915]
We propose the Cross-Lingual Multi-Hop Knowledge Editing paradigm, for measuring and analyzing the performance of various SoTA knowledge editing techniques in a cross-lingual setup.
Specifically, we create a parallel cross-lingual benchmark, CROLIN-MQUAKE for measuring the knowledge editing capabilities.
Following this, we propose a significantly improved system for cross-lingual multi-hop knowledge editing, CLEVER-CKE.
arXiv Detail & Related papers (2024-07-14T17:18:16Z)
- A Novel Cartography-Based Curriculum Learning Method Applied on RoNLI: The First Romanian Natural Language Inference Corpus [71.77214818319054]
Natural language inference is a proxy for natural language understanding.
There is no publicly available NLI corpus for the Romanian language.
We introduce the first Romanian NLI corpus (RoNLI) comprising 58K training sentence pairs.
arXiv Detail & Related papers (2024-05-20T08:41:15Z)
- CMULAB: An Open-Source Framework for Training and Deployment of Natural Language Processing Models [59.91221728187576]
This paper introduces the CMU Linguistic Annotation Backend (CMULAB), an open-source framework that simplifies model deployment and continuous human-in-the-loop fine-tuning of NLP models.
CMULAB enables users to leverage the power of multilingual models to quickly adapt and extend existing tools for speech recognition, OCR, translation, and syntactic analysis to new languages.
arXiv Detail & Related papers (2024-04-03T02:21:46Z)
- Automating Knowledge Acquisition for Content-Centric Cognitive Agents Using LLMs [0.0]
The paper describes a system that uses large language model (LLM) technology to support the automatic learning of new entries in an intelligent agent's semantic lexicon.
The process is bootstrapped by an existing non-toy lexicon and a natural language generator that converts formal, ontologically-grounded representations of meaning into natural language sentences.
arXiv Detail & Related papers (2023-12-27T02:31:51Z)
- A Survey of Knowledge Enhanced Pre-trained Language Models [78.56931125512295]
We present a comprehensive review of Knowledge Enhanced Pre-trained Language Models (KE-PLMs).
For NLU, we divide the types of knowledge into four categories: linguistic knowledge, text knowledge, knowledge graph (KG) and rule knowledge.
The KE-PLMs for NLG are categorized into KG-based and retrieval-based methods.
arXiv Detail & Related papers (2022-11-11T04:29:02Z)
- Knowledge Based Multilingual Language Model [44.70205282863062]
We present a novel framework to pretrain knowledge based multilingual language models (KMLMs).
We generate a large amount of code-switched synthetic sentences and reasoning-based multilingual training data using the Wikidata knowledge graphs.
Based on the intra- and inter-sentence structures of the generated data, we design pretraining tasks to facilitate knowledge learning.
arXiv Detail & Related papers (2021-11-22T02:56:04Z)
- Natural Language Generation Using Link Grammar for General Conversational Intelligence [0.0]
We propose a new technique to automatically generate grammatically valid sentences using the Link Grammar database.
This natural language generation method far outperforms current state-of-the-art baselines and may serve as the final component in a proto-AGI question answering pipeline.
arXiv Detail & Related papers (2021-04-19T06:16:07Z)
- Meta-learning for fast cross-lingual adaptation in dependency parsing [16.716440467483096]
We apply model-agnostic meta-learning to the task of cross-lingual dependency parsing.
We find that meta-learning with pre-training can significantly improve upon the performance of language transfer.
arXiv Detail & Related papers (2021-04-10T11:10:16Z)
- Exploiting Structured Knowledge in Text via Graph-Guided Representation Learning [73.0598186896953]
We present two self-supervised tasks learning over raw text with the guidance from knowledge graphs.
Building upon entity-level masked language models, our first contribution is an entity masking scheme.
In contrast to existing paradigms, our approach uses knowledge graphs implicitly, only during pre-training.
arXiv Detail & Related papers (2020-04-29T14:22:42Z)
- Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer [64.22926988297685]
Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP).
In this paper, we explore the landscape of introducing transfer learning techniques for NLP by a unified framework that converts all text-based language problems into a text-to-text format.
arXiv Detail & Related papers (2019-10-23T17:37:36Z)
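The "text-to-text format" in the entry above means every task is rewritten as a string-in, string-out problem by prepending a task prefix to the input. The sketch below illustrates that casting; the prefixes follow the conventions reported in the T5 paper, but the helper function itself is a hypothetical illustration, not the paper's code:

```python
# Illustrative casting of heterogeneous NLP tasks into a single
# text-to-text format via task prefixes, as popularized by T5.

def to_text_to_text(task, text):
    """Format a task instance as one prefixed input string."""
    prefixes = {
        "translation": "translate English to German: ",
        "summarization": "summarize: ",
        "acceptability": "cola sentence: ",
    }
    return prefixes[task] + text

example = to_text_to_text("translation", "That is good.")
# A single encoder-decoder model is then trained to emit the target
# (here, the German translation) as plain text, for every task alike.
```

Because inputs and outputs are always strings, one model, one loss, and one decoding procedure cover translation, summarization, and classification alike, which is the core design choice behind the unified framework.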
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.