Natural Language Generation Using Link Grammar for General
Conversational Intelligence
- URL: http://arxiv.org/abs/2105.00830v1
- Date: Mon, 19 Apr 2021 06:16:07 GMT
- Title: Natural Language Generation Using Link Grammar for General
Conversational Intelligence
- Authors: Vignav Ramesh, Anton Kolonin
- Abstract summary: We propose a new technique to automatically generate grammatically valid sentences using the Link Grammar database.
This natural language generation method far outperforms current state-of-the-art baselines and may serve as the final component in a proto-AGI question answering pipeline.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many current artificial general intelligence (AGI) and natural language processing (NLP) architectures do not possess general conversational intelligence; that is, they either do not deal with language at all or are unable to convey knowledge in a form similar to human language without manual, labor-intensive methods such as template-based customization. In this paper, we propose a new technique to automatically generate grammatically valid sentences using the Link Grammar database. This natural language generation method far outperforms current state-of-the-art baselines and may serve as the final component in a proto-AGI question answering pipeline that handles natural language material in an understandable way.
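To make the generation idea concrete, here is a minimal sketch (not the authors' pipeline) that uses the Link Grammar parser as a grammaticality filter: enumerate candidate word orderings and keep those for which the parser finds at least one complete linkage. The `linkgrammar` module and its API are assumed to be the Python bindings that ship with link-grammar 5.x; the function name and example words are illustrative.

```python
from itertools import permutations

# Assumption: the `linkgrammar` Python bindings from link-grammar 5.x
# are installed together with the English dictionary.
from linkgrammar import Dictionary, ParseOptions, Sentence

def grammatical_orderings(words):
    lg_dict = Dictionary()            # loads the English dictionary by default
    opts = ParseOptions(verbosity=0)  # suppress parser chatter
    for order in permutations(words):
        candidate = " ".join(order)
        # parse() returns the complete linkages found; a non-empty result
        # means every word's connectors were satisfied.
        if len(Sentence(candidate, lg_dict, opts).parse()) > 0:
            yield candidate

for sentence in grammatical_orderings(["dog", "the", "barked"]):
    print(sentence)  # e.g. "the dog barked"
```

A real generator would search the dictionary's connector expressions directly rather than brute-forcing permutations, whose count grows factorially with sentence length.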
Related papers
- Putting Natural in Natural Language Processing [11.746833714322156]
The field of NLP has overwhelmingly focused on processing written rather than spoken language.
Recent advances in deep learning have led to a fortuitous convergence in methods between speech processing and mainstream NLP.
Truly natural language processing could lead to better integration with the rest of language science.
arXiv Detail & Related papers (2023-05-08T09:29:31Z)
- Language Models as Inductive Reasoners [125.99461874008703]
We propose a new paradigm (task) for inductive reasoning, which is to induce natural language rules from natural language facts.
We create a dataset termed DEER containing 1.2k rule-fact pairs for the task, where rules and facts are written in natural language.
We provide the first comprehensive analysis of how well pretrained language models can induce natural language rules from natural language facts.
arXiv Detail & Related papers (2022-12-21T11:12:14Z)
- Benchmarking Language Models for Code Syntax Understanding [79.11525961219591]
Pre-trained language models have demonstrated impressive performance in both natural language processing and program understanding.
In this work, we perform the first thorough benchmarking of the state-of-the-art pre-trained models for identifying the syntactic structures of programs.
Our findings point out key limitations of existing pre-training methods for programming languages, and suggest the importance of modeling code syntactic structures.
arXiv Detail & Related papers (2022-10-26T04:47:18Z)
- Linking Emergent and Natural Languages via Corpus Transfer [98.98724497178247]
We propose a novel way to establish a link by corpus transfer between emergent languages and natural languages.
Our approach showcases non-trivial transfer benefits for two different tasks -- language modeling and image captioning.
We also introduce a novel metric to predict the transferability of an emergent language by translating emergent messages to natural language captions grounded on the same images.
arXiv Detail & Related papers (2022-03-24T21:24:54Z)
- Context-Tuning: Learning Contextualized Prompts for Natural Language Generation [52.835877179365525]
We propose a novel continuous prompting approach, called Context-Tuning, for fine-tuning pretrained language models (PLMs) for natural language generation.
First, the prompts are derived from the input text, so that they can elicit useful knowledge from the PLM for generation.
Second, to further enhance the relevance of the generated text to the input, we use continuous inverse prompting to refine the generation process (a minimal sketch of input-derived prompting follows this entry).
arXiv Detail & Related papers (2022-01-21T12:35:28Z)
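Context-Tuning's first step, deriving continuous prompts from the input text, can be sketched in a few lines of PyTorch. Everything below (module name, mean-pooling, shapes) is an illustrative assumption rather than the paper's architecture:

```python
import torch
import torch.nn as nn

class ContextPromptEncoder(nn.Module):
    """Illustrative sketch: map input embeddings to continuous prompts."""

    def __init__(self, d_model: int, n_prompt: int):
        super().__init__()
        self.n_prompt = n_prompt
        self.proj = nn.Linear(d_model, n_prompt * d_model)

    def forward(self, input_embeds: torch.Tensor) -> torch.Tensor:
        # input_embeds: (batch, seq_len, d_model)
        pooled = input_embeds.mean(dim=1)    # crude summary of the input text
        prompts = self.proj(pooled)          # (batch, n_prompt * d_model)
        return prompts.view(-1, self.n_prompt, input_embeds.size(-1))

# Prepend the input-derived prompts to the PLM's own input embeddings:
encoder = ContextPromptEncoder(d_model=768, n_prompt=8)
embeds = torch.randn(2, 16, 768)                          # stand-in embeddings
full_input = torch.cat([encoder(embeds), embeds], dim=1)  # (2, 24, 768)
```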
"Learning to INterpolate for Data Augmentation" (LINDA) is an unsupervised learning approach to text for the purpose of data augmentation.
LINDA learns to interpolate between any pair of natural language sentences over a natural language manifold.
We show that LINDA indeed allows us to seamlessly apply mixup in NLP, leading to better generalization in text classification both in-domain and out-of-domain (see the sketch after this entry).
arXiv Detail & Related papers (2021-12-28T02:56:41Z)
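For background on the LINDA entry above: plain mixup on fixed text representations looks like the toy sketch below. LINDA's contribution is to learn the interpolation over raw sentences on a language manifold instead; the shapes and names here are assumptions for illustration.

```python
import torch

def mixup_text(h1, h2, y1, y2, alpha: float = 0.4):
    """Plain mixup on fixed-size text representations (a toy stand-in;
    LINDA instead learns to interpolate the sentences themselves)."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    h_mix = lam * h1 + (1.0 - lam) * h2  # interpolated representation
    y_mix = lam * y1 + (1.0 - lam) * y2  # interpolated (soft) label
    return h_mix, y_mix

h1, h2 = torch.randn(768), torch.randn(768)  # assumed sentence embeddings
y1 = torch.tensor([1.0, 0.0])                # one-hot labels, 2 classes
y2 = torch.tensor([0.0, 1.0])
h_mix, y_mix = mixup_text(h1, h2, y1, y2)
```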
- Learning Symbolic Rules for Reasoning in Quasi-Natural Language [74.96601852906328]
We build a rule-based system that can reason with natural language input but without the manual construction of rules.
We propose MetaQNL, a "Quasi-Natural" language that can express both formal logic and natural language sentences.
Our approach achieves state-of-the-art accuracy on multiple reasoning benchmarks.
arXiv Detail & Related papers (2021-11-23T17:49:00Z)
- Learning Natural Language Generation from Scratch [25.984828046001013]
This paper introduces TRUncated ReinForcement Learning for Language (TrufLL), an original approach to training conditional language models from scratch using only reinforcement learning (RL).
arXiv Detail & Related papers (2021-09-20T08:46:51Z)
- Doing Natural Language Processing in A Natural Way: An NLP toolkit based on object-oriented knowledge base and multi-level grammar base [2.963359628667052]
This toolkit focuses on semantic parsing, and it can also discover new knowledge and grammar automatically.
Newly discovered knowledge and grammar are verified by humans before being used to update the knowledge base and grammar base.
arXiv Detail & Related papers (2021-05-11T17:43:06Z)
- Conditioned Natural Language Generation using only Unconditioned Language Model: An Exploration [8.623022983093444]
Transformer-based language models have proven to be very powerful for natural language generation (NLG).
We argue that the original unconditioned LM is sufficient for conditioned NLG.
We evaluate our approach on sample fluency and diversity using both automated and human evaluation.
arXiv Detail & Related papers (2020-11-14T17:45:11Z)
- Automatic Extraction of Rules Governing Morphological Agreement [103.78033184221373]
We develop an automated framework for extracting a first-pass grammatical specification from raw text.
We focus on extracting rules describing agreement, a morphosyntactic phenomenon at the core of the grammars of many of the world's languages.
We apply our framework to all languages included in the Universal Dependencies project, with promising results (a toy counting sketch follows below).
arXiv Detail & Related papers (2020-10-02T18:31:45Z)
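A crude approximation of the agreement-extraction idea above is simple counting over dependency treebanks: for each relation, measure how often head and dependent share a morphological feature, and emit a rule when agreement is near-categorical. The data layout and threshold below are invented for illustration; real input would come from Universal Dependencies CoNLL-U files.

```python
from collections import defaultdict

# Toy treebank fragment: (relation, head_features, dependent_features).
EDGES = [
    ("nsubj", {"Number": "Sing"}, {"Number": "Sing"}),
    ("nsubj", {"Number": "Plur"}, {"Number": "Plur"}),
    ("nsubj", {"Number": "Sing"}, {"Number": "Sing"}),
    ("obj",   {"Number": "Sing"}, {"Number": "Plur"}),
]

def agreement_rules(edges, feature="Number", threshold=0.9):
    agree, total = defaultdict(int), defaultdict(int)
    for rel, head, dep in edges:
        if feature in head and feature in dep:
            total[rel] += 1
            agree[rel] += head[feature] == dep[feature]
    for rel in total:
        rate = agree[rel] / total[rel]
        if rate >= threshold:  # near-categorical agreement -> emit a rule
            yield f"{rel}: head and dependent agree in {feature} ({rate:.0%})"

print(list(agreement_rules(EDGES)))
# ['nsubj: head and dependent agree in Number (100%)']
```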
This list is automatically generated from the titles and abstracts of the papers on this site.