Stochastic Natural Language Generation Using Dependency Information
- URL: http://arxiv.org/abs/2001.03897v1
- Date: Sun, 12 Jan 2020 09:40:11 GMT
- Title: Stochastic Natural Language Generation Using Dependency Information
- Authors: Elham Seifossadat and Hossein Sameti
- Abstract summary: This article presents a corpus-based model for generating natural language text.
Our model encodes dependency relations from training data through a feature set, then produces a new dependency tree for a given meaning representation.
We show that our model produces high-quality utterances in terms of informativeness, naturalness, and overall quality.
- Score: 0.7995360025953929
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This article presents a stochastic corpus-based model for generating natural
language text. Our model first encodes dependency relations from training data
through a feature set, then concatenates these features to produce a new
dependency tree for a given meaning representation, and finally generates a
natural language utterance from the produced dependency tree. We test our model
on nine domains covering tabular, dialogue-act, and RDF formats. Our model
outperforms the corpus-based state-of-the-art methods trained on tabular
datasets and also achieves comparable results with neural network-based
approaches trained on dialogue act, E2E and WebNLG datasets for BLEU and ERR
evaluation metrics. Human evaluation results further show that our model
produces high-quality utterances in terms of informativeness, naturalness, and
overall quality.
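The three-stage pipeline in the abstract (encode dependency relations as features, assemble a new dependency tree for an input meaning representation, linearize it into an utterance) can be illustrated with a toy sketch. All names, the feature scheme, and the deterministic slot choices below are hypothetical simplifications; the actual model scores candidate realizations stochastically.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DepNode:
    word: str
    relation: str                      # dependency relation to the head
    children: List["DepNode"] = field(default_factory=list)

def encode_features(corpus):
    """Stage 1: collect slot -> (word, relation) realizations from training trees."""
    features = {}
    for slot, word, relation in corpus:
        features.setdefault(slot, []).append((word, relation))
    return features

def build_tree(meaning_representation, features):
    """Stage 2: attach one feature-derived node per slot to form a new tree."""
    root = DepNode(word="serves", relation="root")
    for slot, value in meaning_representation.items():
        # Take the first recorded realization for the slot; the real model
        # would choose among candidates stochastically.
        _, relation = features[slot][0]
        root.children.append(DepNode(word=value, relation=relation))
    return root

def linearize(node):
    """Stage 3: depth-first linearization of the tree into an utterance."""
    return " ".join([node.word] + [linearize(c) for c in node.children])

# Toy training corpus and meaning representation (illustrative only).
corpus = [("name", "Aromi", "nsubj"), ("food", "Chinese", "obj")]
mr = {"name": "Aromi", "food": "Chinese"}
tree = build_tree(mr, encode_features(corpus))
print(linearize(tree))  # serves Aromi Chinese
```

A real implementation would also handle word order and inflection during linearization; this sketch only shows how slot-level dependency features propagate into the generated tree.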
Related papers
- Relation-based Counterfactual Data Augmentation and Contrastive Learning for Robustifying Natural Language Inference Models [0.0]
We propose a method in which we use token-based and sentence-based augmentation methods to generate counterfactual sentence pairs.
We show that the proposed method can improve the performance and robustness of the NLI model.
arXiv Detail & Related papers (2024-10-28T03:43:25Z)
- Transparency at the Source: Evaluating and Interpreting Language Models With Access to the True Distribution [4.01799362940916]
We present a setup for training, evaluating, and interpreting neural language models that uses artificial, language-like data.
The data is generated using a massive probabilistic grammar that is itself derived from a large natural language corpus.
With access to the underlying true source, our results show striking differences in learning dynamics between different classes of words.
arXiv Detail & Related papers (2023-10-23T12:03:01Z)
- Multi-Scales Data Augmentation Approach In Natural Language Inference For Artifacts Mitigation And Pre-Trained Model Optimization [0.0]
We provide a variety of techniques for analyzing and locating dataset artifacts inside the crowdsourced Stanford Natural Language Inference corpus.
To mitigate dataset artifacts, we employ a unique multi-scale data augmentation technique with two distinct frameworks.
Our combination method enhances our model's resistance to perturbation testing, enabling it to consistently outperform the pre-trained baseline.
arXiv Detail & Related papers (2022-12-16T23:37:44Z)
- Dependency-based Mixture Language Models [53.152011258252315]
We introduce the Dependency-based Mixture Language Models.
In detail, we first train neural language models with a novel dependency modeling objective.
We then formulate the next-token probability by mixing the previous dependency modeling probability distributions with self-attention.
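The mixing step described above can be sketched as a convex interpolation of two next-token distributions. The weight `lam` and the toy distributions are illustrative assumptions, not values from the paper.

```python
def mix_next_token(p_dep, p_attn, lam=0.5):
    """Interpolate a dependency-based next-token distribution with a
    self-attention-based one; lam is a hypothetical mixing weight."""
    vocab = set(p_dep) | set(p_attn)
    return {w: lam * p_dep.get(w, 0.0) + (1.0 - lam) * p_attn.get(w, 0.0)
            for w in vocab}

# Toy distributions over a two-word vocabulary (illustrative only).
p_dep = {"dog": 0.7, "cat": 0.3}
p_attn = {"dog": 0.4, "cat": 0.6}
mixed = mix_next_token(p_dep, p_attn, lam=0.5)
print(mixed["dog"])  # 0.55
```

Because both inputs are proper distributions and the weights sum to one, the mixture needs no renormalization.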
arXiv Detail & Related papers (2022-03-19T06:28:30Z)
- WANLI: Worker and AI Collaboration for Natural Language Inference Dataset Creation [101.00109827301235]
We introduce a novel paradigm for dataset creation based on human and machine collaboration.
We use dataset cartography to automatically identify examples that demonstrate challenging reasoning patterns, and instruct GPT-3 to compose new examples with similar patterns.
The resulting dataset, WANLI, consists of 108,357 natural language inference (NLI) examples that present unique empirical strengths.
arXiv Detail & Related papers (2022-01-16T03:13:49Z)
- NL-Augmenter: A Framework for Task-Sensitive Natural Language Augmentation [91.97706178867439]
We present NL-Augmenter, a new participatory Python-based natural language augmentation framework.
We describe the framework and an initial set of 117 transformations and 23 filters for a variety of natural language tasks.
We demonstrate the efficacy of NL-Augmenter by using several of its transformations to analyze the robustness of popular natural language models.
arXiv Detail & Related papers (2021-12-06T00:37:59Z)
- Language Model Evaluation Beyond Perplexity [47.268323020210175]
We analyze whether text generated from language models exhibits the statistical tendencies present in the human-generated text on which they were trained.
We find that neural language models appear to learn only a subset of the tendencies considered, but align much more closely with empirical trends than proposed theoretical distributions.
arXiv Detail & Related papers (2021-05-31T20:13:44Z)
- Unsupervised Paraphrasing with Pretrained Language Models [85.03373221588707]
We propose a training pipeline that enables pre-trained language models to generate high-quality paraphrases in an unsupervised setting.
Our recipe consists of task-adaptation, self-supervision, and a novel decoding algorithm named Dynamic Blocking.
We show with automatic and human evaluations that our approach achieves state-of-the-art performance on both the Quora Question Pair and the ParaNMT datasets.
arXiv Detail & Related papers (2020-10-24T11:55:28Z)
- Exploiting Syntactic Structure for Better Language Modeling: A Syntactic Distance Approach [78.77265671634454]
We make use of a multi-task objective, i.e., the models simultaneously predict words as well as ground-truth parse trees in a form called "syntactic distances".
Experimental results on the Penn Treebank and Chinese Treebank datasets show that when ground truth parse trees are provided as additional training signals, the model is able to achieve lower perplexity and induce trees with better quality.
arXiv Detail & Related papers (2020-05-12T15:35:00Z)
- Unnatural Language Processing: Bridging the Gap Between Synthetic and Natural Language Data [37.542036032277466]
We introduce a technique for "simulation-to-real" transfer in language understanding problems.
Our approach matches or outperforms state-of-the-art models trained on natural language data in several domains.
arXiv Detail & Related papers (2020-04-28T16:41:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.