Unsupervised Dual Paraphrasing for Two-stage Semantic Parsing
- URL: http://arxiv.org/abs/2005.13485v3
- Date: Tue, 22 Dec 2020 11:45:03 GMT
- Title: Unsupervised Dual Paraphrasing for Two-stage Semantic Parsing
- Authors: Ruisheng Cao, Su Zhu, Chenyu Yang, Chen Liu, Rao Ma, Yanbin Zhao, Lu Chen and Kai Yu
- Abstract summary: We propose a two-stage semantic parsing framework to reduce nontrivial human labor.
The first stage utilizes an unsupervised paraphrase model to convert an unlabeled natural language utterance into a canonical utterance.
The downstream naive semantic parser accepts the intermediate output and returns the target logical form.
- Score: 41.345662724584884
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: One daunting problem for semantic parsing is the scarcity of annotation.
Aiming to reduce nontrivial human labor, we propose a two-stage semantic
parsing framework, where the first stage utilizes an unsupervised paraphrase
model to convert an unlabeled natural language utterance into the canonical
utterance. The downstream naive semantic parser accepts the intermediate output
and returns the target logical form. Furthermore, the entire training process
is split into two phases: pre-training and cycle learning. Three tailored
self-supervised tasks are introduced throughout training to activate the
unsupervised paraphrase model. Experimental results on benchmarks Overnight and
GeoGranno demonstrate that our framework is effective and compatible with
supervised training.
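As a rough illustration of the two-stage pipeline described in the abstract, the sketch below simply chains a paraphrase step and a naive parsing step. It is a minimal sketch only: the class names ParaphraseModel and NaiveSemanticParser are hypothetical placeholders, not identifiers from the paper's code, and the actual models are neural networks trained via pre-training and cycle learning.
```python
class ParaphraseModel:
    """Stage 1 (hypothetical interface): rewrite a natural language
    utterance as a canonical utterance."""
    def paraphrase(self, utterance: str) -> str:
        raise NotImplementedError


class NaiveSemanticParser:
    """Stage 2 (hypothetical interface): map a canonical utterance
    to the target logical form."""
    def parse(self, canonical_utterance: str) -> str:
        raise NotImplementedError


def two_stage_parse(utterance: str,
                    paraphraser: ParaphraseModel,
                    parser: NaiveSemanticParser) -> str:
    # Stage 1: unlabeled natural language -> canonical utterance
    canonical = paraphraser.paraphrase(utterance)
    # Stage 2: canonical utterance -> logical form
    return parser.parse(canonical)
```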
Related papers
- Causal Unsupervised Semantic Segmentation [60.178274138753174]
Unsupervised semantic segmentation aims to achieve high-quality semantic grouping without human-labeled annotations.
We propose a novel framework, CAusal Unsupervised Semantic sEgmentation (CAUSE), which leverages insights from causal inference.
arXiv Detail & Related papers (2023-10-11T10:54:44Z)
- Unsupervised Chunking with Hierarchical RNN [62.15060807493364]
This paper introduces an unsupervised approach to chunking, a syntactic task that involves grouping words in a non-hierarchical manner.
We present a two-layer Hierarchical Recurrent Neural Network (HRNN) designed to model word-to-chunk and chunk-to-sentence compositions.
Experiments on the CoNLL-2000 dataset reveal a notable improvement over existing unsupervised methods, enhancing phrase F1 score by up to 6 percentage points.
arXiv Detail & Related papers (2023-09-10T02:55:12Z)
- Cascading and Direct Approaches to Unsupervised Constituency Parsing on Spoken Sentences [67.37544997614646]
We present the first study on unsupervised spoken constituency parsing.
The goal is to determine the spoken sentences' hierarchical syntactic structure in the form of constituency parse trees.
We show that accurate segmentation alone may be sufficient to parse spoken sentences accurately.
arXiv Detail & Related papers (2023-03-15T17:57:22Z)
- Phoneme Segmentation Using Self-Supervised Speech Models [13.956691231452336]
We apply transfer learning to the task of phoneme segmentation and demonstrate the utility of representations learned in self-supervised pre-training for the task.
Our model extends transformer-style encoders with strategically placed convolutions that manipulate features learned in pre-training.
arXiv Detail & Related papers (2022-11-02T19:57:31Z)
- Guiding the PLMs with Semantic Anchors as Intermediate Supervision: Towards Interpretable Semantic Parsing [57.11806632758607]
We propose to combine current pretrained language models with a hierarchical decoder network.
By taking the first-principle structures as the semantic anchors, we propose two novel intermediate supervision tasks.
We conduct intensive experiments on several semantic parsing benchmarks and demonstrate that our approach can consistently outperform the baselines.
arXiv Detail & Related papers (2022-10-04T07:27:29Z)
- Modelling continual learning in humans with Hebbian context gating and exponentially decaying task signals [4.205692673448206]
Humans can learn several tasks in succession with minimal mutual interference but perform more poorly when trained on multiple tasks at once.
We propose novel computational constraints for artificial neural networks, that capture the cost of interleaved training and allow the network to learn two tasks in sequence without forgetting.
We found that the "sluggish" units introduce a switch cost during training, biasing representations under interleaved training towards a joint representation that ignores the contextual cue. The Hebbian step, by contrast, promotes the formation of a gating scheme from task units to the hidden layer that produces representations which are perfectly guarded against interference.
arXiv Detail & Related papers (2022-03-22T09:32:06Z)
- Is Supervised Syntactic Parsing Beneficial for Language Understanding? An Empirical Investigation [71.70562795158625]
Traditional NLP has long held (supervised) syntactic parsing necessary for successful higher-level semantic language understanding (LU).
The recent advent of end-to-end neural models, self-supervised via language modeling (LM), and their success on a wide range of LU tasks question this belief.
We empirically investigate the usefulness of supervised parsing for semantic LU in the context of LM-pretrained transformer networks.
arXiv Detail & Related papers (2020-08-15T21:03:36Z)