Few-Shot Table-to-Text Generation with Prototype Memory
- URL: http://arxiv.org/abs/2108.12516v2
- Date: Tue, 31 Aug 2021 11:02:49 GMT
- Title: Few-Shot Table-to-Text Generation with Prototype Memory
- Authors: Yixuan Su, Zaiqiao Meng, Simon Baker, Nigel Collier
- Abstract summary: We propose a new framework: Prototype-to-Generate (P2G), for table-to-text generation under the few-shot scenario.
The proposed framework utilizes the retrieved prototypes, which are jointly selected by an IR system and a novel prototype selector.
Experimental results on three benchmark datasets with three state-of-the-art models demonstrate that the proposed framework significantly improves the model performance.
- Score: 14.69889589370148
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural table-to-text generation models have achieved remarkable progress on
an array of tasks. However, due to the data-hungry nature of neural models,
their performance relies heavily on large-scale training examples, limiting
their applicability in real-world settings. To address this, we propose a
new framework, Prototype-to-Generate (P2G), for table-to-text generation under
the few-shot scenario. The proposed framework utilizes the retrieved
prototypes, which are jointly selected by an IR system and a novel prototype
selector, to help the model bridge the structural gap between tables and
texts. Experimental results on three benchmark datasets with three
state-of-the-art models demonstrate that the proposed framework significantly
improves the model performance across various evaluation metrics.
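This listing carries no code, but the two-stage idea is easy to see in miniature. Below is a minimal Python sketch, not the authors' implementation: simple token overlap stands in for the IR system, and a coverage heuristic stands in for the learned prototype selector (all function names and example data are ours):

```python
from collections import Counter

def linearize_table(table: dict) -> str:
    """Flatten an attribute-value table into a token string."""
    return " ".join(f"{k} : {v}" for k, v in table.items())

def ir_retrieve(query: str, corpus: list, k: int = 10) -> list:
    """Stage 1 (stand-in for the IR system): rank training sentences
    by token overlap with the linearized table."""
    q = Counter(query.lower().split())
    overlap = lambda doc: sum((q & Counter(doc.lower().split())).values())
    return sorted(corpus, key=overlap, reverse=True)[:k]

def select_prototypes(query: str, candidates: list, n: int = 3) -> list:
    """Stage 2 (stand-in for the learned prototype selector): prefer
    candidates whose tokens are covered by the table content."""
    table_tokens = set(query.lower().split())
    def coverage(c: str) -> float:
        toks = c.lower().split()
        return sum(t in table_tokens for t in toks) / max(len(toks), 1)
    return sorted(candidates, key=coverage, reverse=True)[:n]

table = {"name": "arthur miller", "occupation": "playwright"}
corpus = [
    "arthur miller was an american playwright .",
    "the weather in london is rainy .",
    "she worked as a playwright in new york .",
]
query = linearize_table(table)
prototypes = select_prototypes(query, ir_retrieve(query, corpus))
print(prototypes)  # prototypes are then prepended to the generator input
```

In the paper the selected prototypes guide a pretrained generator; the sketch only reproduces the retrieve-then-select control flow.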
Related papers
- A Collaborative Ensemble Framework for CTR Prediction [73.59868761656317]
We propose a novel framework, Collaborative Ensemble Training Network (CETNet), to leverage multiple distinct models.
Unlike naive model scaling, our approach emphasizes model diversity and cross-model collaboration through a collaborative learning scheme.
We validate our framework on three public datasets and a large-scale industrial dataset from Meta.
arXiv Detail & Related papers (2024-11-20T20:38:56Z)
- Revisiting N-Gram Models: Their Impact in Modern Neural Networks for Handwritten Text Recognition [4.059708117119894]
This study addresses whether explicit language models, specifically n-gram models, still contribute to the performance of state-of-the-art deep learning architectures in the field of handwriting recognition.
We evaluate two prominent neural network architectures, PyLaia and DAN, with and without the integration of explicit n-gram language models.
The results show that incorporating character or subword n-gram models significantly improves the performance of automatic text recognition (ATR) models on all datasets.
arXiv Detail & Related papers (2024-04-30T07:37:48Z)
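As a rough illustration of how an explicit n-gram LM can be combined with a neural recognizer, here is a minimal character-level shallow-fusion sketch. It is generic, not the PyLaia or DAN integration, and the interpolation weight is arbitrary:

```python
import math
from collections import defaultdict

def train_char_ngram(texts: list, n: int = 3) -> dict:
    """Count character n-grams and their (n-1)-gram contexts."""
    counts, context = defaultdict(int), defaultdict(int)
    for t in texts:
        t = " " * (n - 1) + t          # pad the start of each line
        for i in range(n - 1, len(t)):
            counts[t[i - n + 1 : i + 1]] += 1
            context[t[i - n + 1 : i]] += 1
    return {"counts": counts, "context": context, "n": n}

def ngram_logprob(lm: dict, history: str, ch: str, vocab: int = 100) -> float:
    """Add-one-smoothed log P(ch | last n-1 characters of history)."""
    n = lm["n"]
    h = (" " * (n - 1) + history)[-(n - 1):]
    return math.log((lm["counts"][h + ch] + 1) / (lm["context"][h] + vocab))

def fused_score(neural_logprob: float, lm: dict, history: str, ch: str,
                lm_weight: float = 0.5) -> float:
    """Shallow fusion: neural log-prob plus weighted LM log-prob."""
    return neural_logprob + lm_weight * ngram_logprob(lm, history, ch)

lm = train_char_ngram(["the quick brown fox", "the lazy dog"])
# compare two hypotheses for the character following "th"
print(fused_score(-0.7, lm, "th", "e"), fused_score(-0.6, lm, "th", "q"))
```

During beam search, the fused score would replace the raw neural score at each step; the LM pulls decoding toward plausible character sequences.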
- Towards Robustness of Text-to-Visualization Translation against Lexical and Phrasal Variability [27.16741353384065]
Text-to-vis models often rely on lexical matching between words in the questions and tokens in data schemas.
In this study, we examine the robustness of current text-to-vis models, an area that has not previously been explored.
We propose a novel framework based on the Retrieval-Augmented Generation (RAG) technique, named GRED, specifically designed to address input perturbations in these two variants (lexical and phrasal).
arXiv Detail & Related papers (2024-04-10T16:12:50Z)
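The retrieval idea can be pictured with a toy nearest-neighbor lookup: a lexically perturbed question is matched to the closest stored example before a visualization spec is produced. This is a hypothetical sketch of the RAG idea, not GRED's actual pipeline (the examples and specs below are invented):

```python
def char_ngrams(s: str, n: int = 3) -> set:
    """Character trigrams, a cheap perturbation-tolerant representation."""
    s = s.lower()
    return {s[i : i + n] for i in range(len(s) - n + 1)}

def jaccard(a: set, b: set) -> float:
    return len(a & b) / max(len(a | b), 1)

examples = {  # invented (question, vis-spec) pairs
    "show average price per year as a bar chart": "BAR x=year y=avg(price)",
    "plot total sales by region as a pie chart": "PIE x=region y=sum(sales)",
}

def retrieve_spec(question: str) -> str:
    """Return the vis spec of the most similar stored question."""
    q = char_ngrams(question)
    best = max(examples, key=lambda e: jaccard(q, char_ngrams(e)))
    return examples[best]

# a paraphrased ("perturbed") question still lands on the right template
print(retrieve_spec("display the mean price for each year in a bar graph"))
```

Character n-grams are chosen here because they degrade gracefully under lexical variation; a real system would retrieve with learned embeddings and feed the retrieved example into the generator's prompt.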
- Grounding and Enhancing Grid-based Models for Neural Fields [52.608051828300106]
This paper introduces a theoretical framework for grid-based models.
The framework points out that these models' approximation and generalization behaviors are determined by grid tangent kernels (GTK).
The introduced framework motivates the development of a novel grid-based model named the Multiplicative Fourier Adaptive Grid (MulFAGrid).
arXiv Detail & Related papers (2024-03-29T06:33:13Z)
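For orientation, the grid tangent kernel plays the same role for grid-based models that the neural tangent kernel plays for networks; schematically (our notation, not necessarily the paper's exact definition):

```latex
% Schematic GTK, by analogy with the NTK; f_\theta is the grid-based
% model and \theta its grid parameters (notation ours).
K_{\mathrm{GTK}}(x, x') =
  \big\langle \nabla_{\theta} f_{\theta}(x),\;
              \nabla_{\theta} f_{\theta}(x') \big\rangle
```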
- A Two-Phase Recall-and-Select Framework for Fast Model Selection [13.385915962994806]
We propose a two-phase (coarse-recall and fine-selection) model selection framework.
It aims to enhance the efficiency of selecting a robust model by leveraging the models' training performance on benchmark datasets.
Experiments demonstrate that the proposed methodology selects a high-performing model about 3x faster than conventional baseline methods.
arXiv Detail & Related papers (2024-03-28T14:44:44Z)
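A compressed view of the two phases, with made-up model names and scores; the real framework's recall and evaluation criteria are considerably richer:

```python
benchmark_scores = {      # phase 1 input: cached benchmark performance
    "model_a": 0.81, "model_b": 0.79, "model_c": 0.55, "model_d": 0.77,
}

def coarse_recall(scores: dict, top_k: int = 3) -> list:
    """Phase 1: cheap filtering using pre-computed benchmark scores."""
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

def fine_select(candidates: list, evaluate) -> str:
    """Phase 2: expensive target-task evaluation (e.g. fine-tuning),
    run only for the few recalled candidates."""
    return max(candidates, key=evaluate)

# stand-in for a costly target-task evaluation
target_task_score = {"model_a": 0.62, "model_b": 0.71, "model_d": 0.66}.get
best = fine_select(coarse_recall(benchmark_scores), target_task_score)
print(best)  # model_b
```

The speedup comes from the asymmetry: phase 1 touches every model but only reads cached scores, while the expensive phase 2 runs on a handful of survivors.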
- Few-Shot Data-to-Text Generation via Unified Representation and Multi-Source Learning [114.54944761345594]
We present a novel approach for structured data-to-text generation that addresses the limitations of existing methods.
Our proposed method aims to improve performance in multi-task training, zero-shot and few-shot scenarios.
arXiv Detail & Related papers (2023-08-10T03:09:12Z)
- Controllable Text Generation with Neurally-Decomposed Oracle [91.18959622763055]
We propose a framework to control auto-regressive generation models with a NeurAlly-Decomposed Oracle (NADO).
We present a closed-form optimal solution to incorporate the token-level guidance into the base model for controllable generation.
arXiv Detail & Related papers (2022-05-27T20:17:53Z)
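Modulo notation, the closed-form guidance has a simple shape: the base model's next-token distribution is reweighted by the decomposed oracle's step-wise estimates (our paraphrase; see the paper for the exact derivation):

```latex
% Shape of the token-level guidance (notation ours): R^C denotes the
% neurally-decomposed approximation of the sequence-level oracle.
q(x_t \mid x_{<t}) \;\propto\;
  p_{\text{base}}(x_t \mid x_{<t}) \,
  \frac{R^{C}(x_{<t}, x_t)}{R^{C}(x_{<t})}
```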
- Plan-then-Generate: Controlled Data-to-Text Generation via Planning [11.127156275580305]
We propose a novel Plan-then-Generate (PlanGen) framework to improve the controllability of neural data-to-text models.
Our model is able to control both the intra-sentence and inter-sentence structure of the generated output.
arXiv Detail & Related papers (2021-08-31T10:53:32Z)
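The controllability claim can be pictured with a toy realizer: a content plan fixes the order in which table fields are verbalized. This is our illustration only; PlanGen conditions a neural generator on the plan rather than using templates:

```python
table = {"name": "emma stone", "birth_year": "1988", "occupation": "actress"}

def generate(table: dict, plan: list) -> str:
    """Realize table fields in the order the content plan dictates."""
    realizers = {   # hypothetical per-field surface templates
        "name": lambda t: t["name"].title(),
        "birth_year": lambda t: f"born in {t['birth_year']}",
        "occupation": lambda t: f"is an {t['occupation']}",
    }
    return " , ".join(realizers[slot](table) for slot in plan) + " ."

# two plans -> two differently structured outputs from the same table
print(generate(table, ["name", "occupation", "birth_year"]))
print(generate(table, ["name", "birth_year", "occupation"]))
```

Swapping the plan reorders the output without touching the table, which is exactly the intra-sentence control the summary describes.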
- Few-Shot Named Entity Recognition: A Comprehensive Study [92.40991050806544]
We investigate three schemes to improve the model generalization ability for few-shot settings.
We perform empirical comparisons on 10 public NER datasets with various proportions of labeled data.
We establish new state-of-the-art results in both few-shot and training-free settings.
arXiv Detail & Related papers (2020-12-29T23:43:16Z)
- Learning Contextual Representations for Semantic Parsing with Generation-Augmented Pre-Training [86.91380874390778]
We present Generation-Augmented Pre-training (GAP), which jointly learns representations of natural language utterances and table schemas by leveraging generation models to generate pre-training data.
Based on experimental results, neural semantic parsers that leverage the GAP model obtain new state-of-the-art results on both the SPIDER and CRITERIA-TO-SQL benchmarks.
arXiv Detail & Related papers (2020-12-18T15:53:50Z)
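The data-synthesis step behind that pre-training can be sketched as follows: a generation model (stubbed out here with string rewriting) turns logical forms into utterances, producing synthetic utterance-schema pairs. The rules and names are ours, purely illustrative:

```python
def sql_to_text_stub(sql: str) -> str:
    """Stand-in for a learned SQL-to-text generation model."""
    return (sql.lower()
               .replace("select", "show")
               .replace("from", "of the")
               .replace("where", "with"))

schemas = {"singer": ["name", "age", "country"]}
sql_queries = ["SELECT name FROM singer WHERE age > 30"]

# synthetic (utterance, schema, sql) triples for pre-training
pretrain_data = [
    (sql_to_text_stub(q), schemas["singer"], q) for q in sql_queries
]
print(pretrain_data[0][0])  # "show name of the singer with age > 30"
```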