Attend, Memorize and Generate: Towards Faithful Table-to-Text Generation in Few Shots
- URL: http://arxiv.org/abs/2203.00732v1
- Date: Tue, 1 Mar 2022 20:37:20 GMT
- Title: Attend, Memorize and Generate: Towards Faithful Table-to-Text Generation in Few Shots
- Authors: Wenting Zhao, Ye Liu, Yao Wan, Philip S. Yu
- Abstract summary: Few-shot table-to-text generation is a task of composing fluent and faithful sentences to convey table content using limited data.
This paper proposes a novel approach, Attend, Memorize and Generate (AMG), inspired by the text generation process of humans.
- Score: 58.404516361586325
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Few-shot table-to-text generation is the task of composing fluent and faithful sentences that convey table content using limited data. Despite many efforts to generate impressively fluent sentences by fine-tuning powerful pre-trained language models, the faithfulness of the generated content still needs to be improved. To this end, this paper proposes a novel approach, Attend, Memorize and Generate (AMG), inspired by the text generation process of humans. In particular, AMG (1) attends over the multi-granularity of context using a novel strategy that combines table-slot-level and traditional token-by-token attention, exploiting both the table structure and natural linguistic information; (2) dynamically memorizes the table slot allocation states; and (3) generates faithful sentences according to both the context and the memory allocation states. Comprehensive experiments with human evaluation on three domains (i.e., humans, songs, and books) of the Wiki dataset show that our model generates higher-quality texts than several state-of-the-art baselines, in terms of both fluency and faithfulness.
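As a concrete illustration of the attend-memorize-generate loop described above, here is a minimal, hypothetical numpy sketch of one decoding step. It mixes token-by-token attention with slot-level attention and keeps a running memory of how much each table slot has already been expressed; the function name `amg_attend_step`, the 0.5 mixing weight, and the fixed memory penalty are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

def softmax(x):
    x = x - x.max()
    e = np.exp(x)
    return e / e.sum()

def amg_attend_step(query, token_keys, slot_ids, slot_used,
                    mix=0.5, memory_penalty=5.0):
    """One hypothetical decoding step: attend at two granularities, then
    update the slot-allocation memory (a sketch, not the paper's model).

    query:      (d,)   current decoder hidden state
    token_keys: (n, d) encoded table tokens
    slot_ids:   (n,)   slot index per token; tokens of one cell share a slot
                       (assumes every slot owns at least one token)
    slot_used:  (k,)   running memory of attention mass spent per slot
    """
    # Token-by-token attention: standard dot-product scores.
    token_scores = token_keys @ query                  # (n,)
    token_attn = softmax(token_scores)

    # Slot-level attention: pool scores per slot, penalizing slots the
    # memory marks as already expressed in the output.
    k = slot_used.shape[0]
    slot_scores = np.array([token_scores[slot_ids == s].mean()
                            for s in range(k)])
    slot_attn = softmax(slot_scores - memory_penalty * slot_used)

    # Broadcast slot weights back to tokens (split evenly inside a slot)
    # and mix the two granularities into one distribution over tokens.
    counts = np.bincount(slot_ids, minlength=k)
    per_token_slot = slot_attn[slot_ids] / counts[slot_ids]
    attn = mix * token_attn + (1.0 - mix) * per_token_slot

    # Memorize: accumulate the attention mass each slot just received.
    new_used = slot_used + np.bincount(slot_ids, weights=attn, minlength=k)

    context = attn @ token_keys                        # (d,) fed to generation
    return context, attn, np.clip(new_used, 0.0, 1.0)

# Toy usage: 4 table tokens in 2 slots (e.g., "name" and "birthplace").
rng = np.random.default_rng(0)
keys = rng.normal(size=(4, 8))
ctx, attn, used = amg_attend_step(rng.normal(size=8), keys,
                                  np.array([0, 0, 1, 1]), np.zeros(2))
print(attn.round(3), used.round(3))
```

Down-weighting slots the memory marks as covered is one simple reading of "memorizes the table slot allocation states"; the actual model learns this behaviour end to end rather than applying a fixed penalty.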
Related papers
- Exploration of Masked and Causal Language Modelling for Text Generation [6.26998839917804]
This paper conducts an extensive comparison of Masked and Causal Language Modelling approaches for text generation tasks.
We first employ quantitative metrics and then perform a qualitative human evaluation to analyse coherence and grammatical correctness.
The results show that MLM consistently outperforms CLM in text generation across all datasets.
arXiv Detail & Related papers (2024-05-21T09:33:31Z)
- Adapting Knowledge for Few-shot Table-to-Text Generation [35.59842534346997]
We propose a novel framework: Adapt-Knowledge-to-Generate (AKG).
AKG adapts unlabeled domain-specific knowledge into the model, which brings at least three benefits.
Our model achieves superior performance in terms of both fluency and accuracy as judged by human and automatic evaluations.
arXiv Detail & Related papers (2023-02-24T05:48:53Z)
- Few-Shot Table-to-Text Generation with Prompt Planning and Knowledge Memorization [41.20314472839442]
We suggest a new framework: PromptMize, which targets table-to-text generation under few-shot settings.
The design of our framework consists of two aspects: a prompt planner and a knowledge adapter.
Our model achieves remarkable performance in generation quality as judged by human and automatic evaluations.
arXiv Detail & Related papers (2023-02-09T03:04:11Z)
- Towards Table-to-Text Generation with Pretrained Language Model: A Table Structure Understanding and Text Deliberating Approach [60.03002572791552]
We propose a table structure understanding and text deliberating approach, namely TASD.
Specifically, we devise a three-layered multi-head attention network to realize the table-structure-aware text generation model.
Our approach can generate faithful and fluent descriptive texts for different types of tables.
arXiv Detail & Related papers (2023-01-05T14:03:26Z)
- On Advances in Text Generation from Images Beyond Captioning: A Case Study in Self-Rationalization [89.94078728495423]
We show that recent advances in each modality, namely CLIP image representations and the scaling of language models, do not consistently improve self-rationalization on tasks with multimodal inputs.
Our findings call for a backbone modelling approach that can be built on to advance text generation from images and text beyond image captioning.
arXiv Detail & Related papers (2022-05-24T00:52:40Z)
- Data-to-text Generation with Variational Sequential Planning [74.3955521225497]
We consider the task of data-to-text generation, which aims to create textual output from non-linguistic input.
We propose a neural model enhanced with a planning component responsible for organizing high-level information in a coherent and meaningful way.
We infer latent plans sequentially with a structured variational model, while interleaving the steps of planning and generation.
arXiv Detail & Related papers (2022-02-28T13:17:59Z)
- Long Text Generation by Modeling Sentence-Level and Discourse-Level Coherence [59.51720326054546]
We propose a long text generation model, which can represent the prefix sentences at sentence level and discourse level in the decoding process.
Our model can generate more coherent texts than state-of-the-art baselines.
arXiv Detail & Related papers (2021-05-19T07:29:08Z)
- Towards Faithful Neural Table-to-Text Generation with Content-Matching Constraints [63.84063384518667]
We propose a novel Transformer-based generation framework to achieve faithful table-to-text generation.
Core techniques in our method to enforce faithfulness include a new table-text optimal-transport matching loss (see the sketch after this list).
To evaluate faithfulness, we propose a new automatic metric specialized to the table-to-text generation problem.
arXiv Detail & Related papers (2020-05-03T02:54:26Z)
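The "table-text optimal-transport matching loss" in the last item above can be made concrete with entropy-regularized optimal transport, computed by Sinkhorn iterations. The sketch below is a generic version under assumed choices (a cosine-distance cost, uniform marginals, the name `ot_matching_loss`), not that paper's exact formulation; in training it would be computed on differentiable embeddings and added to the generation loss.

```python
import numpy as np

def ot_matching_loss(table_emb, text_emb, eps=0.1, n_iter=100):
    """Entropy-regularized optimal-transport distance between table and
    text embeddings (a generic Sinkhorn sketch, not the paper's exact loss).

    table_emb: (n, d) embeddings of table records
    text_emb:  (m, d) embeddings of generated-text tokens
    """
    # Cosine-distance cost matrix between every record/token pair.
    tn = table_emb / np.linalg.norm(table_emb, axis=1, keepdims=True)
    xn = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    cost = 1.0 - tn @ xn.T                    # (n, m), values in [0, 2]

    # Uniform marginals: every record / token carries equal mass.
    a = np.full(cost.shape[0], 1.0 / cost.shape[0])
    b = np.full(cost.shape[1], 1.0 / cost.shape[1])

    # Sinkhorn iterations for the regularized transport plan.
    K = np.exp(-cost / eps)
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)
        u = a / (K @ v)
    plan = u[:, None] * K * v[None, :]        # rows sum to a, columns to b

    return float((plan * cost).sum())         # small when content matches

# Toy usage: text whose embeddings track the table should score lower.
rng = np.random.default_rng(0)
table = rng.normal(size=(3, 16))
faithful = table + 0.05 * rng.normal(size=(3, 16))  # text close to the table
unrelated = rng.normal(size=(4, 16))                # text ignoring the table
print(ot_matching_loss(table, faithful), ot_matching_loss(table, unrelated))
```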