Logic-Consistency Text Generation from Semantic Parses
- URL: http://arxiv.org/abs/2108.00577v1
- Date: Mon, 2 Aug 2021 01:12:18 GMT
- Title: Logic-Consistency Text Generation from Semantic Parses
- Authors: Chang Shu, Yusen Zhang, Xiangyu Dong, Peng Shi, Tao Yu, Rui Zhang
- Abstract summary: This paper first proposes SNOWBALL, a framework for logic-consistent text generation from semantic parses.
Second, we propose a novel automatic metric, BLEC, for evaluating the logical consistency between semantic parses and generated texts.
- Score: 32.543257899910216
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Text generation from semantic parses aims to produce textual descriptions for
formal representation inputs such as logical forms and SQL queries. This is
challenging for two reasons: (1) the complex and dense internal logic of the inputs,
compounded by data scarcity, and (2) the lack of automatic evaluation metrics for
logic consistency. To address these two challenges, this paper first proposes
SNOWBALL, a framework for logic-consistent text generation from semantic parses
that employs an iterative training procedure, recursively augmenting the
training set under quality control. Second, we propose a novel automatic metric,
BLEC, for evaluating the logical consistency between semantic parses and
generated texts. Experimental results on two benchmark datasets, Logic2Text
and Spider, demonstrate that the SNOWBALL framework improves logic consistency
under both BLEC and human evaluation. Furthermore, our statistical analysis
reveals that BLEC agrees with human evaluation more closely than
general-purpose automatic metrics, including BLEU, ROUGE, and BLEURT. Our data
and code are available at https://github.com/Ciaranshu/relogic.
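The abstract describes BLEC only as a rule-based check of logical consistency between a semantic parse and a generated text. A minimal sketch of a metric in that spirit is given below; the function name `blec_like_consistent`, the `CUE_WORDS` table, and the matching rules are illustrative assumptions, not the metric's actual definition.

```python
import re

# Map logical operators that may appear in a parse to surface cue words
# expected in a faithful textual description. (Hypothetical table.)
CUE_WORDS = {
    "max": ["most", "highest", "maximum", "largest"],
    "min": ["least", "lowest", "minimum", "smallest"],
    "count": ["number", "count", "how many"],
}

def blec_like_consistent(parse: str, text: str) -> bool:
    """Return True if every literal number and every known operator in the
    parse is echoed by the generated text."""
    text_lower = text.lower()
    # Rule 1: every literal number in the parse must appear in the text.
    for num in re.findall(r"\d+(?:\.\d+)?", parse):
        if num not in text_lower:
            return False
    # Rule 2: every known operator must be expressed by one of its cues.
    for op, cues in CUE_WORDS.items():
        if re.search(rf"\b{op}\b", parse.lower()):
            if not any(cue in text_lower for cue in cues):
                return False
    return True
```

A check of this kind rewards texts that echo every number and operator of the parse, which suggests why a targeted rule-based metric can track logical consistency more tightly than n-gram overlap metrics such as BLEU or ROUGE.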
Related papers
- H-STAR: LLM-driven Hybrid SQL-Text Adaptive Reasoning on Tables [56.73919743039263]
This paper introduces a novel algorithm that integrates both symbolic and semantic (textual) approaches in a two-stage process to address the limitations of each approach alone.
Our experiments demonstrate that H-STAR significantly outperforms state-of-the-art methods across three question-answering (QA) and fact-verification datasets.
arXiv Detail & Related papers (2024-06-29T21:24:19Z) - MURMUR: Modular Multi-Step Reasoning for Semi-Structured Data-to-Text
Generation [102.20036684996248]
We propose MURMUR, a neuro-symbolic modular approach to text generation from semi-structured data with multi-step reasoning.
We conduct experiments on two data-to-text generation tasks, WebNLG and LogicNLG.
arXiv Detail & Related papers (2022-12-16T17:36:23Z) - Investigating the Robustness of Natural Language Generation from Logical
Forms via Counterfactual Samples [30.079030298066847]
State-of-the-art methods based on pre-trained models have achieved remarkable performance on the standard test dataset.
We question whether these methods really learn how to perform logical reasoning, rather than just relying on the spurious correlations between the headers of the tables and operators of the logical form.
We propose two approaches to reduce the model's reliance on the shortcut.
arXiv Detail & Related papers (2022-10-16T14:14:53Z) - The Whole Truth and Nothing But the Truth: Faithful and Controllable
Dialogue Response Generation with Dataflow Transduction and Constrained
Decoding [65.34601470417967]
We describe a hybrid architecture for dialogue response generation that combines the strengths of neural language modeling and rule-based generation.
Our experiments show that this system outperforms both rule-based and learned approaches in human evaluations of fluency, relevance, and truthfulness.
arXiv Detail & Related papers (2022-09-16T09:00:49Z) - PLOG: Table-to-Logic Pretraining for Logical Table-to-Text Generation [44.78200830757109]
We propose a PLOG (Pretrained Logical Form Generator) framework to improve the generation fidelity.
PLOG is first pretrained on a table-to-logic-form generation task, then finetuned on downstream table-to-text tasks.
PLOG can learn logical inference from table-logic pairs far more reliably than from table-text pairs.
arXiv Detail & Related papers (2022-05-25T11:55:54Z) - MERIt: Meta-Path Guided Contrastive Learning for Logical Reasoning [63.50909998372667]
We propose MERIt, a MEta-path guided contrastive learning method for logical ReasonIng of text.
Two novel strategies serve as indispensable components of our method.
arXiv Detail & Related papers (2022-03-01T11:13:00Z) - Improving Logical-Level Natural Language Generation with
Topic-Conditioned Data Augmentation and Logical Form Generation [18.93964332724296]
We propose a topic-conditioned data augmentation (TopicDA) to generate logical forms and textual descriptions directly from tables.
We introduce logical form generation (LG), a dual task of Logic2Text that requires generating a valid logical form based on a text description of a table.
We also propose a semi-supervised learning approach to jointly train a Logic2text and an LG model with both labeled and augmented data.
arXiv Detail & Related papers (2021-12-12T13:50:18Z) - LOGEN: Few-shot Logical Knowledge-Conditioned Text Generation with
Self-training [76.90793623822866]
We propose a unified framework for logical knowledge-conditioned text generation in the few-shot setting.
Our approach leverages self-training and samples pseudo logical forms based on content and structure consistency.
arXiv Detail & Related papers (2021-12-02T16:49:41Z) - Logic-Driven Context Extension and Data Augmentation for Logical
Reasoning of Text [65.24325614642223]
We propose to interpret the logical symbols and expressions in the text in order to arrive at the answer.
Based on such logical information, we put forward a context extension framework and a data augmentation algorithm.
Our method achieves state-of-the-art performance, and both the logic-driven context extension framework and the data augmentation algorithm help improve accuracy.
arXiv Detail & Related papers (2021-05-08T10:09:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.