Fact-based Text Editing
- URL: http://arxiv.org/abs/2007.00916v1
- Date: Thu, 2 Jul 2020 06:50:30 GMT
- Title: Fact-based Text Editing
- Authors: Hayate Iso, Chao Qiao, Hang Li
- Abstract summary: FactEditor edits a draft text by referring to given facts using a buffer, a stream, and a memory.
FactEditor conducts inference faster than the encoder-decoder approach.
- Score: 11.115292572080131
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a novel text editing task, referred to as \textit{fact-based text
editing}, in which the goal is to revise a given document to better describe
the facts in a knowledge base (e.g., several triples). The task is important in
practice because reflecting the truth is a common requirement in text editing.
First, we propose a method for automatically generating a dataset for research
on fact-based text editing, where each instance consists of a draft text, a
revised text, and several facts represented in triples. We apply the method
to two public table-to-text datasets, obtaining two new datasets consisting
of 233k and 37k instances, respectively. Next, we propose a new neural network
architecture for fact-based text editing, called \textsc{FactEditor}, which
edits a draft text by referring to given facts using a buffer, a stream, and a
memory. A straightforward approach to address the problem would be to employ an
encoder-decoder model. Our experimental results on the two datasets show that
\textsc{FactEditor} outperforms the encoder-decoder approach in terms of
fidelity and fluency. The results also show that \textsc{FactEditor} conducts
inference faster than the encoder-decoder approach.
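To make the task setup concrete, the sketch below shows what a single fact-based text editing instance could look like in code. It is a minimal illustration only: the `EditingInstance` container, its field names, and the example triples are assumptions made for exposition and are not taken from the paper or its released datasets.

```python
from dataclasses import dataclass
from typing import List, Tuple

# A fact is a (subject, predicate, object) triple from the knowledge base.
Triple = Tuple[str, str, str]

@dataclass
class EditingInstance:
    draft_text: str        # draft that may omit or contradict some facts
    facts: List[Triple]    # triples the revised text should faithfully reflect
    revised_text: str      # reference revision that agrees with the facts

# Hypothetical example (names and triples are illustrative, not taken from
# the 233k/37k instances described in the abstract).
example = EditingInstance(
    draft_text="Baymax was created by Duncan Rouleau.",
    facts=[
        ("Baymax", "creator", "Duncan_Rouleau"),
        ("Baymax", "series", "Big_Hero_6"),
    ],
    revised_text="Baymax, a character in Big Hero 6, was created by Duncan Rouleau.",
)

# A model such as FactEditor consumes (draft_text, facts) and is evaluated on
# how closely its output matches revised_text in fidelity and fluency.
```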
Related papers
- Leveraging Structure Knowledge and Deep Models for the Detection of Abnormal Handwritten Text [19.05500901000957]
We propose a two-stage detection algorithm that combines structure knowledge and deep models to detect abnormal handwritten text.
A shape regression network trained with a novel semi-supervised contrast training strategy is introduced, and the positional relationships between characters are fully exploited.
Experiments on two handwritten text datasets show that the proposed method can greatly improve the detection performance.
arXiv Detail & Related papers (2024-10-15T14:57:10Z)
- WikiIns: A High-Quality Dataset for Controlled Text Editing by Natural Language Instruction [56.196512595940334]
We build and release WikiIns, a high-quality controlled text editing dataset with improved informativeness.
With the high-quality annotated dataset, we propose automatic approaches to generate a large-scale "silver" training set.
arXiv Detail & Related papers (2023-10-08T04:46:39Z)
- FASTER: A Font-Agnostic Scene Text Editing and Rendering Framework [19.564048493848272]
Scene Text Editing (STE) is a challenging research problem that primarily aims at modifying existing text in an image.
Existing style-transfer-based approaches have shown sub-par editing performance due to complex image backgrounds, diverse font attributes, and varying word lengths within the text.
We propose a novel font-agnostic scene text editing and rendering framework, named FASTER, for simultaneously generating text in arbitrary styles and locations.
arXiv Detail & Related papers (2023-08-05T15:54:06Z)
- DreamEditor: Text-Driven 3D Scene Editing with Neural Fields [115.07896366760876]
We propose a novel framework that enables users to edit neural fields using text prompts.
DreamEditor generates highly realistic textures and geometry, significantly surpassing previous works in both quantitative and qualitative evaluations.
arXiv Detail & Related papers (2023-06-23T11:53:43Z)
- Exploring Stroke-Level Modifications for Scene Text Editing [86.33216648792964]
Scene text editing (STE) aims to replace text with the desired one while preserving the background and style of the original text.
Previous methods of editing the whole image have to learn different translation rules of background and text regions simultaneously.
We propose a novel network by MOdifying Scene Text image at strokE Level (MOSTEL).
arXiv Detail & Related papers (2022-12-05T02:10:59Z)
- Text Revision by On-the-Fly Representation Optimization [76.11035270753757]
Current state-of-the-art methods formulate text revision tasks as sequence-to-sequence learning problems.
We present an iterative in-place editing approach for text revision, which requires no parallel data.
It achieves competitive and even better performance than state-of-the-art supervised methods on text simplification.
arXiv Detail & Related papers (2022-04-15T07:38:08Z)
- Assessing the Effectiveness of Syntactic Structure to Learn Code Edit Representations [2.1793134762413433]
We use structural information from the Abstract Syntax Tree (AST) to represent source code edits.
Inspired by the code2seq approach, we evaluate how using structural information from AST can help with the task of code edit classification.
arXiv Detail & Related papers (2021-06-11T01:23:07Z)
- Learning Structural Edits via Incremental Tree Transformations [102.64394890816178]
We present a generic model for incremental editing of structured data (i.e., "structural edits").
Our editor learns to iteratively generate tree edits (e.g., deleting or adding a subtree) and applies them to the partially edited data.
We evaluate our proposed editor on two source code edit datasets, where results show that, with the proposed edit encoder, our editor significantly improves accuracy over previous approaches.
arXiv Detail & Related papers (2021-01-28T16:11:32Z)
- Text Editing by Command [82.50904226312451]
A prevailing paradigm in neural text generation is one-shot generation, where text is produced in a single step.
We address this limitation with an interactive text generation setting in which the user interacts with the system by issuing commands to edit existing text.
We show that our Interactive Editor, a transformer-based model trained on this dataset, outperforms baselines and obtains positive results in both automatic and human evaluations.
arXiv Detail & Related papers (2020-10-24T08:00:30Z)