Effidit: Your AI Writing Assistant
- URL: http://arxiv.org/abs/2208.01815v2
- Date: Thu, 4 Aug 2022 12:13:43 GMT
- Title: Effidit: Your AI Writing Assistant
- Authors: Shuming Shi, Enbo Zhao, Duyu Tang, Yan Wang, Piji Li, Wei Bi, Haiyun
Jiang, Guoping Huang, Leyang Cui, Xinting Huang, Cong Zhou, Yong Dai,
Dongyang Ma
- Abstract summary: Effidit is a digital writing assistant that helps users write higher-quality text more efficiently by using artificial intelligence (AI) technologies.
In Effidit, we significantly expand the capacities of a writing assistant by providing functions in five categories: text completion, error checking, text polishing, keywords to sentences (K2S), and cloud input methods (cloud IME).
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this technical report, we introduce Effidit (Efficient and Intelligent
Editing), a digital writing assistant that helps users write higher-quality
text more efficiently by using artificial intelligence (AI)
technologies. Previous writing assistants typically provide the function of
error checking (to detect and correct spelling and grammatical errors) and
limited text-rewriting functionality. With the emergence of large-scale neural
language models, some systems support automatically completing a sentence or a
paragraph. In Effidit, we significantly expand the capacities of a writing
assistant by providing functions in five categories: text completion, error
checking, text polishing, keywords to sentences (K2S), and cloud input methods
(cloud IME). In the text completion category, Effidit supports generation-based
sentence completion, retrieval-based sentence completion, and phrase
completion. In contrast, many other writing assistants so far provide only one
or two of these three functions. For text polishing, we have three functions:
(context-aware) phrase polishing, sentence paraphrasing, and sentence
expansion, whereas many other writing assistants often support one or two
functions in this category. The main contents of this report include major
modules of Effidit, methods for implementing these modules, and evaluation
results of some key methods.
Related papers
- OmniParser: A Unified Framework for Text Spotting, Key Information Extraction and Table Recognition
We propose a unified paradigm for parsing visually-situated text across diverse scenarios.
Specifically, we devise a universal model, called Omni, which can simultaneously handle three typical visually-situated text parsing tasks.
In Omni, all tasks share the unified encoder-decoder architecture, the unified objective point-conditioned text generation, and the unified input representation.
arXiv Detail & Related papers (2024-03-28T03:51:14Z)
- Speed Reading Tool Powered by Artificial Intelligence for Students with ADHD, Dyslexia, or Short Attention Span
This paper presents a novel approach to assist students with dyslexia, ADHD, and short attention span in digesting text-based information more efficiently.
The proposed solution utilizes the Multilayer Perceptron (MLP) algorithm for complex text processing and summarization tasks.
The paper discusses the methodology, implementation, and results of the AI-based speed reading tool.
arXiv Detail & Related papers (2023-07-26T23:47:14Z)
- Are the Best Multilingual Document Embeddings simply Based on Sentence Embeddings?
We provide a systematic comparison of methods to produce document-level representations from sentences based on LASER, LaBSE, and Sentence BERT pre-trained multilingual models.
We show that a clever combination of sentence embeddings is usually better than encoding the full document as a single unit.
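The idea of combining sentence embeddings into a document representation can be sketched as follows. This is a toy illustration, not the authors' code: `embed_sentence` is a hypothetical stand-in for a pre-trained multilingual encoder such as LASER, LaBSE, or Sentence-BERT, and mean pooling is just one simple combination strategy.

```python
import numpy as np

def embed_sentence(sentence: str, dim: int = 8) -> np.ndarray:
    """Hypothetical stand-in for a pre-trained sentence encoder
    (e.g. LASER or LaBSE); here a deterministic hash-seeded stub."""
    rng = np.random.default_rng(abs(hash(sentence)) % (2**32))
    return rng.standard_normal(dim)

def embed_document(sentences: list[str]) -> np.ndarray:
    """Mean-pool per-sentence vectors into one document vector --
    one simple 'combination of sentence embeddings' that the paper
    compares against encoding the full document as a single unit."""
    vecs = np.stack([embed_sentence(s) for s in sentences])
    return vecs.mean(axis=0)

doc = ["Effidit is a writing assistant.", "It supports text completion."]
vec = embed_document(doc)
print(vec.shape)  # (8,)
```

More elaborate combinations (e.g. weighted pooling) fit the same interface by changing only the reduction step.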
arXiv Detail & Related papers (2023-04-28T12:11:21Z)
- Training Effective Neural Sentence Encoders from Automatically Mined Paraphrases
We propose a method for training effective language-specific sentence encoders without manually labeled data.
Our approach is to automatically construct a dataset of paraphrase pairs from sentence-aligned bilingual text corpora.
Our sentence encoder can be trained in less than a day on a single graphics card, achieving high performance on a diverse set of sentence-level tasks.
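The mining step can be sketched like this (a toy illustration, not the authors' pipeline): if two English sentences are aligned to the same foreign-language pivot sentence in sentence-aligned bilingual corpora, they are treated as a candidate paraphrase pair.

```python
from collections import defaultdict

# Two sentence-aligned bilingual corpora as (foreign, english) pairs.
# Toy data; real inputs would be large parallel corpora.
corpus_a = [("ich bin müde", "I am tired"), ("es regnet", "It is raining")]
corpus_b = [("ich bin müde", "I'm tired"), ("es schneit", "It is snowing")]

def mine_paraphrases(*corpora):
    """Group English sides by their shared foreign pivot sentence;
    any two distinct English sentences with the same pivot form
    a candidate paraphrase pair for training the encoder."""
    by_pivot = defaultdict(set)
    for corpus in corpora:
        for foreign, english in corpus:
            by_pivot[foreign].add(english)
    pairs = []
    for sides in by_pivot.values():
        sides = sorted(sides)
        for i in range(len(sides)):
            for j in range(i + 1, len(sides)):
                pairs.append((sides[i], sides[j]))
    return pairs

print(mine_paraphrases(corpus_a, corpus_b))
# → [('I am tired', "I'm tired")]
```

The mined pairs then serve as positive examples for contrastive training, with no manual labeling required.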
arXiv Detail & Related papers (2022-07-26T09:08:56Z)
- CoAuthor: Designing a Human-AI Collaborative Writing Dataset for Exploring Language Model Capabilities
We present CoAuthor, a dataset designed for revealing GPT-3's capabilities in assisting creative and argumentative writing.
We demonstrate that CoAuthor can address questions about GPT-3's language, ideation, and collaboration capabilities.
We discuss how this work may facilitate a more principled discussion around LMs' promises and pitfalls in relation to interaction design.
arXiv Detail & Related papers (2022-01-18T07:51:57Z)
- IGA: An Intent-Guided Authoring Assistant
We leverage advances in language modeling to build an interactive writing assistant that generates and rephrases text according to author specifications.
Users provide input to our Intent-Guided Assistant (IGA) in the form of text interspersed with tags that correspond to specific rhetorical directives.
We fine-tune a language model on a dataset labeled with author intent, which allows IGA to fill in these tags with generated text that users can subsequently edit to their liking.
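The tag-filling workflow can be sketched as below. The bracketed tag syntax and the `generate` callback are hypothetical illustrations; IGA's actual tag set and fine-tuned model are defined in the paper.

```python
import re

# Illustrative tag syntax: [directive] or [directive:argument].
TAG_PATTERN = re.compile(r"(\[[a-z]+(?::[^\]]*)?\])")

def split_on_tags(text: str) -> list[str]:
    """Split author input into plain-text spans and directive tags,
    keeping the tags (via the capturing group in re.split)."""
    return [p for p in TAG_PATTERN.split(text) if p]

def fill_tags(text: str, generate) -> str:
    """Replace each directive tag with text from a generator
    callback, standing in for the fine-tuned language model."""
    out = []
    for part in split_on_tags(text):
        if part.startswith("[") and part.endswith("]"):
            out.append(generate(part))
        else:
            out.append(part)
    return "".join(out)

draft = "The results were strong. [elaborate] We conclude that it works."
print(fill_tags(draft, lambda tag: "(generated text)"))
```

In the real system the callback would condition the language model on both the surrounding text and the rhetorical directive named by the tag.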
arXiv Detail & Related papers (2021-04-14T17:32:21Z)
- Narrative Incoherence Detection
We propose the task of narrative incoherence detection as a new arena for inter-sentential semantic understanding.
Given a multi-sentence narrative, the task is to decide whether any semantic discrepancies exist in the narrative flow.
arXiv Detail & Related papers (2020-12-21T07:18:08Z)
- Dialogue Generation on Infrequent Sentence Functions via Structured Meta-Learning
Sentence function is an important linguistic feature indicating the communicative purpose of an utterance.
Incorporating sentence functions into conversations has been shown to improve the quality of generated responses.
However, the numbers of utterances for different fine-grained sentence functions are extremely imbalanced.
arXiv Detail & Related papers (2020-10-04T07:13:36Z)
- Enabling Language Models to Fill in the Blanks
We present a simple approach for text infilling, the task of predicting missing spans of text at any position in a document.
We train (or fine-tune) off-the-shelf language models on sequences containing the concatenation of artificially-masked text and the text which was masked.
We show that this approach, which we call infilling by language modeling, can enable LMs to infill entire sentences effectively on three different domains: short stories, scientific abstracts, and lyrics.
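The construction of one training sequence can be sketched as follows. The special-token strings (`[blank]`, `[sep]`, `[answer]`) are placeholders; the paper's exact token vocabulary may differ.

```python
def make_infilling_example(tokens, mask_spans,
                           blank="[blank]", sep="[sep]", answer="[answer]"):
    """Build one 'infilling by language modeling' training string:
    the input with masked spans replaced by blank tokens, then a
    separator, then each masked span followed by an answer marker.
    mask_spans are sorted, non-overlapping (start, end) token indices."""
    masked, answers = [], []
    i = 0
    for start, end in mask_spans:
        masked.extend(tokens[i:start])
        masked.append(blank)
        answers.extend(tokens[start:end] + [answer])
        i = end
    masked.extend(tokens[i:])
    return " ".join(masked + [sep] + answers)

tokens = "She ate leftover pasta for lunch".split()
print(make_infilling_example(tokens, [(2, 4), (5, 6)]))
# prints: She ate [blank] for [blank] [sep] leftover pasta [answer] lunch [answer]
```

An off-the-shelf language model fine-tuned on such sequences learns to emit the answer segments after the separator, which at inference time amounts to filling the blanks.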
arXiv Detail & Related papers (2020-05-11T18:00:03Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.