The Impact of Multiple Parallel Phrase Suggestions on Email Input and Composition Behaviour of Native and Non-Native English Writers
- URL: http://arxiv.org/abs/2101.09157v1
- Date: Fri, 22 Jan 2021 15:32:32 GMT
- Title: The Impact of Multiple Parallel Phrase Suggestions on Email Input and Composition Behaviour of Native and Non-Native English Writers
- Authors: Daniel Buschek, Martin Zürn, Malin Eiband
- Abstract summary: We built a text editor prototype with a neural language model (GPT-2), refined in a prestudy with 30 people.
In an online study (N=156), people composed emails in four conditions (0/1/3/6 parallel suggestions).
Our results reveal (1) benefits for ideation, and costs for efficiency, when suggesting multiple phrases; (2) that non-native speakers benefit more from more suggestions; and (3) further insights into behaviour patterns.
- Score: 15.621144215664767
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present an in-depth analysis of the impact of multi-word suggestion
choices from a neural language model on user behaviour regarding input and text
composition in email writing. Our study is the first to compare different
numbers of parallel suggestions, and their use by native and non-native English
writers, to explore a trade-off of "efficiency vs ideation" emerging from
recent literature. We built a text editor prototype with a neural language
model (GPT-2), refined in a prestudy with 30 people. In an online study
(N=156), people composed emails in four conditions (0/1/3/6 parallel
suggestions). Our results reveal (1) benefits for ideation, and costs for
efficiency, when suggesting multiple phrases; (2) that non-native speakers
benefit more from more suggestions; and (3) further insights into behaviour
patterns. We discuss implications for research, the design of interactive
suggestion systems, and the vision of supporting writers with AI instead of
replacing them.
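As a concrete illustration of the interaction setup, the sketch below shows how a set of parallel phrase suggestions could be drawn from an off-the-shelf GPT-2 model via the Hugging Face transformers library. This is a minimal, assumed re-creation rather than the authors' actual prototype; the function name suggest_phrases and all sampling parameters are hypothetical choices.

    # Minimal sketch (assumption, not the paper's prototype): draw k parallel
    # phrase suggestions from GPT-2. The study's conditions used k = 0, 1, 3, or 6.
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    def suggest_phrases(text: str, k: int = 3, max_new_tokens: int = 8) -> list[str]:
        """Return k short continuation phrases for the email text written so far."""
        if k == 0:
            return []  # baseline condition: no suggestions shown
        input_ids = tokenizer(text, return_tensors="pt").input_ids
        outputs = model.generate(
            input_ids,
            do_sample=True,              # sampling yields diverse parallel suggestions
            top_p=0.9,
            max_new_tokens=max_new_tokens,
            num_return_sequences=k,
            pad_token_id=tokenizer.eos_token_id,
        )
        # Keep only the newly generated tokens (the suggested phrase), not the prompt.
        return [
            tokenizer.decode(seq[input_ids.shape[1]:], skip_special_tokens=True).strip()
            for seq in outputs
        ]

    print(suggest_phrases("Dear Alex, thank you for", k=3))

In an editor front end, such a function would be called on each pause in typing, with the returned phrases rendered as 0, 1, 3, or 6 selectable options depending on the experimental condition.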
Related papers
- Multi-turn Dialogue Comprehension from a Topic-aware Perspective [70.37126956655985]
This paper proposes to model multi-turn dialogues from a topic-aware perspective.
We use a dialogue segmentation algorithm to split a dialogue passage into topic-concentrated fragments in an unsupervised way.
We also present a novel model, Topic-Aware Dual-Attention Matching (TADAM) Network, which takes topic segments as processing elements.
arXiv Detail & Related papers (2023-09-18T11:03:55Z)
- Disco-Bench: A Discourse-Aware Evaluation Benchmark for Language Modelling [70.23876429382969]
We propose a benchmark that can evaluate intra-sentence discourse properties across a diverse set of NLP tasks.
Disco-Bench consists of 9 document-level testsets in the literature domain, which contain rich discourse phenomena.
For linguistic analysis, we also design a diagnostic test suite that can examine whether the target models learn discourse knowledge.
arXiv Detail & Related papers (2023-07-16T15:18:25Z)
- A Neural-Symbolic Approach Towards Identifying Grammatically Correct Sentences [0.0]
It is commonly accepted that it is crucial to have access to well-written text from valid sources to tackle challenges like text summarization, question-answering, machine translation, or even pronoun resolution.
We present a simplified way to validate English sentences through a novel neural-symbolic approach.
arXiv Detail & Related papers (2023-07-16T13:21:44Z)
- SciMON: Scientific Inspiration Machines Optimized for Novelty [68.46036589035539]
We explore and enhance the ability of neural language models to generate novel scientific directions grounded in literature.
We take a dramatic departure with a novel setting in which models take background contexts as input.
We present SciMON, a modeling framework that uses retrieval of "inspirations" from past scientific papers.
arXiv Detail & Related papers (2023-05-23T17:12:08Z)
- Prompting Large Language Model for Machine Translation: A Case Study [87.88120385000666]
We offer a systematic study on prompting strategies for machine translation.
We examine factors for prompt template and demonstration example selection.
We explore the use of monolingual data and the feasibility of cross-lingual, cross-domain, and sentence-to-document transfer learning.
arXiv Detail & Related papers (2023-01-17T18:32:06Z)
- Studying writer-suggestion interaction: A qualitative study to understand writer interaction with aligned/misaligned next-phrase suggestion [3.068049762564199]
We present an exploratory qualitative study to understand how writers interact with next-phrase suggestions.
We conducted a study where amateur writers were asked to write two movie reviews each.
We found writers interact with next-phrase suggestions in various complex ways.
arXiv Detail & Related papers (2022-08-01T06:49:07Z)
- 1Cademy at Semeval-2022 Task 1: Investigating the Effectiveness of Multilingual, Multitask, and Language-Agnostic Tricks for the Reverse Dictionary Task [13.480318097164389]
We focus on the Reverse Dictionary Track of the SemEval2022 task of matching dictionary glosses to word embeddings.
Models convert input sentences to three types of embeddings: SGNS, Char, and Electra.
Our proposed ELMo-based monolingual model achieves the best results.
arXiv Detail & Related papers (2022-06-08T06:39:04Z)
- Sparks: Inspiration for Science Writing using Language Models [11.38723572165938]
We present a system for generating "sparks", sentences related to a scientific concept intended to inspire writers.
We find that our sparks are more coherent and diverse than a competitive language model baseline, and approach a human-created gold standard.
arXiv Detail & Related papers (2021-10-14T18:03:11Z)
- Sense representations for Portuguese: experiments with sense embeddings and deep neural language models [0.0]
Unsupervised sense representations can induce different senses of a word by analyzing its contextual semantics in a text.
We present the first experiments carried out for generating sense embeddings for Portuguese.
arXiv Detail & Related papers (2021-08-31T18:07:01Z)
- Read Like Humans: Autonomous, Bidirectional and Iterative Language Modeling for Scene Text Recognition [80.446770909975]
Linguistic knowledge is of great benefit to scene text recognition.
How to effectively model linguistic rules in end-to-end deep networks remains a research challenge.
We propose an autonomous, bidirectional and iterative ABINet for scene text recognition.
arXiv Detail & Related papers (2021-03-11T06:47:45Z)
- Prototype-to-Style: Dialogue Generation with Style-Aware Editing on Retrieval Memory [65.98002918470543]
We introduce a new prototype-to-style framework to tackle the challenge of stylistic dialogue generation.
The framework uses an Information Retrieval (IR) system and extracts a response prototype from the retrieved response.
A stylistic response generator then takes the prototype and the desired language style as model input to obtain a high-quality and stylistic response.
arXiv Detail & Related papers (2020-04-05T14:36:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.