Can Grammarly and ChatGPT accelerate language change? AI-powered technologies and their impact on the English language: wordiness vs. conciseness
- URL: http://arxiv.org/abs/2502.04324v1
- Date: Thu, 06 Feb 2025 18:59:26 GMT
- Title: Can Grammarly and ChatGPT accelerate language change? AI-powered technologies and their impact on the English language: wordiness vs. conciseness
- Authors: Karolina Rudnicka
- Abstract summary: This paper investigates how Grammarly and ChatGPT affect the English language regarding wordiness vs. conciseness.
Both Grammarly and ChatGPT suggest more conciseness and less verbosity, even for relatively short sentences.
- Abstract: The proliferation of NLP-powered language technologies and AI-based natural language generation models, together with the status of English as a mainstream means of communication among both native and non-native speakers, makes the output of AI-powered tools especially intriguing to linguists. This paper investigates how Grammarly and ChatGPT affect the English language with respect to wordiness vs. conciseness. A case study of the purpose subordinator "in order to" illustrates how Grammarly and ChatGPT recommend shorter grammatical structures in place of longer, more elaborate ones. Although the analysed sentences were produced by native speakers, are perfectly correct, and were extracted from a corpus of contemporary English, both Grammarly and ChatGPT suggest more conciseness and less verbosity, even for relatively short sentences. The article argues that technologies such as Grammarly not only mirror language change but also have the potential to facilitate or accelerate it.
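As a rough illustration of how the ChatGPT side of such a case study could be probed programmatically, the Python sketch below sends sentences containing "in order to" to a chat model and prints its rewrite suggestions. The model name, prompt wording, and example sentences are assumptions made for illustration; the paper itself works with corpus-extracted sentences and the Grammarly and ChatGPT interfaces rather than this script.

```python
# Illustrative sketch only: probing a chat model for conciseness suggestions
# on sentences that contain the purpose subordinator "in order to".
# Model name, prompt, and sentences are assumptions, not the paper's protocol.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# Hypothetical example sentences; the study draws its material from a corpus
# of contemporary English produced by native speakers.
sentences = [
    "She left early in order to catch the last train.",
    "The committee met twice in order to finalise the schedule.",
]

for sentence in sentences:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any chat-capable model would do
        messages=[
            {"role": "system", "content": "You are a helpful writing assistant."},
            {"role": "user", "content": f"Suggest an improved version of this sentence: {sentence}"},
        ],
    )
    print(sentence)
    print("->", response.choices[0].message.content)
```

A natural next step would be to check whether each suggestion replaces "in order to" with the bare "to", the most obvious shorter alternative and the kind of shortening the case study documents.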
Related papers
- Principles of semantic and functional efficiency in grammatical patterning [1.6267479602370545]
Grammatical features such as number and gender serve two central functions in human languages.
Number and gender encode salient semantic attributes like numerosity and animacy, but offload sentence processing cost by predictably linking words together.
Grammars exhibit consistent organizational patterns across diverse languages, invariably rooted in a semantic foundation.
arXiv Detail & Related papers (2024-10-21T10:49:54Z)
- Grammar Assistance Using Syntactic Structures (GAUSS) [10.526517571430709]
We propose a grammar coaching system for Spanish that relies on a rich linguistic formalism capable of giving informative feedback.
The approach is feasible for any language for which there is a computerized grammar.
arXiv Detail & Related papers (2024-06-26T13:35:10Z)
- Task-Agnostic Low-Rank Adapters for Unseen English Dialects [52.88554155235167]
Large Language Models (LLMs) are trained on corpora disproportionally weighted in favor of Standard American English.
By disentangling dialect-specific and cross-dialectal information, HyperLoRA improves generalization to unseen dialects in a task-agnostic fashion.
arXiv Detail & Related papers (2023-11-02T01:17:29Z)
- Crossing the Threshold: Idiomatic Machine Translation through Retrieval Augmentation and Loss Weighting [66.02718577386426]
We provide a simple characterization of idiomatic translation and related issues.
We conduct a synthetic experiment revealing a tipping point at which transformer-based machine translation models correctly default to idiomatic translations.
To improve translation of natural idioms, we introduce two straightforward yet effective techniques.
arXiv Detail & Related papers (2023-10-10T23:47:25Z)
- Playing with Words: Comparing the Vocabulary and Lexical Richness of ChatGPT and Humans [3.0059120458540383]
Generative language models such as ChatGPT have triggered a revolution that can transform how text is generated.
Will the use of tools such as ChatGPT increase or reduce the vocabulary used or the lexical richness?
This has implications for vocabulary: words not included in AI-generated content will tend to become less and less popular and may eventually be lost. A minimal lexical-richness calculation is sketched after this list.
arXiv Detail & Related papers (2023-08-14T21:19:44Z)
- A Neural-Symbolic Approach Towards Identifying Grammatically Correct Sentences [0.0]
It is commonly accepted that it is crucial to have access to well-written text from valid sources to tackle challenges like text summarization, question-answering, machine translation, or even pronoun resolution.
We present a simplified way to validate English sentences through a novel neural-symbolic approach.
arXiv Detail & Related papers (2023-07-16T13:21:44Z)
- CLSE: Corpus of Linguistically Significant Entities [58.29901964387952]
We release a Corpus of Linguistically Significant Entities (CLSE) annotated by experts.
CLSE covers 74 different semantic types to support various applications from airline ticketing to video games.
We create a linguistically representative NLG evaluation benchmark in three languages: French, Marathi, and Russian.
arXiv Detail & Related papers (2022-11-04T12:56:12Z)
- Code-Switching without Switching: Language Agnostic End-to-End Speech Translation [68.8204255655161]
We treat speech recognition and translation as one unified end-to-end speech translation problem.
By training LAST with both input languages, we decode speech into one target language, regardless of the input language.
arXiv Detail & Related papers (2022-10-04T10:34:25Z)
- VLGrammar: Grounded Grammar Induction of Vision and Language [86.88273769411428]
We study grounded grammar induction of vision and language in a joint learning framework.
We present VLGrammar, a method that uses compound probabilistic context-free grammars (compound PCFGs) to induce the language grammar and the image grammar simultaneously.
arXiv Detail & Related papers (2021-03-24T04:05:08Z)
- Pragmatic information in translation: a corpus-based study of tense and mood in English and German [70.3497683558609]
Grammatical tense and mood are important linguistic phenomena to consider in natural language processing (NLP) research.
We consider the correspondence between English and German tense and mood in translation.
Of particular importance is the challenge of modeling tense and mood in rule-based, phrase-based statistical and neural machine translation.
arXiv Detail & Related papers (2020-07-10T08:15:59Z)
- Reinforcement learning of minimalist grammars [0.5862282909017474]
State-of-the-art language technology scans the acoustically analyzed speech signal for relevant keywords.
Words are then inserted into semantic slots to interpret the user's intent.
A mental lexicon must be acquired by a cognitive agent during interaction with its users.
arXiv Detail & Related papers (2020-04-30T14:25:58Z)
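As noted above, here is a minimal sketch of one way to quantify the lexical richness at issue in the "Playing with Words" entry: a generic type-token ratio computed on tiny hypothetical samples. This is an assumed, illustrative metric, not necessarily the one used in that paper.

```python
# Minimal sketch: type-token ratio (TTR) as one simple proxy for lexical
# richness. Generic illustration only, not the metric from the cited paper.
import re


def type_token_ratio(text: str) -> float:
    """Ratio of distinct word forms (types) to total word count (tokens)."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return len(set(tokens)) / len(tokens) if tokens else 0.0


# Hypothetical toy samples; a real comparison would use large, matched
# collections of human-written and AI-generated text.
human_text = "The quick brown fox jumps over the lazy dog near the quiet river."
ai_text = "The fox jumps over the dog. The fox jumps over the dog again."

print("human TTR:", round(type_token_ratio(human_text), 3))
print("AI TTR:", round(type_token_ratio(ai_text), 3))
```

Raw TTR is sensitive to text length, so length-normalised variants would be preferable in any real comparison.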
This list is automatically generated from the titles and abstracts of the papers in this site.