Using Sentiment Analysis to Investigate Peer Feedback by Native and Non-Native English Speakers
- URL: http://arxiv.org/abs/2507.22924v2
- Date: Thu, 07 Aug 2025 06:56:04 GMT
- Title: Using Sentiment Analysis to Investigate Peer Feedback by Native and Non-Native English Speakers
- Authors: Brittney Exline, Melanie Duffin, Brittany Harbison, Chrissa da Gomez, David Joyner
- Abstract summary: This paper examines how native versus non-native English speaker status affects three metrics of peer feedback experience in online U.S.-based computing courses. Results show that native English speakers rate feedback less favorably, while non-native speakers write more positively but receive less positive sentiment in return.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graduate-level CS programs in the U.S. increasingly enroll international students, with 60.2 percent of master's degrees in 2023 awarded to non-U.S. students. Many of these students take online courses, where peer feedback is used to engage students and improve pedagogy in a scalable manner. Since these courses are conducted in English, many students study in a language other than their first. This paper examines how native versus non-native English speaker status affects three metrics of peer feedback experience in online U.S.-based computing courses. Using the Twitter-roBERTa-based model, we analyze the sentiment of peer reviews written by and to a random sample of 500 students. We then relate sentiment scores and peer feedback ratings to students' language background. Results show that native English speakers rate feedback less favorably, while non-native speakers write more positively but receive less positive sentiment in return. When controlling for sex and age, significant interactions emerge, suggesting that language background plays a modest but complex role in shaping peer feedback experiences.
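The sentiment-scoring step described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' code: the Hugging Face pipeline call with the public `cardiffnlp/twitter-roberta-base-sentiment-latest` checkpoint is shown only in a comment as an assumption, and the classifier output is stubbed so the aggregation logic runs stand-alone.

```python
# Sketch of scoring peer reviews with a Twitter-RoBERTa sentiment model.
# The real classifier would be loaded roughly like this (an assumption,
# not the paper's exact setup):
#   from transformers import pipeline
#   classify = pipeline("sentiment-analysis",
#                       model="cardiffnlp/twitter-roberta-base-sentiment-latest")

def aggregate_sentiment(predictions):
    """Summarize per-review predictions of the form
    {'label': 'positive'|'neutral'|'negative', 'score': float}.

    Returns the share of reviews in each class and the mean confidence.
    """
    counts = {"positive": 0, "neutral": 0, "negative": 0}
    total_conf = 0.0
    for p in predictions:
        counts[p["label"]] += 1
        total_conf += p["score"]
    n = len(predictions)
    shares = {label: c / n for label, c in counts.items()}
    return shares, total_conf / n

# Stubbed classifier output for three hypothetical peer reviews:
preds = [
    {"label": "positive", "score": 0.91},
    {"label": "positive", "score": 0.77},
    {"label": "negative", "score": 0.88},
]
shares, mean_conf = aggregate_sentiment(preds)
```

Per-group shares like these could then be compared across native and non-native speaker groups, as the paper does with ratings and sentiment scores.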
Related papers
- "You Cannot Sound Like GPT": Signs of language discrimination and resistance in computer science publishing [1.4579344926652844]
We examine how peer reviewers critique writing clarity. We find significant bias against authors associated with institutions in countries where English is less widely spoken. We see only a muted shift in the expression of this bias after the introduction of ChatGPT in late 2022.
arXiv Detail & Related papers (2025-05-12T23:58:41Z)
- EDEN: Empathetic Dialogues for English learning [18.15602535467144]
Student passion and perseverance, or grit, has been associated with language learning success.
Recent work establishes that as students perceive their English teachers to be more supportive, their grit improves.
Our experiment suggests that using adaptive empathetic feedback leads to higher perceived affective support.
arXiv Detail & Related papers (2024-06-25T23:36:16Z)
- Native Design Bias: Studying the Impact of English Nativeness on Language Model Performance [3.344876133162209]
Large Language Models (LLMs) excel at providing information acquired during pretraining on large-scale corpora.
This study investigates whether the quality of LLM responses varies depending on the demographic profile of users.
arXiv Detail & Related papers (2024-06-25T09:04:21Z)
- Evaluation of ChatGPT Feedback on ELL Writers' Coherence and Cohesion [0.7028778922533686]
ChatGPT has had a transformative effect on education: students use it to help with homework assignments, and teachers actively employ it in their teaching practices.
This study evaluated the quality of the feedback generated by ChatGPT regarding the coherence and cohesion of essays written by English Language Learner (ELL) students.
arXiv Detail & Related papers (2023-10-10T10:25:56Z)
- Can Language Models Learn to Listen? [96.01685069483025]
We present a framework for generating appropriate facial responses from a listener in dyadic social interactions based on the speaker's words.
Our approach autoregressively predicts a response of a listener: a sequence of listener facial gestures, quantized using a VQ-VAE.
We show that our generated listener motion is fluent and reflective of language semantics through quantitative metrics and a qualitative user study.
arXiv Detail & Related papers (2023-08-21T17:59:02Z)
- Computational Language Acquisition with Theory of Mind [84.2267302901888]
We build language-learning agents equipped with Theory of Mind (ToM) and measure its effects on the learning process.
We find that training speakers with a highly weighted ToM listener component leads to performance gains in our image referential game setting.
arXiv Detail & Related papers (2023-03-02T18:59:46Z)
- Training Language Models with Natural Language Feedback [51.36137482891037]
We learn from language feedback on model outputs using a three-step learning algorithm.
In synthetic experiments, we first evaluate whether language models accurately incorporate feedback to produce refinements.
Using only 100 samples of human-written feedback, our learning algorithm finetunes a GPT-3 model to roughly human-level summarization.
arXiv Detail & Related papers (2022-04-29T15:06:58Z)
- A study on native American English speech recognition by Indian listeners with varying word familiarity level [62.14295630922855]
We collect three kinds of responses from each listener as they recognize an utterance.
From these transcriptions, word error rate (WER) is calculated and used as a metric to evaluate the similarity between the recognized and the original sentences.
Nativity-wise analysis of speakers shows that utterances from speakers of certain native-language backgrounds are harder for Indian listeners to recognize than utterances from speakers of other backgrounds.
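As a concrete illustration of the WER metric used above, here is a minimal word-level edit-distance implementation. This is the standard textbook formulation, not code from that study:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance between the recognized
    and original sentences, divided by the number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # DP table: d[i][j] = edits needed to turn ref[:i] into hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # delete all i reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # insert all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)
```

For example, a listener transcription that substitutes one of three reference words yields a WER of 1/3.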
arXiv Detail & Related papers (2021-12-08T07:43:38Z)
- Hocalarim: Mining Turkish Student Reviews [0.0]
We introduce Hocalarim (MyProfessors), the largest student review dataset available for the Turkish language.
It consists of over 5000 professor reviews left online by students, with different aspects of education rated on a scale of 1 to 5 stars.
We investigate the properties of the dataset and present its statistics.
arXiv Detail & Related papers (2021-09-06T09:55:58Z)
- On Negative Interference in Multilingual Models: Findings and A Meta-Learning Treatment [59.995385574274785]
We show that, contrary to previous belief, negative interference also impacts low-resource languages.
We present a meta-learning algorithm that obtains better cross-lingual transferability and alleviates negative interference.
arXiv Detail & Related papers (2020-10-06T20:48:58Z)
- Unsupervised Cross-lingual Representation Learning for Speech Recognition [63.85924123692923]
XLSR learns cross-lingual speech representations by pretraining a single model from the raw waveform of speech in multiple languages.
We build on wav2vec 2.0 which is trained by solving a contrastive task over masked latent speech representations.
Experiments show that cross-lingual pretraining significantly outperforms monolingual pretraining.
arXiv Detail & Related papers (2020-06-24T18:25:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.