Facilitating reflection in teletandem through automatically generated
conversation metrics and playback video
- URL: http://arxiv.org/abs/2111.08788v2
- Date: Thu, 18 Nov 2021 13:49:01 GMT
- Title: Facilitating reflection in teletandem through automatically generated
conversation metrics and playback video
- Authors: Aparajita Dey-Plissonneau, Hyowon Lee, Michael Scriney, Alan F.
Smeaton, Vincent Pradier, Hamza Riaz
- Abstract summary: This pilot study focuses on a tool that allows second language (L2) learners to visualise and analyse their Zoom interactions with native speakers.
L2L uses the Zoom transcript to automatically generate conversation metrics and its playback feature with timestamps allows students to replay any chosen portion of the conversation for post-session reflection and self-review.
- Score: 4.014717876643502
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This pilot study focuses on a tool called L2L that allows second language
(L2) learners to visualise and analyse their Zoom interactions with native
speakers. L2L uses the Zoom transcript to automatically generate conversation
metrics and its playback feature with timestamps allows students to replay any
chosen portion of the conversation for post-session reflection and self-review.
This exploratory study investigates a seven-week teletandem project, where
undergraduate students from an Irish University learning French (B2) interacted
with their peers from a French University learning English (B2+) via Zoom. The
data collected from a survey (N=43) and semi-structured interviews (N=35) show
that the quantitative conversation metrics and qualitative review of the
synchronous content helped raise students' confidence levels while engaging
with native speakers. Furthermore, it allowed them to set tangible goals to
improve their participation, and be more aware of what, why and how they are
learning.
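
The abstract describes L2L as deriving conversation metrics from the Zoom transcript and replaying timestamped portions of the call. The paper does not include implementation details here, so the following is only a minimal sketch of the kind of processing involved: it reads a Zoom-style WebVTT transcript and totals each participant's talk time and number of turns. The file name, cue layout, and the "Speaker Name: text" convention are assumptions, not taken from the paper.

```python
import re
from collections import defaultdict

# Illustrative sketch only: the paper does not publish L2L's implementation.
# Assumes a Zoom-style WebVTT transcript in which the text line of each cue
# starts with "Speaker Name: utterance" (an assumption, not confirmed here).

CUE_TIME = re.compile(
    r"(\d{2}):(\d{2}):(\d{2})\.(\d{3}) --> (\d{2}):(\d{2}):(\d{2})\.(\d{3})"
)

def to_seconds(h, m, s, ms):
    return int(h) * 3600 + int(m) * 60 + int(s) + int(ms) / 1000.0

def speaking_metrics(vtt_path):
    """Return per-speaker total talk time (seconds) and number of turns."""
    talk_time = defaultdict(float)
    turns = defaultdict(int)
    with open(vtt_path, encoding="utf-8") as f:
        lines = iter(f.read().splitlines())
    for line in lines:
        match = CUE_TIME.search(line)
        if not match:
            continue
        start = to_seconds(*match.groups()[:4])
        end = to_seconds(*match.groups()[4:])
        text = next(lines, "")          # cue text follows the timing line
        if ":" in text:
            speaker = text.split(":", 1)[0].strip()
            talk_time[speaker] += end - start
            turns[speaker] += 1
    return dict(talk_time), dict(turns)

if __name__ == "__main__":
    times, counts = speaking_metrics("zoom_transcript.vtt")   # hypothetical file
    for speaker, seconds in sorted(times.items(), key=lambda kv: -kv[1]):
        print(f"{speaker}: {seconds:.1f}s of speech over {counts[speaker]} turns")
```

Ratios of such totals (for example, each learner's share of the overall talk time) are the kind of quantitative metric that could support the post-session reflection the abstract describes.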
Related papers
- Can LLMs Simulate L2-English Dialogue? An Information-Theoretic Analysis of L1-Dependent Biases [22.048949559200935]
This study evaluates Large Language Models' ability to simulate non-native-like English use observed in human second language (L2) learners.
In dialogue-based interviews, we prompt LLMs to mimic L2 English learners with specific L1s across seven languages.
Our analysis examines L1-driven linguistic biases, such as reference word usage and avoidance behaviors, using information-theoretic and distributional density measures.
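
As a generic illustration of the information-theoretic measures mentioned above (word-level entropy and KL divergence between token distributions), and not the cited paper's actual procedure, a comparison between two groups of simulated learner output might look like this:

```python
import math
from collections import Counter

# Generic illustration only; the cited paper's measures and data are not shown here.

def unigram_dist(tokens, vocab):
    counts = Counter(tokens)
    total = sum(counts.values())
    # Add-one smoothing keeps the KL divergence finite on unseen words.
    return {w: (counts[w] + 1) / (total + len(vocab)) for w in vocab}

def entropy(dist):
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def kl_divergence(p, q):
    return sum(p[w] * math.log2(p[w] / q[w]) for w in p)

# Toy token streams standing in for two simulated learner groups.
group_a = "i think the the answer is is good".split()
group_b = "i believe that answer seems quite good".split()
vocab = set(group_a) | set(group_b)
p, q = unigram_dist(group_a, vocab), unigram_dist(group_b, vocab)
print(f"H(A)={entropy(p):.3f} bits, H(B)={entropy(q):.3f} bits, KL(A||B)={kl_divergence(p, q):.3f}")
```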
arXiv Detail & Related papers (2025-02-20T12:34:46Z)
- INTERACT: Enabling Interactive, Question-Driven Learning in Large Language Models [15.825663946923289]
Large language models (LLMs) excel at answering questions but remain passive learners--absorbing static data without the ability to question and refine knowledge.
This paper explores how LLMs can transition to interactive, question-driven learning through student-teacher dialogues.
arXiv Detail & Related papers (2024-12-16T02:28:53Z)
- How Do Multilingual Language Models Remember Facts? [50.13632788453612]
We show that previously identified recall mechanisms in English largely apply to multilingual contexts.
We localize the role of language during recall, finding that subject enrichment is language-independent.
In decoder-only LLMs, FVs compose these two pieces of information in two separate stages.
arXiv Detail & Related papers (2024-10-18T11:39:34Z)
- Multilingual Needle in a Haystack: Investigating Long-Context Behavior of Multilingual Large Language Models [22.859955360764275]
We introduce the MultiLingual Needle-in-a-Haystack (MLNeedle) test to assess a model's ability to retrieve relevant information.
We evaluate four state-of-the-art large language models on MLNeedle.
arXiv Detail & Related papers (2024-08-19T17:02:06Z)
- A Tale of Two Languages: Large-Vocabulary Continuous Sign Language Recognition from Spoken Language Supervision [74.972172804514]
We introduce a multi-task Transformer model, CSLR2, that is able to ingest a signing sequence and output in a joint embedding space between signed language and spoken language text.
New dataset annotations provide continuous sign-level labels for six hours of test videos and will be made publicly available.
Our model significantly outperforms the previous state of the art on both tasks.
arXiv Detail & Related papers (2024-05-16T17:19:06Z)
- TouchStone: Evaluating Vision-Language Models by Language Models [91.69776377214814]
We propose an evaluation method that uses strong large language models as judges to comprehensively evaluate the various abilities of LVLMs.
We construct a comprehensive visual dialogue dataset TouchStone, consisting of open-world images and questions, covering five major categories of abilities and 27 subtasks.
We demonstrate that powerful LVLMs, such as GPT-4, can effectively score dialogue quality by leveraging their textual capabilities alone.
arXiv Detail & Related papers (2023-08-31T17:52:04Z)
- AutoConv: Automatically Generating Information-seeking Conversations with Large Language Models [74.10293412011455]
We propose AutoConv for synthetic conversation generation.
Specifically, we formulate the conversation generation problem as a language modeling task.
We finetune an LLM with a few human conversations to capture the characteristics of the information-seeking process.
arXiv Detail & Related papers (2023-08-12T08:52:40Z)
- Efficiently Aligned Cross-Lingual Transfer Learning for Conversational Tasks using Prompt-Tuning [98.60739735409243]
Cross-lingual transfer of language models trained on high-resource languages like English has been widely studied for many NLP tasks.
We introduce XSGD for cross-lingual alignment pretraining, a parallel and large-scale multilingual conversation dataset.
To facilitate aligned cross-lingual representations, we develop an efficient prompt-tuning-based method for learning alignment prompts.
arXiv Detail & Related papers (2023-04-03T18:46:01Z)
- Analysis of Individual Conversational Volatility in Tandem Telecollaboration for Second Language Learning [7.767962338247332]
Students are grouped into video conference calls in which they learn the native language of the other student(s) on the call.
This places students in an online environment where the more outgoing can actively contribute and engage in dialogue.
We have built and deployed the L2L system which records timings of conversational utterances from all participants in a call.
We present an analysis of conversational volatility measures for a sample of 19 individual English-speaking students from our University who are learning French, in each of 86 tandem telecollaboration calls over one teaching semester.
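
The cited paper computes volatility measures from the utterance timings that L2L records, but its exact formula is not given in this listing. As a loose, assumption-laden illustration, one could split a call into fixed windows, compute each participant's share of the talk time per window, and treat the spread of window-to-window changes as a volatility-like score:

```python
import statistics
from collections import defaultdict

# Loose illustration only: the volatility measure in the cited paper is not
# defined in this listing. Here "volatility" is the standard deviation of
# window-to-window changes in a speaker's share of the talk time.

def conversational_volatility(utterances, window=60.0):
    """utterances: list of (speaker, start_sec, end_sec); returns a score per speaker."""
    call_end = max(end for _, _, end in utterances)
    n_windows = int(call_end // window) + 1
    per_speaker = defaultdict(lambda: [0.0] * n_windows)
    totals = [0.0] * n_windows
    for speaker, start, end in utterances:
        w = int(start // window)   # whole utterance credited to its starting window
        per_speaker[speaker][w] += end - start
        totals[w] += end - start
    scores = {}
    for speaker, talk in per_speaker.items():
        shares = [t / tot if tot else 0.0 for t, tot in zip(talk, totals)]
        deltas = [b - a for a, b in zip(shares, shares[1:])]
        scores[speaker] = statistics.pstdev(deltas) if deltas else 0.0
    return scores

# Toy example: two speakers whose shares swing between windows.
utterances = [("A", 0, 30), ("B", 30, 55), ("A", 70, 80), ("B", 80, 115),
              ("A", 125, 170), ("B", 170, 175), ("A", 185, 200), ("B", 200, 230)]
print(conversational_volatility(utterances))
```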
arXiv Detail & Related papers (2022-06-28T12:34:00Z)
- Watch and Learn: Mapping Language and Noisy Real-world Videos with Self-supervision [54.73758942064708]
We teach machines to understand visuals and natural language by learning the mapping between sentences and noisy video snippets without explicit annotations.
For training and evaluation, we contribute a new dataset 'ApartmenTour' that contains a large number of online videos and subtitles.
arXiv Detail & Related papers (2020-11-19T03:43:56Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.