Fostering learners' self-regulation and collaboration skills and
strategies for mobile language learning beyond the classroom
- URL: http://arxiv.org/abs/2104.12486v1
- Date: Sat, 20 Mar 2021 15:57:59 GMT
- Authors: Olga Viberg and Agnes Kukulska-Hulme
- Abstract summary: The chapter argues that support should focus on the development of two vital learning skills, namely being able to self-regulate and to collaborate effectively.
The ultimate aim is to enable the provision of individual adaptive learning paths to facilitate language learning beyond the classroom.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Many language learners need to be supported in acquiring a second or foreign
language quickly and effectively across learning environments beyond the
classroom. The chapter argues that support should focus on the development of
two vital learning skills, namely being able to self-regulate and to
collaborate effectively in the learning process. We ground our argument in
the theoretical lenses of self-regulated learning (SRL) and collaborative
learning in the context of mobile situated learning that can take place in a
variety of settings. The chapter examines a sample of selected empirical
studies within the field of mobile-assisted language learning with a twofold
aim. Firstly, the studies are analyzed in order to understand the role of
learner self-regulation and collaboration while acquiring a new language beyond
the classroom. Secondly, we aim to provide a deeper understanding of any
mechanisms provided to develop or support language learners' self-regulated and
collaborative learning skills. Finally, we propose that fostering SRL and
collaborative learning skills and strategies will benefit from recent advances
in the fields of learning analytics and artificial intelligence, coupled with
the use of mobile technologies and self-monitoring mechanisms. The ultimate aim
is to enable the provision of individual adaptive learning paths to facilitate
language learning beyond the classroom.
Related papers
- HC$^2$L: Hybrid and Cooperative Contrastive Learning for Cross-lingual Spoken Language Understanding [45.12153788010354]
The state-of-the-art model for cross-lingual spoken language understanding performs cross-lingual unsupervised contrastive learning.
We propose Hybrid and Cooperative Contrastive Learning to address this problem.
arXiv Detail & Related papers (2024-05-10T02:40:49Z) - Scaffolding Language Learning via Multi-modal Tutoring Systems with Pedagogical Instructions [34.760230622675365]
Intelligent tutoring systems (ITSs) imitate human tutors and aim to provide customized instructions or feedback to learners.
With the emergence of generative artificial intelligence, large language models (LLMs) entitle the systems to complex and coherent conversational interactions.
We investigate how pedagogical instructions facilitate the scaffolding in ITSs, by conducting a case study on guiding children to describe images for language learning.
arXiv Detail & Related papers (2024-04-04T13:22:28Z) - Tapping into the Natural Language System with Artificial Languages when
Learning Programming [7.5520627446611925]
The goal of this study is to investigate the feasibility of this idea, such that we can enhance learning programming by activating language learning mechanisms.
We observed that the training of the artificial language can be easily integrated into our curriculum.
However, within the context of our study, we did not find a significant benefit for programming competency when students learned an artificial language first.
arXiv Detail & Related papers (2024-01-12T07:08:55Z) - Democratizing Reasoning Ability: Tailored Learning from Large Language
Model [97.4921006089966]
We propose a tailored learning approach to distill such reasoning ability to smaller LMs.
We exploit the potential of LLM as a reasoning teacher by building an interactive multi-round learning paradigm.
To exploit the reasoning potential of the smaller LM, we propose self-reflection learning to motivate the student to learn from self-made mistakes.
arXiv Detail & Related papers (2023-10-20T07:50:10Z) - Unleash Model Potential: Bootstrapped Meta Self-supervised Learning [12.57396771974944]
A long-term goal of machine learning is to learn general visual representations from a small amount of data without supervision.
Self-supervised learning and meta-learning are two promising techniques to achieve this goal, but each captures these advantages only partially.
We propose a novel Bootstrapped Meta Self-Supervised Learning framework that aims to simulate the human learning process.
arXiv Detail & Related papers (2023-08-28T02:49:07Z) - Language Cognition and Language Computation -- Human and Machine
Language Understanding [51.56546543716759]
Language understanding is a key scientific issue in the fields of cognitive and computer science.
Can a combination of the disciplines offer new insights for building intelligent language models?
arXiv Detail & Related papers (2023-01-12T02:37:00Z) - In-context Learning Distillation: Transferring Few-shot Learning Ability
of Pre-trained Language Models [55.78264509270503]
We introduce in-context learning distillation to transfer in-context few-shot learning ability from large models to smaller models.
We perform in-context learning distillation under two different few-shot learning paradigms: Meta In-context Tuning (Meta-ICT) and Multitask In-context Tuning (Multitask-ICT).
Our experiments and analysis reveal that in-context learning objectives and language modeling objectives are complementary under the Multitask-ICT paradigm.
arXiv Detail & Related papers (2022-12-20T22:11:35Z) - A Network Science Perspective to Personalized Learning [0.0]
We examine how learning objectives can be achieved through a learning platform that offers content choices and multiple modalities of engagement to support self-paced learning.
This framework brings the attention to learning experiences, rather than teaching experiences, by providing the learner engagement and content choices supported by a network of knowledge.
arXiv Detail & Related papers (2021-11-02T01:50:01Z) - Rethinking Supervised Learning and Reinforcement Learning in
Task-Oriented Dialogue Systems [58.724629408229205]
We demonstrate how traditional supervised learning and a simulator-free adversarial learning method can be used to achieve performance comparable to state-of-the-art RL-based methods.
Our main goal is not to beat reinforcement learning with supervised learning, but to demonstrate the value of rethinking the role of reinforcement learning and supervised learning in optimizing task-oriented dialogue systems.
arXiv Detail & Related papers (2020-09-21T12:04:18Z) - Transfer Learning in Deep Reinforcement Learning: A Survey [64.36174156782333]
Reinforcement learning is a learning paradigm for solving sequential decision-making problems.
Recent years have witnessed remarkable progress in reinforcement learning upon the fast development of deep neural networks.
Transfer learning has arisen to tackle various challenges faced by reinforcement learning.
arXiv Detail & Related papers (2020-09-16T18:38:54Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.