A Large-Scale, Open-Domain, Mixed-Interface Dialogue-Based ITS for STEM
- URL: http://arxiv.org/abs/2005.06616v1
- Date: Wed, 6 May 2020 02:45:43 GMT
- Title: A Large-Scale, Open-Domain, Mixed-Interface Dialogue-Based ITS for STEM
- Authors: Iulian Vlad Serban, Varun Gupta, Ekaterina Kochmar, Dung D. Vu, Robert
Belfer, Joelle Pineau, Aaron Courville, Laurent Charlin, Yoshua Bengio
- Abstract summary: Korbit is a large-scale, open-domain, mixed-interface, dialogue-based intelligent tutoring system (ITS).
It uses machine learning, natural language processing and reinforcement learning to provide interactive, personalized learning online.
Unlike other ITSs, a teacher can develop new learning modules for Korbit in a matter of hours.
- Score: 84.60813413413402
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present Korbit, a large-scale, open-domain, mixed-interface,
dialogue-based intelligent tutoring system (ITS). Korbit uses machine learning,
natural language processing and reinforcement learning to provide interactive,
personalized learning online. Korbit has been designed to easily scale to
thousands of subjects, by automating, standardizing and simplifying the content
creation process. Unlike other ITSs, a teacher can develop new learning modules
for Korbit in a matter of hours. To facilitate learning across a wide range of
STEM subjects, Korbit uses a mixed-interface, which includes videos,
interactive dialogue-based exercises, question-answering, conceptual diagrams,
mathematical exercises and gamification elements. Korbit has been built to
scale to millions of students, by utilizing a state-of-the-art cloud-based
micro-service architecture. Korbit launched its first course in 2019 on machine
learning, and since then over 7,000 students have enrolled. Although Korbit was
designed to be open-domain and highly scalable, A/B testing experiments with
real-world students demonstrate that both student learning outcomes and student
motivation are substantially improved compared to typical online courses.
Related papers
- Ruffle&Riley: Insights from Designing and Evaluating a Large Language Model-Based Conversational Tutoring System [21.139850269835858]
Conversational tutoring systems (CTSs) offer learning experiences through interactions based on natural language.
We discuss and evaluate a novel type of CTS that leverages recent advances in large language models (LLMs) in two ways.
The system enables AI-assisted content authoring by inducing an easily editable tutoring script automatically from a lesson text.
arXiv Detail & Related papers (2024-04-26T14:57:55Z)
- Empowering Private Tutoring by Chaining Large Language Models [87.76985829144834]
This work explores the development of a full-fledged intelligent tutoring system powered by state-of-the-art large language models (LLMs).
The system is divided into three inter-connected core processes: interaction, reflection, and reaction.
Each process is implemented by chaining LLM-powered tools along with dynamically updated memory modules.
arXiv Detail & Related papers (2023-09-15T02:42:03Z)
- Learning Rate Curriculum [75.98230528486401]
We propose a novel curriculum learning approach termed Learning Rate Curriculum (LeRaC).
LeRaC uses a different learning rate for each layer of a neural network to create a data-agnostic curriculum during the initial training epochs.
We compare our approach with Curriculum by Smoothing (CBS), a state-of-the-art data-agnostic curriculum learning approach.
arXiv Detail & Related papers (2022-05-18T18:57:36Z)
- A New Era: Intelligent Tutoring Systems Will Transform Online Learning for Millions [41.647427931578335]
AI-powered learning can provide millions of learners with a highly personalized, active and practical learning experience.
We present the results of a comparative head-to-head study on learning outcomes for two popular online learning platforms.
arXiv Detail & Related papers (2022-03-03T18:55:33Z)
- Federated Reconnaissance: Efficient, Distributed, Class-Incremental Learning [1.244390243967322]
We describe a class of learning problems in which distributed clients learn new concepts independently and communicate that knowledge efficiently.
We find that prototypical networks are a strong approach in that they are robust to catastrophic forgetting while incorporating new information efficiently.
arXiv Detail & Related papers (2021-09-01T01:51:30Z)
- Learning Adaptive Language Interfaces through Decomposition [89.21937539950966]
We introduce a neural semantic parsing system that learns new high-level abstractions through decomposition.
Users interactively teach the system by breaking down high-level utterances describing novel behavior into low-level steps.
arXiv Detail & Related papers (2020-10-11T08:27:07Z)
- Towards Learning Convolutions from Scratch [34.71001535076825]
Convolution is one of the most essential components of architectures used in computer vision.
Current state-of-the-art architecture search algorithms use convolution as one of the existing modules rather than learning it from data.
We propose $\beta$-LASSO, a simple variant of the LASSO algorithm that learns architectures with local connections.
arXiv Detail & Related papers (2020-07-27T16:13:13Z) - SOLOIST: Building Task Bots at Scale with Transfer Learning and Machine
Teaching [81.45928589522032]
We parameterize modular task-oriented dialog systems using a Transformer-based auto-regressive language model.
We pre-train, on heterogeneous dialog corpora, a task-grounded response generation model.
Experiments show that SOLOIST achieves a new state of the art on well-studied task-oriented dialog benchmarks.
arXiv Detail & Related papers (2020-05-11T17:58:34Z) - Efficient Crowd Counting via Structured Knowledge Transfer [122.30417437707759]
Crowd counting is an application-oriented task and its inference efficiency is crucial for real-world applications.
We propose a novel Structured Knowledge Transfer framework to generate a lightweight but still highly effective student network.
Our models obtain at least a $6.5\times$ speed-up on an Nvidia 1080 GPU and even achieve state-of-the-art performance.
arXiv Detail & Related papers (2020-03-23T08:05:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.