OpineBot: Class Feedback Reimagined Using a Conversational LLM
- URL: http://arxiv.org/abs/2401.15589v1
- Date: Sun, 28 Jan 2024 07:12:56 GMT
- Title: OpineBot: Class Feedback Reimagined Using a Conversational LLM
- Authors: Henansh Tanwar, Kunal Shrivastva, Rahul Singh, Dhruv Kumar
- Abstract summary: OpineBot is a novel system employing large language models (LLMs) to conduct personalized, conversational class feedback.
We assessed OpineBot's effectiveness in a user study with 20 students from an Indian university.
- Score: 4.304917890202721
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Conventional class feedback systems often fall short, relying on static,
unengaging surveys that offer little incentive for student participation. To
address this, we present OpineBot, a novel system that employs large language
models (LLMs) to conduct personalized, conversational class feedback via a
chatbot interface. We assessed OpineBot's effectiveness in a user study with 20
students from an Indian university's Operating-Systems class, using surveys
and interviews to analyze their experiences. Findings revealed a strong
preference for OpineBot over conventional methods, highlighting its ability to
engage students, elicit deeper feedback, and offer a dynamic survey experience.
This ongoing work reports preliminary results and marks a step towards
transforming class feedback through LLM-based technology, promoting student
engagement and yielding richer data for instructors.
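To make the idea concrete, the sketch below shows a minimal conversational feedback loop in the spirit of OpineBot. The paper does not release its implementation, so the chat API, model name, system prompt, and seed questions here are illustrative assumptions, not the authors' actual code.
```python
# Minimal sketch of an OpineBot-style conversational feedback session.
# Assumptions (not from the paper): an OpenAI-style chat-completions API,
# a placeholder model name, and example seed questions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are OpineBot, a friendly course-feedback assistant for an "
    "Operating-Systems class. Ask one question at a time, follow up on "
    "vague answers, and keep the tone conversational."
)
SEED_QUESTIONS = [
    "How clear were this week's lectures on process scheduling?",
    "Which topic would you like revisited, and why?",
]

def run_feedback_session() -> list[dict]:
    """Run one interactive feedback conversation and return the transcript."""
    messages = [{"role": "system", "content": SYSTEM_PROMPT}]
    for question in SEED_QUESTIONS:
        # Ask the scripted question, then let the LLM generate a
        # personalized follow-up based on the student's answer.
        messages.append({"role": "assistant", "content": question})
        print(f"OpineBot: {question}")
        messages.append({"role": "user", "content": input("Student: ")})
        reply = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder; the paper does not name a model
            messages=messages,
        ).choices[0].message.content
        messages.append({"role": "assistant", "content": reply})
        print(f"OpineBot: {reply}")
        messages.append({"role": "user", "content": input("Student: ")})
    return messages  # transcript: richer feedback data for the instructor

if __name__ == "__main__":
    run_feedback_session()
```
One reasonable design choice, not specified in the abstract, is to persist the returned transcript per student so instructors can later mine the conversations for recurring themes.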
Related papers
- Exploring Knowledge Tracing in Tutor-Student Dialogues [53.52699766206808]
We present a first attempt at performing knowledge tracing (KT) in tutor-student dialogues.
We propose methods to identify the knowledge components/skills involved in each dialogue turn.
We then apply a range of KT methods on the resulting labeled data to track student knowledge levels over an entire dialogue.
arXiv Detail & Related papers (2024-09-24T22:31:39Z)
- Show, Don't Tell: Aligning Language Models with Demonstrated Feedback [54.10302745921713]
Demonstration ITerated Task Optimization (DITTO) directly aligns language model outputs to a user's demonstrated behaviors.
We evaluate DITTO's ability to learn fine-grained style and task alignment across domains such as news articles, emails, and blog posts.
arXiv Detail & Related papers (2024-06-02T23:13:56Z)
- Generating Situated Reflection Triggers about Alternative Solution Paths: A Case Study of Generative AI for Computer-Supported Collaborative Learning [3.2721068185888127]
We present a proof-of-concept application to offer students dynamic and contextualized feedback.
Specifically, we augment an Online Programming Exercise bot for a college-level Cloud Computing course with ChatGPT.
We demonstrate that LLMs can be used to generate highly situated reflection triggers that incorporate details of the collaborative discussion happening in context.
arXiv Detail & Related papers (2024-04-28T17:56:14Z)
- Ruffle&Riley: Insights from Designing and Evaluating a Large Language Model-Based Conversational Tutoring System [21.139850269835858]
Conversational tutoring systems (CTSs) offer learning experiences through interactions based on natural language.
We discuss and evaluate a novel type of CTS that leverages recent advances in large language models (LLMs) in two ways.
The system enables AI-assisted content authoring by inducing an easily editable tutoring script automatically from a lesson text.
arXiv Detail & Related papers (2024-04-26T14:57:55Z)
- PREDILECT: Preferences Delineated with Zero-Shot Language-based Reasoning in Reinforcement Learning [2.7387720378113554]
Preference-based reinforcement learning (RL) has emerged as a new field in robot learning.
We use the zero-shot capabilities of a large language model (LLM) to reason from the text provided by humans.
In both a simulated scenario and a user study, we reveal the effectiveness of our work by analyzing the feedback and its implications.
arXiv Detail & Related papers (2024-02-23T16:30:05Z)
- UltraFeedback: Boosting Language Models with Scaled AI Feedback [99.4633351133207]
We present UltraFeedback, a large-scale, high-quality, and diversified AI feedback dataset.
Our work validates the effectiveness of scaled AI feedback data in constructing strong open-source chat language models.
arXiv Detail & Related papers (2023-10-02T17:40:01Z)
- Rethinking the Evaluation for Conversational Recommendation in the Era of Large Language Models [115.7508325840751]
The recent success of large language models (LLMs) has shown great potential for developing more powerful conversational recommender systems (CRSs).
In this paper, we embark on an investigation into the utilization of ChatGPT for conversational recommendation, revealing the inadequacy of the existing evaluation protocol.
We propose an interactive Evaluation approach based on LLMs named iEvaLM that harnesses LLM-based user simulators.
arXiv Detail & Related papers (2023-05-22T15:12:43Z)
- Approximating Online Human Evaluation of Social Chatbots with Prompting [11.657633779338724]
Existing evaluation metrics aim to automate offline user evaluation and approximate human judgment of pre-curated dialogs.
We propose an approach to approximate online human evaluation leveraging large language models (LLMs) from the GPT family.
We introduce a new Dialog system Evaluation framework based on Prompting (DEP), which enables a fully automatic evaluation pipeline.
arXiv Detail & Related papers (2023-04-11T14:45:01Z)
- Responsible Active Learning via Human-in-the-loop Peer Study [88.01358655203441]
We propose a responsible active learning method, namely Peer Study Learning (PSL), to simultaneously preserve data privacy and improve model stability.
We first introduce a human-in-the-loop teacher-student architecture to isolate unlabelled data from the task learner (teacher) on the cloud-side.
During training, the task learner instructs the light-weight active learner which then provides feedback on the active sampling criterion.
arXiv Detail & Related papers (2022-11-24T13:18:27Z)
- Visual Imitation Made Easy [102.36509665008732]
We present an alternate interface for imitation that simplifies the data collection process while allowing for easy transfer to robots.
We use commercially available reacher-grabber assistive tools both as a data collection device and as the robot's end-effector.
We experimentally evaluate on two challenging tasks: non-prehensile pushing and prehensile stacking, with 1000 diverse demonstrations for each task.
arXiv Detail & Related papers (2020-08-11T17:58:50Z)
- Automated Personalized Feedback Improves Learning Gains in an Intelligent Tutoring System [34.19909376464836]
We investigate how automated, data-driven, personalized feedback in a large-scale intelligent tutoring system (ITS) improves student learning outcomes.
We propose a machine learning approach to generate personalized feedback, which takes individual needs of students into account.
We utilize state-of-the-art machine learning and natural language processing techniques to provide the students with personalized hints, Wikipedia-based explanations, and mathematical hints.
arXiv Detail & Related papers (2020-05-05T18:30:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.