An Exploratory Analysis of Feedback Types Used in Online Coding
Exercises
- URL: http://arxiv.org/abs/2206.03077v2
- Date: Wed, 9 Nov 2022 11:41:33 GMT
- Title: An Exploratory Analysis of Feedback Types Used in Online Coding
Exercises
- Authors: Natalie Kiesler
- Abstract summary: This research aims at the identification of feedback types applied by CodingBat, Scratch and Blockly.
The study revealed difficulties in identifying clear-cut boundaries between feedback types.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Online coding environments can help computing students gain
programming practice at their own pace. Especially informative feedback can be
beneficial during such self-guided, independent study phases. This research
aims at the identification of feedback types applied by CodingBat, Scratch and
Blockly. Tutoring feedback, as coined by Susanne Narciss, along with the
specification of subtypes by Keuning, Jeuring, and Heeren, constitutes the
theoretical basis. Accordingly, the five categories of elaborated feedback
(knowledge about task requirements, knowledge about concepts, knowledge about
mistakes, knowledge about how to proceed, and knowledge about meta-cognition)
and their subtypes were utilized for the analysis of available feedback
options. The study revealed difficulties in identifying clear-cut boundaries
between feedback types, as the offered feedback usually integrates more than
one type or subtype. Moreover, currently defined feedback types do not
rigorously distinguish individualized and generic feedback. The lack of
granularity is also evident in the absence of subtypes relating to the
knowledge type of the task. The analysis thus has implications for the future
design and investigation of applied tutoring feedback. It encourages future
research on feedback types and their implementation in the context of
programming exercises to define feedback types that match the demands of novice
programmers.
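As a purely illustrative aside, the five elaborated-feedback categories named in the abstract can be pictured as labels attached to individual feedback messages. The sketch below is not from the paper: the ElaboratedFeedback enum, the example exercise-style message, and the labels assigned to it are hypothetical, but the category names follow Narciss's taxonomy as cited above.

```python
from enum import Enum


class ElaboratedFeedback(Enum):
    """Five categories of elaborated (tutoring) feedback after Narciss."""
    KTR = "knowledge about task requirements"
    KC = "knowledge about concepts"
    KM = "knowledge about mistakes"
    KH = "knowledge about how to proceed"
    KMC = "knowledge about meta-cognition"


# Hypothetical feedback message in the style of an automated coding exercise;
# a single message is tagged with a set of categories because, as the study
# notes, offered feedback usually integrates more than one type or subtype.
message = "Expected repeat('ab', 3) to return 'ababab'; check your loop bound."
labels = {ElaboratedFeedback.KM, ElaboratedFeedback.KH}

print(message, "->", sorted(c.name for c in labels))
```

Representing a message's classification as a set of categories rather than a single one mirrors the study's observation that the offered feedback usually integrates more than one type or subtype.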
Related papers
- Beyond Thumbs Up/Down: Untangling Challenges of Fine-Grained Feedback for Text-to-Image Generation [67.88747330066049]
Fine-grained feedback captures nuanced distinctions in image quality and prompt-alignment.
We show that its superiority over coarse-grained feedback is not automatic.
We identify key challenges in eliciting and utilizing fine-grained feedback.
arXiv Detail & Related papers (2024-06-24T17:19:34Z)
- Mining patterns in syntax trees to automate code reviews of student solutions for programming exercises [0.0]
We introduce ECHO, a machine learning method to automate the reuse of feedback in educational code reviews.
Based on annotations from both automated linting tools and human reviewers, we show that ECHO can accurately and quickly predict appropriate feedback annotations.
arXiv Detail & Related papers (2024-04-26T14:03:19Z)
- Improving the Validity of Automatically Generated Feedback via Reinforcement Learning [50.067342343957876]
We propose a framework for feedback generation that optimizes both correctness and alignment using reinforcement learning (RL).
Specifically, we use GPT-4's annotations to create preferences over feedback pairs in an augmented dataset for training via direct preference optimization (DPO).
arXiv Detail & Related papers (2024-03-02T20:25:50Z)
- Scalable Two-Minute Feedback: Digital, Lecture-Accompanying Survey as a Continuous Feedback Instrument [0.0]
Detailed feedback on courses and lecture content is essential for their improvement and also serves as a tool for reflection.
The article uses a digital survey format as formative feedback, which attempts to measure student stress in a quantitative part and to address the participants' reflection in a qualitative part.
The results show a low but constant rate of feedback. Responses mostly cover topics of the lecture content or organizational aspects and were intensively used to report issues within the lecture.
arXiv Detail & Related papers (2023-10-30T08:14:26Z)
- Continually Improving Extractive QA via Human Feedback [59.49549491725224]
We study continually improving an extractive question answering (QA) system via human user feedback.
We conduct experiments involving thousands of user interactions under diverse setups to broaden the understanding of learning from feedback over time.
arXiv Detail & Related papers (2023-05-21T14:35:32Z)
- Bridging the Gap: A Survey on Integrating (Human) Feedback for Natural Language Generation [68.9440575276396]
This survey aims to provide an overview of the recent research that has leveraged human feedback to improve natural language generation.
First, we introduce an encompassing formalization of feedback, and identify and organize existing research into a taxonomy following this formalization.
Second, we discuss how feedback can be described by its format and objective, and cover the two approaches proposed to use feedback (either for training or decoding): directly using the feedback or training feedback models.
Third, we provide an overview of the nascent field of AI feedback, which exploits large language models to make judgments based on a set of principles and minimize the need for human intervention.
arXiv Detail & Related papers (2023-05-01T17:36:06Z)
- Impact of Feedback Type on Explanatory Interactive Learning [4.039245878626345]
Explanatory Interactive Learning (XIL) collects user feedback on visual model explanations to implement a Human-in-the-Loop (HITL) based interactive learning scenario.
We compare the effectiveness of two different user feedback types in image classification tasks.
We show that identifying and annotating spurious image features that a model finds salient yields higher classification and explanation accuracy than user feedback that tells a model to focus on valid image features.
arXiv Detail & Related papers (2022-09-26T07:33:54Z)
- Simulating Bandit Learning from User Feedback for Extractive Question Answering [51.97943858898579]
We study learning from user feedback for extractive question answering by simulating feedback using supervised data.
We show that systems initially trained on a small number of examples can dramatically improve given feedback from users on model-predicted answers.
arXiv Detail & Related papers (2022-03-18T17:47:58Z)
- Explainable Recommender Systems via Resolving Learning Representations [57.24565012731325]
Explanations could help improve user experience and discover system defects.
We propose a novel explainable recommendation model through improving the transparency of the representation learning process.
arXiv Detail & Related papers (2020-08-21T05:30:48Z)
- Pattern Learning for Detecting Defect Reports and Improvement Requests in App Reviews [4.460358746823561]
In this study, we follow novel approaches that target this absence of actionable insights by classifying reviews as defect reports and requests for improvement.
We employ a supervised system that is capable of learning lexico-semantic patterns through genetic programming.
We show that the automatically learned patterns outperform the manually created ones.
arXiv Detail & Related papers (2020-04-19T08:13:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.