An Exploratory Analysis of Feedback Types Used in Online Coding Exercises
- URL: http://arxiv.org/abs/2206.03077v2
- Date: Wed, 9 Nov 2022 11:41:33 GMT
- Title: An Exploratory Analysis of Feedback Types Used in Online Coding Exercises
- Authors: Natalie Kiesler
- Abstract summary: This research aims to identify the feedback types applied by CodingBat, Scratch, and Blockly.
The study revealed difficulties in identifying clear-cut boundaries between feedback types.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Online coding environments can help computing students gain programming practice at their own pace. Informative feedback can be especially beneficial during such self-guided, independent study phases. This research aims to identify the feedback types applied by CodingBat, Scratch, and Blockly. Tutoring feedback as coined by Susanne Narciss, along with the specification of subtypes by Keuning, Jeuring, and Heeren, constitutes the theoretical basis. Accordingly, the five categories of elaborated feedback (knowledge about task requirements, knowledge about concepts, knowledge about mistakes, knowledge about how to proceed, and knowledge about meta-cognition) and their subtypes were used to analyze the available feedback options. The study revealed difficulties in identifying clear-cut boundaries between feedback types, as the offered feedback usually integrates more than one type or subtype. Moreover, currently defined feedback types do not rigorously distinguish individualized from generic feedback. The lack of granularity is also evident in the absence of subtypes relating to the knowledge type of the task. The analysis thus has implications for the future design and investigation of applied tutoring feedback. It encourages future research on feedback types and their implementation in programming exercises, with the goal of defining feedback types that match the demands of novice programmers.
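To make the typology concrete, here is a minimal sketch of how the five categories of elaborated feedback could be encoded as annotation labels for this kind of analysis; the enum member names and the example annotation are illustrative assumptions, not the paper's coding scheme.

```python
from enum import Enum

class ElaboratedFeedback(Enum):
    """The five categories of elaborated (tutoring) feedback after
    Narciss, as used in the analysis; subtypes are omitted here."""
    KTR = "knowledge about task requirements"
    KC = "knowledge about concepts"
    KM = "knowledge about mistakes"
    KH = "knowledge about how to proceed"
    KMC = "knowledge about meta-cognition"

# Hypothetical annotation of a single feedback message; real messages
# often blend several (sub)types, which is exactly the boundary
# problem the study reports.
annotation = {
    "message": "Expected [1, 2] but got [2, 1] -- check the loop order.",
    "types": {ElaboratedFeedback.KM, ElaboratedFeedback.KH},
}
print(sorted(t.value for t in annotation["types"]))
```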
Related papers
- Information Types in Product Reviews [5.202085660445395]
We devise a typology of 24 communicative goals in sentences from the product review domain.
In experiments, we find that the combination of classes in the typology forecasts helpfulness and sentiment of reviews.
Characterizing the types of information in reviews unlocks many opportunities for more effective consumption of this genre.
arXiv Detail & Related papers (2025-02-20T07:44:04Z)
- You're (Not) My Type -- Can LLMs Generate Feedback of Specific Types for Introductory Programming Tasks? [0.4779196219827508]
This paper aims to generate specific types of feedback for programming tasks using Large Language Models (LLMs).
We revisit existing feedback to capture the specifics of the generated feedback, such as randomness, uncertainty, and degrees of variation.
Results have implications for future feedback research with regard to, for example, feedback effects and learners' informational needs.
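As a rough sketch of what requesting a specific feedback type from an LLM could look like, the helper below composes a type-constrained prompt; the per-type instruction texts and the function itself are hypothetical illustrations, not the paper's prompts.

```python
# Illustrative per-type instructions keyed by elaborated-feedback
# category; these texts are assumptions, not taken from the paper.
TYPE_INSTRUCTIONS = {
    "knowledge about mistakes": "Point out what is wrong. Do not reveal the fix.",
    "knowledge about how to proceed": "Describe the next step. Do not give code.",
}

def build_prompt(task: str, code: str, feedback_type: str) -> str:
    """Compose a prompt that constrains the model to one feedback type."""
    return (
        "You are a tutor for novice programmers.\n"
        f"Task: {task}\n"
        f"Student code:\n{code}\n"
        f"Give feedback of exactly one type: {feedback_type}. "
        f"{TYPE_INSTRUCTIONS[feedback_type]}"
    )

print(build_prompt("Sum a list of numbers.",
                   "def total(xs): return 0",
                   "knowledge about mistakes"))
```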
arXiv Detail & Related papers (2024-12-04T17:57:39Z)
- Beyond Thumbs Up/Down: Untangling Challenges of Fine-Grained Feedback for Text-to-Image Generation [67.88747330066049]
Fine-grained feedback captures nuanced distinctions in image quality and prompt alignment.
We show that its superiority over coarse-grained feedback is not automatic.
We identify key challenges in eliciting and utilizing fine-grained feedback.
arXiv Detail & Related papers (2024-06-24T17:19:34Z)
- Mining patterns in syntax trees to automate code reviews of student solutions for programming exercises [0.0]
We introduce ECHO, a machine learning method to automate the reuse of feedback in educational code reviews.
Based on annotations from both automated linting tools and human reviewers, we show that ECHO can accurately and quickly predict appropriate feedback annotations.
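ECHO's learned method is not reproduced here, but a toy sketch of the underlying idea (mapping recurring syntax-tree patterns in student code to reusable feedback annotations) might look as follows, using Python's standard ast module; the pattern-to-feedback table is invented for illustration, whereas ECHO learns such associations from prior annotations.

```python
import ast

# Invented mapping from coarse syntax-tree patterns to reusable review
# comments; ECHO learns which annotations fit which patterns instead
# of hard-coding them.
PATTERN_FEEDBACK = {
    ("For", "Call"): "If you loop over range(len(xs)), consider "
                     "iterating over the list directly.",
    ("Compare", "Constant"): "Avoid comparing against literals like "
                             "True; use the expression itself.",
}

def patterns(tree: ast.AST):
    """Yield (parent, child) node-type pairs as candidate patterns."""
    for parent in ast.walk(tree):
        for child in ast.iter_child_nodes(parent):
            yield (type(parent).__name__, type(child).__name__)

def suggest_feedback(source: str):
    """Collect every reusable comment whose pattern occurs in the code."""
    tree = ast.parse(source)
    return sorted({PATTERN_FEEDBACK[p] for p in patterns(tree)
                   if p in PATTERN_FEEDBACK})

student_code = "for i in range(len(xs)):\n    print(xs[i] == True)\n"
for comment in suggest_feedback(student_code):
    print("-", comment)
```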
arXiv Detail & Related papers (2024-04-26T14:03:19Z)
- Scalable Two-Minute Feedback: Digital, Lecture-Accompanying Survey as a Continuous Feedback Instrument [0.0]
Detailed feedback on courses and lecture content is essential for their improvement and also serves as a tool for reflection.
The article uses a digital survey as formative feedback, measuring student stress in a quantitative part and prompting the participants' reflection in a qualitative part.
The results show a low but constant rate of feedback. Responses mostly cover lecture content or organizational aspects and were used intensively to report issues within the lecture.
arXiv Detail & Related papers (2023-10-30T08:14:26Z)
- Continually Improving Extractive QA via Human Feedback [59.49549491725224]
We study continually improving an extractive question answering (QA) system via human user feedback.
We conduct experiments involving thousands of user interactions under diverse setups to broaden the understanding of learning from feedback over time.
arXiv Detail & Related papers (2023-05-21T14:35:32Z)
- Bridging the Gap: A Survey on Integrating (Human) Feedback for Natural Language Generation [68.9440575276396]
This survey aims to provide an overview of the recent research that has leveraged human feedback to improve natural language generation.
First, we introduce an encompassing formalization of feedback, and identify and organize existing research into a taxonomy following this formalization.
Second, we discuss how feedback can be described by its format and objective, and cover the two approaches proposed to use feedback (either for training or decoding): directly using the feedback or training feedback models.
Third, we provide an overview of the nascent field of AI feedback, which exploits large language models to make judgments based on a set of principles and minimize the need for human intervention.
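As a loose illustration of the survey's formalization, each piece of feedback can be described along dimensions such as its format, its objective, and how it is used; the field names and value sets in this sketch are illustrative choices, not the survey's full taxonomy.

```python
from dataclasses import dataclass
from typing import Literal

@dataclass
class Feedback:
    """One piece of feedback, described along survey-style dimensions."""
    format: Literal["numeric", "ranking", "natural_language"]
    objective: str  # targeted quality, e.g. "factuality" (illustrative)
    use: Literal["training", "decoding"]  # when the feedback is exploited
    strategy: Literal["direct", "feedback_model"]  # used as-is or via a learned model
    source: Literal["human", "ai"]

fb = Feedback(format="natural_language", objective="factuality",
              use="training", strategy="feedback_model", source="human")
print(fb)
```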
arXiv Detail & Related papers (2023-05-01T17:36:06Z)
- Simulating Bandit Learning from User Feedback for Extractive Question Answering [51.97943858898579]
We study learning from user feedback for extractive question answering by simulating feedback using supervised data.
We show that systems initially trained on a small number of examples can dramatically improve given feedback from users on model-predicted answers.
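In drastically simplified form, the simulation treats each predicted answer as a bandit arm whose reward comes from supervised labels rather than a live user; the exact-match reward and epsilon-greedy update below are generic placeholders, not the paper's training procedure.

```python
import random

def simulated_reward(predicted: str, gold: str) -> int:
    """Supervised data stands in for the user: reward 1 iff the
    predicted span matches the gold answer (placeholder signal)."""
    return int(predicted.strip().lower() == gold.strip().lower())

def bandit_round(candidates, values, counts, gold, eps=0.1):
    """One epsilon-greedy round: pick a candidate answer, observe the
    simulated reward, update that arm's running value estimate."""
    if random.random() < eps:
        i = random.randrange(len(candidates))
    else:
        i = max(range(len(candidates)), key=lambda j: values[j])
    r = simulated_reward(candidates[i], gold)
    counts[i] += 1
    values[i] += (r - values[i]) / counts[i]  # incremental mean
    return i, r

candidates = ["in 1998", "Larry Page", "Larry Page and Sergey Brin"]
values, counts = [0.0] * 3, [0] * 3
for _ in range(200):
    bandit_round(candidates, values, counts,
                 gold="Larry Page and Sergey Brin")
print(values)  # the matching span's estimate should approach 1.0
```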
arXiv Detail & Related papers (2022-03-18T17:47:58Z)
- ProtoTransformer: A Meta-Learning Approach to Providing Student Feedback [54.142719510638614]
In this paper, we frame the problem of providing feedback as few-shot classification.
A meta-learner adapts to give feedback to student code on a new programming question from just a few examples by instructors.
Our approach was successfully deployed to deliver feedback on 16,000 student exam solutions in a programming course offered by a tier 1 university.
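The few-shot idea can be sketched with prototypical networks: a class prototype is the mean embedding of an instructor's few labeled examples, and a new submission gets the label of the nearest prototype. The encoder and labels below are stand-ins (so the printed prediction is arbitrary); ProtoTransformer learns the encoder instead.

```python
import numpy as np

def encode(submission: str) -> np.ndarray:
    """Stand-in for a learned code encoder: deterministically maps a
    submission string to a 16-d embedding within one run."""
    seed = abs(hash(submission)) % (2 ** 32)
    return np.random.default_rng(seed).standard_normal(16)

def prototypes(support):
    """Class prototype = mean embedding of the few labeled examples."""
    return {label: np.mean([encode(s) for s in examples], axis=0)
            for label, examples in support.items()}

def predict(query, protos):
    """Assign the feedback label of the nearest prototype."""
    q = encode(query)
    return min(protos, key=lambda label: np.linalg.norm(q - protos[label]))

# A few instructor-annotated examples per feedback label (hypothetical).
support = {
    "off-by-one error": ["for i in range(n + 1): ...", "while i <= n: ..."],
    "missing base case": ["def f(n): return f(n - 1)",
                          "def g(x): return g(x // 2)"],
}
print(predict("for k in range(len(xs) + 1): ...", prototypes(support)))
```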
arXiv Detail & Related papers (2021-07-23T22:41:28Z)
- Explainable Recommender Systems via Resolving Learning Representations [57.24565012731325]
Explanations can improve user experience and help discover system defects.
We propose a novel explainable recommendation model that improves the transparency of the representation learning process.
arXiv Detail & Related papers (2020-08-21T05:30:48Z)