Leveraging Peer Feedback to Improve Visualization Education
- URL: http://arxiv.org/abs/2001.07549v2
- Date: Mon, 1 Jun 2020 16:04:30 GMT
- Title: Leveraging Peer Feedback to Improve Visualization Education
- Authors: Zachariah Beasley and Alon Friedman and Les Piegl and Paul Rosen
- Abstract summary: We discuss the construction and application of peer review in a computer science visualization course.
We evaluate student projects, peer review text, and a post-course questionnaire from 3 semesters of mixed undergraduate and graduate courses.
- Score: 4.679788938455095
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Peer review is a widely utilized pedagogical feedback mechanism for engaging
students, which has been shown to improve educational outcomes. However, we
find limited discussion and empirical measurement of peer review in
visualization coursework. In addition to engagement, peer review provides
direct and diverse feedback and reinforces recently-learned course concepts
through critical evaluation of others' work. In this paper, we discuss the
construction and application of peer review in a computer science visualization
course, including: projects that reuse code and visualizations in a
feedback-guided, continual improvement process and a peer review rubric to
reinforce key course concepts. To measure the effectiveness of the approach, we
evaluate student projects, peer review text, and a post-course questionnaire
from 3 semesters of mixed undergraduate and graduate courses. The results
indicate that course concepts are reinforced with peer review---82% reported
learning more because of peer review, and 75% of students recommended
continuing it. Finally, we provide a road-map for adapting peer review to other
visualization courses to produce more highly engaged students.
Related papers
- Enhancing Student Feedback Using Predictive Models in Visual Literacy Courses [2.366162376710038]
This study uses Naïve Bayes modeling to analyze peer review data obtained from an undergraduate visual literacy course over five years.
Our findings highlight the utility of Naïve Bayes modeling, particularly in the analysis of student comments based on parts of speech.
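The Naïve Bayes comment analysis summarized above can be illustrated with a minimal from-scratch sketch. The labeled comments, the praise/critique label set, and the whitespace tokenization below are hypothetical stand-ins, not the study's actual corpus or features; for simplicity it classifies on word counts rather than part-of-speech tags.

```python
import math
from collections import Counter, defaultdict

# Toy labeled peer-review comments (illustrative, not from the paper).
TRAIN = [
    ("great use of color and clear labels", "praise"),
    ("the chart is clear and well organized", "praise"),
    ("axis labels are missing and the legend is confusing", "critique"),
    ("the color scale is misleading and hard to read", "critique"),
]

def train_nb(examples):
    """Fit a multinomial Naive Bayes model: per-class word counts,
    class document counts, and the shared vocabulary."""
    word_counts = defaultdict(Counter)  # class -> word -> count
    class_counts = Counter()            # class -> number of documents
    vocab = set()
    for text, label in examples:
        tokens = text.split()
        word_counts[label].update(tokens)
        class_counts[label] += 1
        vocab.update(tokens)
    return word_counts, class_counts, vocab

def predict(model, text):
    """Return the most probable class under add-one smoothing."""
    word_counts, class_counts, vocab = model
    total_docs = sum(class_counts.values())
    best, best_lp = None, float("-inf")
    for label in class_counts:
        lp = math.log(class_counts[label] / total_docs)  # log prior
        denom = sum(word_counts[label].values()) + len(vocab)
        for tok in text.split():
            lp += math.log((word_counts[label][tok] + 1) / denom)
        if lp > best_lp:
            best, best_lp = label, lp
    return best

model = train_nb(TRAIN)
print(predict(model, "labels are confusing and hard to read"))  # -> critique
```

A replication closer to the study would substitute part-of-speech features (via a POS tagger) for raw tokens and train on the course's five years of review comments.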
arXiv Detail & Related papers (2024-05-23T20:02:36Z)
- Improving the Validity of Automatically Generated Feedback via Reinforcement Learning [50.067342343957876]
We propose a framework for feedback generation that optimizes both correctness and alignment using reinforcement learning (RL).
Specifically, we use GPT-4's annotations to create preferences over feedback pairs in an augmented dataset for training via direct preference optimization (DPO).
arXiv Detail & Related papers (2024-03-02T20:25:50Z)
- Scalable Two-Minute Feedback: Digital, Lecture-Accompanying Survey as a Continuous Feedback Instrument [0.0]
Detailed feedback on courses and lecture content is essential for their improvement and also serves as a tool for reflection.
The article describes a digital survey format used as formative feedback, which measures student stress in a quantitative part and prompts the participants' reflection in a qualitative part.
The results show a low but constant rate of feedback. Responses mostly cover topics of the lecture content or organizational aspects and were used intensively to report issues within the lecture.
arXiv Detail & Related papers (2023-10-30T08:14:26Z)
- Investigating Fairness Disparities in Peer Review: A Language Model Enhanced Approach [77.61131357420201]
We conduct a thorough and rigorous study on fairness disparities in peer review with the help of large language models (LMs).
We collect, assemble, and maintain a comprehensive relational database for the International Conference on Learning Representations (ICLR) conference from 2017 to date.
We postulate and study fairness disparities on multiple protective attributes of interest, including author gender, geography, and author and institutional prestige.
arXiv Detail & Related papers (2022-11-07T16:19:42Z)
- A literature survey on student feedback assessment tools and their usage in sentiment analysis [0.0]
We evaluate the effectiveness of various in-class feedback assessment methods such as Kahoot!, Mentimeter, Padlet, and polling.
We propose a sentiment analysis model for extracting the explicit suggestions from the students' qualitative feedback comments.
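A simple cue-phrase matcher gives the flavor of extracting explicit suggestions from qualitative student comments. The cue list and sample feedback below are illustrative assumptions, not the survey's actual sentiment analysis model.

```python
import re

# Hypothetical suggestion cues; a real model would learn these from data.
SUGGESTION_CUES = re.compile(
    r"\b(should|could|would suggest|recommend|consider|it would help)\b",
    re.IGNORECASE,
)

def extract_suggestions(comments):
    """Return only the comments containing an explicit-suggestion cue."""
    return [c for c in comments if SUGGESTION_CUES.search(c)]

feedback = [
    "The lecture pace was fine.",
    "You should add more examples to the slides.",
    "I recommend posting the notes before class.",
]
print(extract_suggestions(feedback))  # keeps the two suggestion comments
```

A lexicon pass like this typically serves as a baseline or a candidate filter before a trained classifier scores each comment.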
arXiv Detail & Related papers (2021-09-09T06:56:30Z)
- Ranking Scientific Papers Using Preference Learning [48.78161994501516]
We cast the task as a paper ranking problem based on peer review texts and reviewer scores.
We introduce a novel, multi-faceted generic evaluation framework for making final decisions based on peer reviews.
arXiv Detail & Related papers (2021-09-02T19:41:47Z)
- Polarity in the Classroom: A Case Study Leveraging Peer Sentiment Toward Scalable Assessment [4.588028371034406]
Accurately grading open-ended assignments in large or massive open online courses (MOOCs) is non-trivial.
In this work, we detail the process by which we create our domain-dependent lexicon and aspect-informed review form.
We end by analyzing validity and discussing conclusions from our corpus of over 6800 peer reviews from nine courses.
arXiv Detail & Related papers (2021-08-02T15:45:11Z)
- ProtoTransformer: A Meta-Learning Approach to Providing Student Feedback [54.142719510638614]
In this paper, we frame the problem of providing feedback as few-shot classification.
A meta-learner adapts to give feedback to student code on a new programming question from just a few examples by instructors.
Our approach was successfully deployed to deliver feedback on 16,000 student exam solutions in a programming course offered by a tier 1 university.
arXiv Detail & Related papers (2021-07-23T22:41:28Z)
- Code Review in the Classroom [57.300604527924015]
Young developers in a classroom setting provide a clear picture of the potential favourable and problematic areas of the code review process.
Their feedback suggests that the process has been well received, with some suggestions for improving it.
This paper can serve as a guideline for performing code reviews in the classroom.
arXiv Detail & Related papers (2020-04-19T06:07:45Z)
- Facial Feedback for Reinforcement Learning: A Case Study and Offline Analysis Using the TAMER Framework [51.237191651923666]
We investigate the potential of agent learning from trainers' facial expressions via interpreting them as evaluative feedback.
With a designed CNN-RNN model, our analysis shows that telling trainers to use facial expressions and introducing competition can improve the accuracy of estimating positive and negative feedback.
Our results with a simulation experiment show that learning solely from predicted feedback based on facial expressions is possible.
arXiv Detail & Related papers (2020-01-23T17:50:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.