Enhancing Student Feedback Using Predictive Models in Visual Literacy Courses
- URL: http://arxiv.org/abs/2405.15026v1
- Date: Thu, 23 May 2024 20:02:36 GMT
- Title: Enhancing Student Feedback Using Predictive Models in Visual Literacy Courses
- Authors: Alon Friedman, Kevin Hawley, Paul Rosen, Md Dilshadur Rahman
- Abstract summary: This study uses Naïve Bayes modeling to analyze peer review data obtained from an undergraduate visual literacy course over five years.
Our findings highlight the utility of Naïve Bayes modeling, particularly in the analysis of student comments based on parts of speech.
- Score: 2.366162376710038
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Peer review is a popular feedback mechanism in higher education that actively engages students and provides researchers with a means to assess student engagement. However, there is little empirical support for the durability of peer review, particularly when using predictive data modeling to analyze student comments. This study uses Naïve Bayes modeling to analyze peer review data obtained from an undergraduate visual literacy course over five years. We expand on the research of Friedman and Rosen and of Beasley et al. by focusing on the Naïve Bayes model of students' remarks. Our findings highlight the utility of Naïve Bayes modeling, particularly in the analysis of student comments based on parts of speech, where nouns emerged as the prominent category. Additionally, when examining students' comments using the visual peer review rubric, the lie factor emerged as the predominant factor. Comparing the Naïve Bayes model to Beasley's approach, we found that both help instructors map directions taken in the class, but the Naïve Bayes model provides a more specific outline for forecasting, with a more detailed framework for identifying core topics within the course, enhancing the forecasting of educational directions. Through the application of the holdout method and k-fold cross-validation with continuity correction, we validated the model's predictive accuracy, underscoring its effectiveness in offering deep insights into peer review mechanisms. Our findings suggest that predictive modeling of student comments can provide a new way to better serve students' classroom feedback on their peers' visual work. This can benefit courses by inspiring changes to course content, reinforcement of course content, modification of projects, or modifications to the rubric itself.
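As a rough illustration of the pipeline the abstract describes, the sketch below trains a Naïve Bayes classifier on part-of-speech features of peer-review comments and scores it with k-fold cross-validation. The comments, the rubric labels (e.g. lie_factor), and the feature encoding are illustrative assumptions, not the study's data or exact setup; the continuity correction is omitted.
```python
# A minimal sketch, assuming scikit-learn and NLTK: Naive Bayes over
# part-of-speech features of peer-review comments, validated with k-fold
# cross-validation. Comments and rubric labels are invented placeholders.
import nltk
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import MultinomialNB

# Resource names vary across NLTK versions; failed downloads are harmless.
for resource in ("punkt", "punkt_tab", "averaged_perceptron_tagger",
                 "averaged_perceptron_tagger_eng"):
    nltk.download(resource, quiet=True)

def pos_sequence(comment: str) -> str:
    """Map each token to its part-of-speech tag, e.g. 'DT NN VBZ DT NN'."""
    return " ".join(tag for _, tag in nltk.pos_tag(nltk.word_tokenize(comment)))

# Hypothetical comments labeled with rubric factors such as the lie factor.
comments = [
    "The truncated axis exaggerates the trend.",
    "Good use of color to separate the categories.",
    "The scale distorts the real growth rate.",
    "Clear labels and a legend make the chart readable.",
]
labels = ["lie_factor", "design", "lie_factor", "design"]

X = CountVectorizer().fit_transform(pos_sequence(c) for c in comments)
scores = cross_val_score(MultinomialNB(), X, labels, cv=2)  # k=2 for the tiny sample
print("k-fold accuracy:", scores.mean())
```
On real course data, inspecting the fitted model's per-class tag frequencies would show which parts of speech dominate each rubric category; the study reports nouns as the prominent one.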
Related papers
- Revisiting Self-supervised Learning of Speech Representation from a Mutual Information Perspective [68.20531518525273]
We take a closer look at existing self-supervised methods for speech from an information-theoretic perspective.
We use linear probes to estimate the mutual information between the target information and learned representations.
We explore the potential of evaluating representations in a self-supervised fashion, where we estimate the mutual information between different parts of the data without using any labels.
arXiv Detail & Related papers (2024-01-16T21:13:22Z)
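As a sketch of the probing recipe summarized above: train a linear probe to predict the target from the representations, then convert its cross-entropy into a lower bound via I(Y; Z) >= H(Y) - H(Y | Z). The synthetic data below is an assumption for illustration; the paper's representations, targets, and estimators differ.
```python
# A minimal sketch: lower-bound the mutual information between
# representations Z and a target Y with a linear probe, since the probe's
# cross-entropy estimates the conditional entropy H(Y | Z).
# Z and y below are synthetic stand-ins for learned speech representations.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss

rng = np.random.default_rng(0)
Z = rng.normal(size=(1000, 16))                               # representations
y = (Z[:, 0] + 0.5 * rng.normal(size=1000) > 0).astype(int)   # target info

probe = LogisticRegression().fit(Z[:800], y[:800])
cond_entropy = log_loss(y[800:], probe.predict_proba(Z[800:]))  # ~H(Y|Z), nats

p = y.mean()
marginal_entropy = -(p * np.log(p) + (1 - p) * np.log(1 - p))   # H(Y), nats

print("MI lower bound (nats):", max(0.0, marginal_entropy - cond_entropy))
```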
- Scalable Two-Minute Feedback: Digital, Lecture-Accompanying Survey as a Continuous Feedback Instrument [0.0]
Detailed feedback on courses and lecture content is essential for their improvement and also serves as a tool for reflection.
The article uses a digital survey format as formative feedback, which attempts to measure student stress in a quantitative part and to prompt participants' reflection in a qualitative part.
The results show a low but constant rate of feedback. Responses mostly covered topics of the lecture content or organizational aspects and were used intensively to report issues within the lecture.
arXiv Detail & Related papers (2023-10-30T08:14:26Z)
- Fairness-guided Few-shot Prompting for Large Language Models [93.05624064699965]
In-context learning can suffer from high instability due to variations in training examples, example order, and prompt formats.
We introduce a metric to evaluate the predictive bias of a fixed prompt against labels or given attributes.
We propose a novel search strategy based on greedy search to identify near-optimal prompts that improve the performance of in-context learning.
arXiv Detail & Related papers (2023-03-23T12:28:25Z)
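A minimal sketch of the bias-metric idea: query the model with the fixed prompt and a content-free test input, then measure how far the resulting label distribution is from uniform. The label_distribution function below is a hypothetical stand-in for an LLM call, and the KL-from-uniform choice is an assumption, not necessarily the paper's exact metric.
```python
# Sketch of a predictive-bias check for a fixed prompt: a perfectly
# "fair" prompt should yield a near-uniform label distribution on a
# content-free input. `label_distribution` is a hypothetical stand-in
# for querying an actual LLM.
import math

def label_distribution(prompt: str, test_input: str) -> dict[str, float]:
    # Placeholder: in practice, send prompt + test_input to the model and
    # normalize the probabilities of each label token.
    return {"positive": 0.7, "negative": 0.3}

def predictive_bias(dist: dict[str, float]) -> float:
    """KL divergence from the uniform distribution over the labels."""
    k = len(dist)
    return sum(p * math.log(p * k) for p in dist.values() if p > 0)

dist = label_distribution("Review: great film. Sentiment: positive\n...", "N/A")
print("bias:", predictive_bias(dist))  # 0.0 would mean perfectly balanced
```
A greedy search in this spirit would, at each step, add the demonstration that most reduces the measured bias.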
- Predicting Desirable Revisions of Evidence and Reasoning in Argumentative Writing [1.0878040851638]
We develop models to classify desirable evidence and desirable reasoning revisions in student argumentative writing.
We explore two ways to improve performance: using the essay context of the revision, and using the feedback students received before the revision.
Our results show that while a model using feedback information improves over a baseline model, models utilizing context - either alone or with feedback - are the most successful in identifying desirable revisions.
arXiv Detail & Related papers (2023-02-10T03:59:59Z)
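As a sketch of the comparison above, the snippet below classifies revisions as desirable or not, concatenating separate feature blocks for the revision text, its essay context, and the prior feedback. All texts, labels, and the choice of TF-IDF with logistic regression are illustrative assumptions, not the paper's models.
```python
# A minimal sketch: combine revision text with essay context and prior
# feedback as separate feature blocks for a desirability classifier.
# All example texts and labels are invented placeholders.
from scipy.sparse import hstack
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

revisions = ["added a statistic from the cited study",
             "fixed a comma splice",
             "explained why the evidence supports the claim",
             "reworded the sentence slightly"]
contexts = ["paragraph argues school starts too early",
            "introductory paragraph",
            "paragraph links sleep data to the thesis",
            "concluding paragraph"]
feedback = ["add supporting evidence", "no comment",
            "explain your reasoning", "no comment"]
labels = [1, 0, 1, 0]  # 1 = desirable revision

# One vectorizer per field keeps the blocks (and their vocabularies) separate.
X = hstack([TfidfVectorizer().fit_transform(field)
            for field in (revisions, contexts, feedback)]).tocsr()

print(cross_val_score(LogisticRegression(), X, labels, cv=2).mean())
```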
- Investigating Fairness Disparities in Peer Review: A Language Model Enhanced Approach [77.61131357420201]
We conduct a thorough and rigorous study of fairness disparities in peer review with the help of large language models (LMs).
We collect, assemble, and maintain a comprehensive relational database for the International Conference on Learning Representations (ICLR) from 2017 to date.
We postulate and study fairness disparities across multiple protected attributes of interest, including author gender, geography, and author and institutional prestige.
arXiv Detail & Related papers (2022-11-07T16:19:42Z)
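A toy sketch of the kind of disparity check such a database enables: compare acceptance rates across one protected attribute. The records below are invented placeholders; the paper works from a full ICLR database and studies several attributes jointly.
```python
# A minimal sketch: measure an acceptance-rate disparity across a single
# protected attribute. The five toy records are invented placeholders.
import pandas as pd

reviews = pd.DataFrame({
    "institution_rank": ["top50", "top50", "other", "other", "other"],
    "accepted":         [1,       1,       0,       1,       0],
})

rates = reviews.groupby("institution_rank")["accepted"].mean()
print(rates)
print("acceptance-rate gap:", rates.max() - rates.min())
```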
- Explain, Edit, and Understand: Rethinking User Study Design for Evaluating Model Explanations [97.91630330328815]
We conduct a crowdsourcing study, where participants interact with deception detection models that have been trained to distinguish between genuine and fake hotel reviews.
We observe that for a linear bag-of-words model, participants with access to the feature coefficients during training are able to cause a larger reduction in model confidence in the testing phase when compared to the no-explanation control.
arXiv Detail & Related papers (2021-12-17T18:29:56Z)
- Aspect Based Sentiment Analysis with Aspect-Specific Opinion Spans [66.77264982885086]
We present a neat and effective structured attention model by aggregating multiple linear-chain CRFs.
Such a design allows the model to extract aspect-specific opinion spans and then evaluate sentiment polarity by exploiting the extracted opinion features.
arXiv Detail & Related papers (2020-10-06T13:18:35Z)
- Evaluations and Methods for Explanation through Robustness Analysis [117.7235152610957]
We establish a novel set of evaluation criteria for such feature-based explanations by robustness analysis.
We obtain new explanations that are loosely necessary and sufficient for a prediction.
We extend the explanation to extract the set of features that would move the current prediction to a target class.
arXiv Detail & Related papers (2020-05-31T05:52:05Z)
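As a crude sketch of the robustness flavor of explanation described above: greedily mask the features whose removal most erodes confidence in the predicted class until the prediction flips, yielding a small "loosely necessary" feature set. The synthetic model and zero-masking are simplifying assumptions, not the paper's method.
```python
# A minimal sketch: greedily zero out features until the prediction flips;
# the removed set is "necessary" for the prediction in a loose sense.
# Model and data are synthetic stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 8))
y = (X[:, 0] - X[:, 3] > 0).astype(int)
model = LogisticRegression().fit(X, y)

x = X[0].copy()
pred = model.predict(x.reshape(1, -1))[0]
removed = []
while model.predict(x.reshape(1, -1))[0] == pred and len(removed) < len(x):
    candidates = [i for i in range(len(x)) if i not in removed]
    # Mask the feature whose removal lowers confidence in `pred` the most.
    def confidence_without(i: int) -> float:
        masked = x.copy()
        masked[i] = 0.0
        return model.predict_proba(masked.reshape(1, -1))[0][pred]
    best = min(candidates, key=confidence_without)
    removed.append(best)
    x[best] = 0.0

print("features removed before the prediction flipped:", removed)
```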
- Leveraging Peer Feedback to Improve Visualization Education [4.679788938455095]
We discuss the construction and application of peer review in a computer science visualization course.
We evaluate student projects, peer review text, and a post-course questionnaire from 3 semesters of mixed undergraduate and graduate courses.
arXiv Detail & Related papers (2020-01-12T21:46:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.