SETSum: Summarization and Visualization of Student Evaluations of
Teaching
- URL: http://arxiv.org/abs/2207.03640v1
- Date: Fri, 8 Jul 2022 01:40:11 GMT
- Title: SETSum: Summarization and Visualization of Student Evaluations of
Teaching
- Authors: Yinuo Hu, Shiyue Zhang, Viji Sathy, A. T. Panter, Mohit Bansal
- Abstract summary: Student Evaluations of Teaching (SETs) are widely used in colleges and universities.
SETSum provides organized illustrations of SET findings to instructors and other reviewers.
- Score: 74.76373136325032
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Student Evaluations of Teaching (SETs) are widely used in colleges and
universities. Typically, SET results are summarized for instructors in a static
PDF report. The report often includes summary statistics for quantitative
ratings and an unsorted list of open-ended student comments. The lack of
organization and summarization of the raw comments hinders those interpreting
the reports from fully utilizing informative feedback, making accurate
inferences, and designing appropriate instructional improvements. In this work,
we introduce a novel system, SETSum, that leverages sentiment analysis, aspect
extraction, summarization, and visualization techniques to provide organized
illustrations of SET findings to instructors and other reviewers. Ten
university professors from diverse departments serve as evaluators of the
system and all agree that SETSum helps them interpret SET results more
efficiently; and 6 out of 10 instructors prefer our system over the standard
static PDF report (while the remaining 4 would like to have both). This
demonstrates that our work holds the potential to reform the SET reporting
conventions in the future. Our code is available at
https://github.com/evahuyn/SETSum
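
To make the described pipeline concrete, here is a minimal sketch of the kind of sentiment-plus-aspect tagging the abstract mentions. It is not the authors' implementation (see the repository linked above); the sentiment model, the ASPECTS keyword lexicon, and the analyze_comments helper are all illustrative assumptions.

```python
# Minimal sketch of sentiment + aspect tagging for SET comments.
# NOT the SETSum implementation (see the GitHub repo above); the model
# choice, aspect lexicon, and helper below are illustrative assumptions.
from collections import defaultdict

from transformers import pipeline  # pip install transformers

# Off-the-shelf sentiment classifier (defaults to a DistilBERT SST-2 model).
sentiment = pipeline("sentiment-analysis")

# Hypothetical aspect lexicon mapping course aspects to trigger keywords.
ASPECTS = {
    "lectures": ["lecture", "slides", "explanation"],
    "assessment": ["exam", "quiz", "grading", "homework"],
    "instructor": ["professor", "instructor", "office hours"],
}

def analyze_comments(comments):
    """Tag each comment with sentiment and matched aspects, then aggregate."""
    counts = defaultdict(lambda: {"POSITIVE": 0, "NEGATIVE": 0})
    for text in comments:
        label = sentiment(text)[0]["label"]  # "POSITIVE" or "NEGATIVE"
        for aspect, keywords in ASPECTS.items():
            if any(k in text.lower() for k in keywords):
                counts[aspect][label] += 1
    return dict(counts)

print(analyze_comments([
    "The lectures were engaging and the slides were clear.",
    "Grading on the exams felt inconsistent.",
]))
```

On top of such per-comment tags, a system like SETSum adds comment summarization and interactive visualization; the details are in the linked repository.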
Related papers
- Evaluating D-MERIT of Partial-annotation on Information Retrieval [77.44452769932676]
Retrieval models are often evaluated on partially-annotated datasets.
We show that using partially-annotated datasets in evaluation can paint a distorted picture.
arXiv Detail & Related papers (2024-06-23T08:24:08Z)
- Using Generative Text Models to Create Qualitative Codebooks for Student Evaluations of Teaching [0.0]
Student evaluations of teaching (SETs) are important sources of feedback for educators.
A collection of SETs can also be useful to administrators as signals for courses or entire programs.
We discuss a novel method for analyzing SETs using natural language processing (NLP) and large language models (LLMs).
arXiv Detail & Related papers (2024-03-18T17:21:35Z)
- Scalable Two-Minute Feedback: Digital, Lecture-Accompanying Survey as a Continuous Feedback Instrument [0.0]
Detailed feedback on courses and lecture content is essential for their improvement and also serves as a tool for reflection.
The article describes a digital survey format used as formative feedback, which measures student stress in a quantitative part and prompts the participants' reflection in a qualitative part.
The results show a low but constant rate of feedback. Responses mostly cover lecture content or organizational aspects and were used intensively to report issues within the lecture.
arXiv Detail & Related papers (2023-10-30T08:14:26Z)
- Improving Feedback from Automated Reviews of Student Spreadsheets [0.0]
We have developed an Intelligent Tutoring System (ITS) to review students' Excel submissions and provide individualized feedback automatically.
The lecturer only needs to provide one reference solution; the students' submissions are then analyzed automatically.
To take the students' learning level into account, we have developed feedback levels for an ITS that reveal gradually more information about the error (a hypothetical sketch appears after this list).
arXiv Detail & Related papers (2023-10-14T08:12:39Z)
- Unimodal and Multimodal Representation Training for Relation Extraction [0.0]
Multimodal integration of text, layout and visual information has achieved SOTA results in visually rich document understanding (VrDU) tasks, including relation extraction (RE).
Here, we demonstrate the value of shared representations for RE tasks by conducting experiments in which each data type is iteratively excluded during training.
While a bimodal text and layout approach performs best, we show that text is the most important single predictor of entity relations.
arXiv Detail & Related papers (2022-11-11T12:39:35Z)
- Measuring "Why" in Recommender Systems: a Comprehensive Survey on the Evaluation of Explainable Recommendation [87.82664566721917]
This survey is based on more than 100 papers from top-tier conferences like IJCAI, AAAI, TheWebConf, Recsys, UMAP, and IUI.
arXiv Detail & Related papers (2022-02-14T02:58:55Z)
- Polarity in the Classroom: A Case Study Leveraging Peer Sentiment Toward Scalable Assessment [4.588028371034406]
Accurately grading open-ended assignments in large or massive open online courses (MOOCs) is non-trivial.
In this work, we detail the process by which we create our domain-dependent lexicon and aspect-informed review form.
We end by analyzing validity and discussing conclusions from our corpus of over 6800 peer reviews from nine courses.
arXiv Detail & Related papers (2021-08-02T15:45:11Z)
- Are Top School Students More Critical of Their Professors? Mining Comments on RateMyProfessor.com [83.2634062100579]
Student reviews and comments on RateMyProfessor.com reflect realistic learning experiences of students.
Our study proves that student reviews and comments contain crucial information and can serve as essential references for enrollment in courses and universities.
arXiv Detail & Related papers (2021-01-23T20:01:36Z)
- Unsupervised Reference-Free Summary Quality Evaluation via Contrastive Learning [66.30909748400023]
We propose to evaluate summary quality without reference summaries via unsupervised contrastive learning.
Specifically, we design a new metric based on BERT that covers both linguistic quality and semantic informativeness.
Experiments on Newsroom and CNN/Daily Mail demonstrate that our new evaluation method outperforms other metrics even without reference summaries (a rough sketch of the reference-free idea appears after this list).
arXiv Detail & Related papers (2020-10-05T05:04:14Z)
- Overview of the TREC 2019 Fair Ranking Track [65.15263872493799]
The goal of the TREC Fair Ranking track was to develop a benchmark for evaluating retrieval systems in terms of fairness to different content providers.
This paper presents an overview of the track, including the task definition, descriptions of the data and the annotation process.
arXiv Detail & Related papers (2020-03-25T21:34:58Z)
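
Two of the entries above describe techniques concrete enough to sketch. First, the graduated feedback levels from "Improving Feedback from Automated Reviews of Student Spreadsheets" might look roughly like the following; the Error structure and the level texts are hypothetical, not the paper's actual design.

```python
# Hypothetical sketch of graduated ITS feedback: each level reveals
# progressively more information about the same detected error.
from dataclasses import dataclass

@dataclass
class Error:
    cell: str       # e.g. "B7"
    expected: str   # formula in the reference solution
    found: str      # formula in the student's submission

def feedback(error: Error, level: int) -> str:
    """Return feedback at the requested level (1 = vaguest, 3 = most detailed)."""
    if level == 1:
        return "Something in your spreadsheet does not match the expected result."
    if level == 2:
        return f"Check cell {error.cell}; its formula seems incorrect."
    return (f"Cell {error.cell} contains {error.found}, "
            f"but the reference solution uses {error.expected}.")

err = Error(cell="B7", expected="=SUM(B2:B6)", found="=SUM(B2:B5)")
for lvl in (1, 2, 3):
    print(f"Level {lvl}: {feedback(err, lvl)}")
```

Second, the reference-free idea from "Unsupervised Reference-Free Summary Quality Evaluation via Contrastive Learning" can be illustrated by scoring a summary against its source document rather than a gold reference. The paper trains its metric with contrastive learning; the sketch below substitutes off-the-shelf sentence embeddings, and the model name and helper function are assumptions.

```python
# Reference-free scoring sketch: compare a summary to its source document
# in embedding space instead of to a human-written reference summary.
# The paper's metric is learned contrastively; this only illustrates the
# reference-free idea with off-the-shelf embeddings (model is an assumption).
from sentence_transformers import SentenceTransformer, util  # pip install sentence-transformers

model = SentenceTransformer("all-MiniLM-L6-v2")

def reference_free_score(document: str, summary: str) -> float:
    """Cosine similarity between document and summary embeddings."""
    doc_emb, sum_emb = model.encode([document, summary], convert_to_tensor=True)
    return util.cos_sim(doc_emb, sum_emb).item()

doc = "Student evaluations of teaching are summarized for instructors in a static report."
print(reference_free_score(doc, "Instructors get a static summary of student evaluations."))  # higher
print(reference_free_score(doc, "The weather was sunny all week."))  # lower
```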
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.