Course Difficulty Estimation Based on Mapping of Bloom's Taxonomy and ABET Criteria
- URL: http://arxiv.org/abs/2112.01241v1
- Date: Tue, 16 Nov 2021 12:53:21 GMT
- Title: Course Difficulty Estimation Based on Mapping of Bloom's Taxonomy and ABET Criteria
- Authors: Premalatha M, Suganya G, Viswanathan V, G Jignesh Chowdary
- Abstract summary: We propose a methodology that estimates the difficulty level of a course.
The estimated difficulty level is validated based on the history of grades secured by the students.
- Score: 0.0
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: The current educational system uses grades or marks to assess student
performance. The marks or grades a student scores depend on several
parameters, the main one being the difficulty level of the course.
Computing this difficulty level can support both students and teachers in
fixing the level of training needed for successful completion of the
course. In this paper, we propose a methodology that estimates the
difficulty level of a course by mapping Bloom's taxonomy action words
along with Accreditation Board for Engineering and Technology (ABET) criteria
and learning outcomes. The estimated difficulty level is validated based on the
history of grades secured by the students.
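The core idea of mapping action words to difficulty can be sketched as follows. This is an illustrative sketch only: the verb-to-level mapping, the averaging scheme, and the function names are assumptions for demonstration, not the paper's exact methodology.

```python
# Hypothetical sketch: score course learning outcomes by the Bloom's
# taxonomy level of the action verbs they contain. The verb table and
# the simple averaging are illustrative assumptions.
BLOOM_LEVELS = {
    "remember": 1, "define": 1, "list": 1,
    "understand": 2, "explain": 2, "summarize": 2,
    "apply": 3, "implement": 3, "solve": 3,
    "analyze": 4, "compare": 4, "differentiate": 4,
    "evaluate": 5, "justify": 5, "critique": 5,
    "create": 6, "design": 6, "develop": 6,
}

def estimate_difficulty(outcomes):
    """Average Bloom level of the action words found in learning outcomes."""
    levels = []
    for outcome in outcomes:
        for word in outcome.lower().split():
            if word in BLOOM_LEVELS:
                levels.append(BLOOM_LEVELS[word])
    return sum(levels) / len(levels) if levels else 0.0

outcomes = [
    "Explain the principles of operating systems",   # level 2
    "Design a memory management scheme",             # level 6
]
print(estimate_difficulty(outcomes))  # (2 + 6) / 2 = 4.0
```

A real system would also need to weight in the ABET criteria attached to each outcome; here only the Bloom-level component is shown.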
Related papers
- Gaining Insights into Group-Level Course Difficulty via Differential Course Functioning [3.9829166809129095]
This study introduces Differential Course Functioning (DCF) as an Item Response Theory (IRT)-based CA methodology.
DCF controls for student performance levels and examines whether significant differences exist in how distinct student groups succeed in a course.
arXiv Detail & Related papers (2024-05-07T14:19:11Z)
- Overcoming Pitfalls in Graph Contrastive Learning Evaluation: Toward Comprehensive Benchmarks [60.82579717007963]
We introduce an enhanced evaluation framework designed to more accurately gauge the effectiveness, consistency, and overall capability of Graph Contrastive Learning (GCL) methods.
arXiv Detail & Related papers (2024-02-24T01:47:56Z)
- Faithful Knowledge Distillation [75.59907631395849]
We focus on two crucial questions with regard to a teacher-student pair: (i) do the teacher and student disagree at points close to correctly classified dataset examples, and (ii) is the distilled student as confident as the teacher around dataset examples?
These are critical questions when considering the deployment of a smaller student network trained from a robust teacher within a safety-critical setting.
arXiv Detail & Related papers (2023-06-07T13:41:55Z)
- Distantly-Supervised Named Entity Recognition with Adaptive Teacher Learning and Fine-grained Student Ensemble [56.705249154629264]
Self-training teacher-student frameworks are proposed to improve the robustness of NER models.
In this paper, we propose an adaptive teacher learning comprised of two teacher-student networks.
Fine-grained student ensemble updates each fragment of the teacher model with a temporal moving average of the corresponding fragment of the student, which enhances consistent predictions on each model fragment against noise.
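The temporal moving average mentioned above is the standard exponential-moving-average (EMA) teacher update. A minimal sketch of that mechanism, with an illustrative momentum value and parameter representation (per-fragment application is omitted):

```python
# Minimal EMA sketch: blend each teacher parameter toward the student's
# value. Momentum 0.99 and the flat parameter lists are assumptions.
def ema_update(teacher_params, student_params, momentum=0.99):
    """Return teacher parameters nudged toward the student by (1 - momentum)."""
    return [momentum * t + (1 - momentum) * s
            for t, s in zip(teacher_params, student_params)]

teacher = [1.0, 0.0]
student = [0.0, 1.0]
print(ema_update(teacher, student))  # approximately [0.99, 0.01]
```

A high momentum keeps the teacher smooth across steps, which is what suppresses the effect of noisy student predictions on any single fragment.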
arXiv Detail & Related papers (2022-12-13T12:14:09Z)
- Continual Learning For On-Device Environmental Sound Classification [63.81276321857279]
We propose a simple and efficient continual learning method for on-device environmental sound classification.
Our method selects the historical data for the training by measuring the per-sample classification uncertainty.
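Uncertainty-driven selection of this kind is often implemented with predictive entropy. A hedged sketch of the general approach, where the entropy criterion and the fixed selection size `k` are assumptions rather than the paper's exact procedure:

```python
import math

# Illustrative sketch: keep the k historical samples whose predicted
# class distributions have the highest entropy (most uncertain).
def entropy(probs):
    """Shannon entropy of a discrete probability distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def select_uncertain(samples, predictions, k):
    """Rank samples by predictive entropy and return the k most uncertain."""
    ranked = sorted(zip(samples, predictions),
                    key=lambda sp: entropy(sp[1]), reverse=True)
    return [s for s, _ in ranked[:k]]

samples = ["a", "b", "c"]
preds = [[0.9, 0.1], [0.5, 0.5], [0.99, 0.01]]
print(select_uncertain(samples, preds, 2))  # ['b', 'a']
```

The intuition is that confidently classified samples add little information to a replay buffer, so the budget is spent on ambiguous ones.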
arXiv Detail & Related papers (2022-07-15T12:13:04Z)
- Parameter-Efficient and Student-Friendly Knowledge Distillation [83.56365548607863]
We present a parameter-efficient and student-friendly knowledge distillation method, namely PESF-KD, to achieve efficient and sufficient knowledge transfer.
Experiments on a variety of benchmarks show that PESF-KD can significantly reduce the training cost while obtaining competitive results compared to advanced online distillation methods.
arXiv Detail & Related papers (2022-05-28T16:11:49Z)
- Auxiliary Task Guided Interactive Attention Model for Question Difficulty Prediction [6.951136079043972]
We propose a multi-task method with an interactive attention mechanism, Qdiff, for jointly predicting Bloom's taxonomy and difficulty levels of academic questions.
The proposed learning method would help learn representations that capture the relationship between Bloom's taxonomy and difficulty labels.
arXiv Detail & Related papers (2022-05-24T19:55:30Z)
- Impacts of Students Academic Performance Trajectories on Final Academic Success [0.0]
We apply a Hidden Markov Model (HMM) to provide a standard and intuitive classification over students' academic-performance levels.
Based on student transcript data from University of Central Florida, our proposed HMM is trained using sequences of students' course grades for each semester.
arXiv Detail & Related papers (2022-01-21T15:32:35Z)
- Towards Equity and Algorithmic Fairness in Student Grade Prediction [2.9189409618561966]
This work addresses equity of educational outcome and fairness of AI with respect to race.
We trial several strategies for both label and instance balancing to attempt to minimize differences in algorithm performance with respect to race.
We find that an adversarial learning approach, combined with grade label balancing, achieved by far the fairest results.
arXiv Detail & Related papers (2021-05-14T01:12:01Z)
- Investigating Class-level Difficulty Factors in Multi-label Classification Problems [23.51529285126783]
This work investigates the use of class-level difficulty factors in multi-label classification problems for the first time.
Four difficulty factors are proposed: frequency, visual variation, semantic abstraction, and class co-occurrence.
These difficulty factors are shown to have several potential applications including the prediction of class-level performance across datasets.
arXiv Detail & Related papers (2020-05-01T15:06:53Z)
- CurricularFace: Adaptive Curriculum Learning Loss for Deep Face Recognition [79.92240030758575]
We propose a novel Adaptive Curriculum Learning loss (CurricularFace) that embeds the idea of curriculum learning into the loss function.
Our CurricularFace adaptively adjusts the relative importance of easy and hard samples during different training stages.
arXiv Detail & Related papers (2020-04-01T08:43:10Z)
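The adaptive easy/hard weighting in CurricularFace can be sketched as follows. This is a simplified reading, assuming the published formulation in which hard negative logits are scaled by a factor (t + cos θ) while easy ones pass through; the margin and the fixed `t` here are illustrative, and the moving-average update of `t` over training is omitted.

```python
import math

# Simplified sketch of a curriculum-style margin loss adjustment:
# negatives harder than the margined target are re-weighted by (t + cos),
# so their influence grows as t increases over training stages.
def curricular_logits(cos_target, cos_negatives, margin=0.5, t=0.0):
    """Apply an additive angular margin to the target class and
    adaptively emphasize hard negatives."""
    cos_margined = math.cos(math.acos(cos_target) + margin)
    adjusted = []
    for c in cos_negatives:
        if c > cos_margined:              # hard negative for this sample
            adjusted.append(c * (t + c))  # emphasized more as t grows
        else:                             # easy negative, left unchanged
            adjusted.append(c)
    return cos_margined, adjusted

target, negatives = curricular_logits(0.9, [0.2, 0.8], margin=0.5, t=0.0)
print(negatives)  # easy negative 0.2 unchanged; hard 0.8 becomes ~0.64
```

Early in training (small `t`) hard negatives are down-weighted, mimicking an easy-first curriculum; later, larger `t` shifts emphasis onto them.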
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.