Construction and Preliminary Validation of a Dynamic Programming Concept Inventory
- URL: http://arxiv.org/abs/2411.14655v1
- Date: Fri, 22 Nov 2024 01:01:43 GMT
- Title: Construction and Preliminary Validation of a Dynamic Programming Concept Inventory
- Authors: Matthew Ferland, Varun Nagaraj Rao, Arushi Arora, Drew van der Poel, Michael Luu, Randy Huynh, Freddy Reiber, Sandra Ossman, Seth Poulsen, Michael Shindler
- Abstract summary: Concept inventories are standardized assessments that evaluate student understanding of key concepts within academic disciplines.
While prevalent across STEM fields, their development lags for advanced computer science topics like dynamic programming (DP).
We detail the iterative process used to formulate multiple-choice questions targeting known student misconceptions about DP concepts identified through prior research studies.
We conducted a preliminary psychometric validation by administering the DPCI to 172 undergraduate CS students, finding our questions to be of appropriate difficulty and to discriminate effectively between differing levels of student understanding.
- Abstract: Concept inventories are standardized assessments that evaluate student understanding of key concepts within academic disciplines. While prevalent across STEM fields, their development lags for advanced computer science topics like dynamic programming (DP) -- an algorithmic technique that poses significant conceptual challenges for undergraduates. To fill this gap, we developed and validated a Dynamic Programming Concept Inventory (DPCI). We detail the iterative process used to formulate multiple-choice questions targeting known student misconceptions about DP concepts identified through prior research studies. We discuss key decisions, tradeoffs, and challenges faced in crafting probing questions to subtly reveal these conceptual misunderstandings. We conducted a preliminary psychometric validation by administering the DPCI to 172 undergraduate CS students, finding our questions to be of appropriate difficulty and effectively discriminating between differing levels of student understanding. Taken together, our validated DPCI will enable instructors to accurately assess student mastery of DP. Moreover, our approach for devising a concept inventory for an advanced theoretical computer science concept can guide future efforts to create assessments for other under-evaluated areas currently lacking coverage.
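The abstract names DP misconceptions without giving an example. As a hypothetical illustration (not an item from the DPCI itself), one classic conceptual point such inventories probe is that memoizing overlapping subproblems turns exponential recursion into linear-time DP:

```python
from functools import lru_cache

def fib_naive(n: int) -> int:
    """Plain recursion: recomputes overlapping subproblems (exponential time)."""
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n: int) -> int:
    """Top-down DP: each subproblem is solved exactly once (linear time)."""
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

def fib_table(n: int) -> int:
    """Bottom-up DP: tabulate subproblems in dependency order."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# All three agree on the answer; only their cost profiles differ.
assert fib_naive(10) == fib_memo(10) == fib_table(10) == 55
```

A student who believes memoization changes *which* answers are computed, rather than *how often* subproblems are solved, would hold exactly the kind of misconception a DP concept inventory is designed to surface.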
Related papers
- Investigating the Use of Productive Failure as a Design Paradigm for Learning Introductory Python Programming [7.8163934921246945]
Productive Failure (PF) is a learning approach where students tackle novel problems targeting concepts they have not yet learned, followed by a consolidation phase where these concepts are taught.
Recent application in STEM disciplines suggests that PF can help learners develop more robust conceptual knowledge.
We designed a novel PF-based learning activity that incorporated the unobtrusive collection of real-time heart-rate data from consumer-grade wearable sensors.
We found that although there was no difference in initial learning outcomes between the groups, students who followed the PF approach showed better knowledge retention and performance on delayed but similar tasks.
arXiv Detail & Related papers (2024-11-18T01:39:05Z)
- Coding for Intelligence from the Perspective of Category [66.14012258680992]
Coding targets compressing and reconstructing data, whereas intelligence concerns abstraction and generalization.
Recent trends demonstrate the potential homogeneity of these two fields.
We propose a novel problem of Coding for Intelligence from the category theory view.
arXiv Detail & Related papers (2024-07-01T07:05:44Z)
- Editable Concept Bottleneck Models [36.38845338945026]
Concept Bottleneck Models (CBMs) have garnered much attention for their ability to elucidate the prediction process through a human-understandable concept layer.
In many scenarios, we need to remove or insert training data or new concepts in trained CBMs for various reasons, such as privacy concerns, data mislabelling, spurious concepts, and concept annotation errors.
We propose Editable Concept Bottleneck Models (ECBMs) to address these challenges. Specifically, ECBMs support three different levels of data removal: concept-label-level, concept-level, and data-level.
arXiv Detail & Related papers (2024-05-24T11:55:46Z)
- Towards Goal-oriented Intelligent Tutoring Systems in Online Education [69.06930979754627]
We propose a new task, named Goal-oriented Intelligent Tutoring Systems (GITS).
GITS aims to enable a student's mastery of a designated concept by strategically planning a customized sequence of exercises and assessments.
We propose a novel graph-based reinforcement learning framework, named Planning-Assessment-Interaction (PAI).
arXiv Detail & Related papers (2023-12-03T12:37:16Z)
- Counterfactual Monotonic Knowledge Tracing for Assessing Students' Dynamic Mastery of Knowledge Concepts [3.2687390531088414]
Assessing students' dynamic mastery of knowledge concepts is crucial for offline teaching and online educational applications.
Since students' mastery of knowledge concepts is often unlabeled, existing KT methods rely on the implicit paradigm of inferring mastery from historical practice.
We propose a principled approach called Counterfactual Monotonic Knowledge Tracing (CMKT).
arXiv Detail & Related papers (2023-08-07T07:57:26Z)
- Set-to-Sequence Ranking-based Concept-aware Learning Path Recommendation [49.85548436111153]
We propose a novel framework named Set-to-Sequence Ranking-based Concept-aware Learning Path Recommendation (SRC).
SRC formulates the recommendation task under a set-to-sequence paradigm.
We conduct extensive experiments on two real-world public datasets and one industrial dataset.
arXiv Detail & Related papers (2023-06-07T08:24:44Z) - A Human-Centered Review of Algorithms in Decision-Making in Higher
Education [16.578096382702597]
We reviewed an extensive corpus of papers proposing algorithms for decision-making in higher education.
We found that the models are trending toward deep learning and toward increased use of student personal data and protected attributes, despite the associated decrease in interpretability and explainability.
Current development predominantly fails to incorporate human-centered lenses.
arXiv Detail & Related papers (2023-02-12T02:30:50Z) - Towards a Holistic Understanding of Mathematical Questions with
Contrastive Pre-training [65.10741459705739]
We propose a novel contrastive pre-training approach for mathematical question representations, namely QuesCo.
We first design two-level question augmentations, including content-level and structure-level, which generate literally diverse question pairs with similar purposes.
Then, to fully exploit hierarchical information of knowledge concepts, we propose a knowledge hierarchy-aware rank strategy.
arXiv Detail & Related papers (2023-01-18T14:23:29Z) - Concept Learners for Few-Shot Learning [76.08585517480807]
We propose COMET, a meta-learning method that improves generalization ability by learning to learn along human-interpretable concept dimensions.
We evaluate our model on few-shot tasks from diverse domains, including fine-grained image classification, document categorization and cell type annotation.
arXiv Detail & Related papers (2020-07-14T22:04:17Z) - A Competence-aware Curriculum for Visual Concepts Learning via Question
Answering [95.35905804211698]
We propose a competence-aware curriculum for visual concept learning in a question-answering manner.
We design a neural-symbolic concept learner for learning the visual concepts and a multi-dimensional Item Response Theory (mIRT) model for guiding the learning process.
Experimental results on CLEVR show that, with a competence-aware curriculum, the proposed method achieves state-of-the-art performance.
arXiv Detail & Related papers (2020-07-03T05:08:09Z) - Experiences and Lessons Learned Creating and Validating Concept
Inventories for Cybersecurity [0.0]
The Cybersecurity Concept Inventory (CCI) is for students who have recently completed any first course in cybersecurity.
The Cybersecurity Curriculum Assessment (CCA) is for students who have recently completed an undergraduate major or track in cybersecurity.
Each assessment tool comprises 25 multiple-choice questions (MCQs) of various difficulties that target the same five core concepts.
arXiv Detail & Related papers (2020-04-10T22:40:04Z)
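Several entries above report psychometric validation in terms of item difficulty and discrimination. As a minimal classical-test-theory sketch (not the papers' actual analysis; the toy response data below is made up), difficulty is the proportion of correct answers per item, and discrimination is the point-biserial correlation between an item score and the student's total score:

```python
from math import sqrt

def _pearson(x, y):
    """Pearson correlation; assumes both lists have nonzero variance."""
    mx = sum(x) / len(x)
    my = sum(y) / len(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return num / den

def item_stats(responses):
    """Classical item analysis.

    responses: one list of 0/1 item scores per student.
    Returns one (difficulty, discrimination) pair per item: difficulty is
    the proportion of correct answers; discrimination is the point-biserial
    correlation between the item score and the student's total score.
    """
    totals = [sum(r) for r in responses]
    stats = []
    for j in range(len(responses[0])):
        scores = [r[j] for r in responses]
        difficulty = sum(scores) / len(scores)
        stats.append((difficulty, _pearson(scores, totals)))
    return stats

# Toy data: four students, three items (1 = correct answer).
demo = item_stats([[1, 1, 1], [1, 0, 1], [1, 0, 0], [0, 0, 0]])
```

This simplified version correlates each item against the uncorrected total (which includes the item itself); real validations typically use corrected item-total correlations and IRT models such as the mIRT mentioned above.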
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.