Kompetenzerwerbsförderung durch E-Assessment: Individuelle
Kompetenzerfassung am Beispiel des Fachs Mathematik
(Fostering Competency Acquisition through E-Assessment: Individual Competency Assessment Using the Example of Mathematics)
- URL: http://arxiv.org/abs/2108.09072v1
- Date: Fri, 20 Aug 2021 08:55:09 GMT
- Title: Kompetenzerwerbsförderung durch E-Assessment: Individuelle
Kompetenzerfassung am Beispiel des Fachs Mathematik
- Authors: Roy Meissner, Claudia Ruhland, Katja Ihsberner
- Abstract summary: We present a concept of how micro- and e-assessments can be used for the mathematical domain to automatically determine acquired and missing individual skills.
The models required for this concept are a digitally prepared and annotated e-assessment item pool, a digital modeling of the domain that includes topics, necessary competencies, as well as introductory and continuative material.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this article, we present a concept of how micro- and e-assessments can be
used in the mathematical domain to automatically determine acquired and
missing individual skills and, based on this information, guide individuals
through a software-supported process to acquire missing or additional skills.
The concept requires three models: a digitally prepared and annotated
e-assessment item pool; a digital model of the domain covering topics, the
necessary competencies, and introductory and continuative material; and a
digital individual model that reliably records competencies and accounts
for their loss over time.
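The three models named in the abstract (annotated item pool, domain model, individual model) can be sketched as plain data structures. This is a minimal illustrative sketch, not the paper's implementation; all class and field names below are assumptions.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Competency:
    """A single skill in the domain model (hypothetical representation)."""
    name: str

@dataclass
class Topic:
    """A domain topic with required competencies and learning material."""
    name: str
    required: set          # competencies needed for this topic
    material: list         # introductory and continuative material

@dataclass
class Item:
    """An e-assessment item annotated with the competencies it covers."""
    prompt: str
    assesses: set

@dataclass
class IndividualModel:
    """Records which competencies an individual has acquired or lost."""
    acquired: set = field(default_factory=set)

    def record_pass(self, item: Item) -> None:
        # Passing an item marks all of its annotated competencies as acquired.
        self.acquired |= item.assesses

    def record_decay(self, competency: Competency) -> None:
        # Models loss of a competency, e.g. after prolonged non-use.
        self.acquired.discard(competency)

    def missing_for(self, topic: Topic) -> set:
        # Competencies the individual still lacks for a given topic,
        # which the guidance process could target with the topic's material.
        return topic.required - self.acquired
```

A guidance loop would then repeatedly pick items covering `missing_for(topic)` and direct the learner to the associated material until the set is empty.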
Related papers
- Towards the design of model-based means and methods to characterize and diagnose teachers' digital maturity [0.3683202928838613]
This article examines how models of teacher digital maturity can be combined to produce a unified version that can be used to design diagnostic tools and methods.
The models and how their constituent dimensions contribute to the determination of maturity levels were analyzed.
arXiv Detail & Related papers (2024-11-04T12:21:26Z) - RESTOR: Knowledge Recovery through Machine Unlearning [71.75834077528305]
Large language models trained on web-scale corpora can memorize undesirable datapoints.
Many machine unlearning methods have been proposed that aim to 'erase' these datapoints from trained models.
We propose the RESTOR framework for machine unlearning based on the following dimensions.
arXiv Detail & Related papers (2024-10-31T20:54:35Z) - Digital Accessibility Literacy: A Conceptual Framework for Training on Digital Accessibility [0.0]
This article takes up the current discourse on the description of literacy and uses it to develop the concept of digital accessibility literacy.
Digital accessibility literacy encompasses both the creation (encoding) and interpretation (decoding) of accessible digital content and technologies.
This comprehensive approach improves technical skills and instills ethical and social responsibility.
arXiv Detail & Related papers (2024-10-15T16:10:39Z) - Data Analysis in the Era of Generative AI [56.44807642944589]
This paper explores the potential of AI-powered tools to reshape data analysis, focusing on design considerations and challenges.
We explore how the emergence of large language and multimodal models offers new opportunities to enhance various stages of data analysis workflow.
We then examine human-centered design principles that facilitate intuitive interactions, build user trust, and streamline the AI-assisted analysis workflow across multiple apps.
arXiv Detail & Related papers (2024-09-27T06:31:03Z) - A Document-based Knowledge Discovery with Microservices Architecture [0.0]
We point out the key challenges in the context of knowledge discovery and present an approach to addressing these using a database architecture.
Our solution led to a conceptual design focusing on keyword extraction, calculation of document similarity in natural language, and programming-language-independent provision of the extracted information.
arXiv Detail & Related papers (2024-06-13T09:28:31Z) - How Beaufort, Neumann and Gates met? Subject integration with
spreadsheeting [0.0]
It is found that both students' content knowledge and their digital skills developed more efficiently than in traditional coursebook-based and decontextualized digital environments.
The method presented here can be adapted to any paper-based problems whose solutions would be more effective in a digital environment.
arXiv Detail & Related papers (2023-08-31T20:02:42Z) - What and How of Machine Learning Transparency: Building Bespoke
Explainability Tools with Interoperable Algorithmic Components [77.87794937143511]
This paper introduces a collection of hands-on training materials for explaining data-driven predictive models.
These resources cover the three core building blocks of this technique: interpretable representation composition, data sampling and explanation generation.
arXiv Detail & Related papers (2022-09-08T13:33:25Z) - SF-PATE: Scalable, Fair, and Private Aggregation of Teacher Ensembles [50.90773979394264]
This paper studies a model that protects the privacy of individuals' sensitive information while still learning non-discriminatory predictors.
A key characteristic of the proposed model is that it enables the adoption of off-the-shelf, non-private fair models to create a privacy-preserving and fair model.
arXiv Detail & Related papers (2022-04-11T14:42:54Z) - Digital Editions as Distant Supervision for Layout Analysis of Printed
Books [76.29918490722902]
We describe methods for exploiting this semantic markup as distant supervision for training and evaluating layout analysis models.
In experiments with several model architectures on the half-million pages of the Deutsches Textarchiv (DTA), we find a high correlation of these region-level evaluation methods with pixel-level and word-level metrics.
We discuss the possibilities for improving accuracy with self-training and the ability of models trained on the DTA to generalize to other historical printed books.
arXiv Detail & Related papers (2021-12-23T16:51:53Z) - A Diagnostic Study of Explainability Techniques for Text Classification [52.879658637466605]
We develop a list of diagnostic properties for evaluating existing explainability techniques.
We compare the saliency scores assigned by the explainability techniques with human annotations of salient input regions to find relations between a model's performance and the agreement of its rationales with human ones.
arXiv Detail & Related papers (2020-09-25T12:01:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences arising from its use.