Towards the design of model-based means and methods to characterize and diagnose teachers' digital maturity
- URL: http://arxiv.org/abs/2411.02025v1
- Date: Mon, 04 Nov 2024 12:21:26 GMT
- Title: Towards the design of model-based means and methods to characterize and diagnose teachers' digital maturity
- Authors: Christine Michel, Laëtitia Pierrot
- Abstract summary: This article examines how models of teacher digital maturity can be combined to produce a unified version that can be used to design diagnostic tools and methods.
The models and how their constituent dimensions contribute to the determination of maturity levels were analyzed.
- Abstract: This article examines how models of teacher digital maturity can be combined to produce a unified version that can be used to design diagnostic tools and methods. Eleven models applicable to the field of compulsory education were identified through a literature review. The models and how their constituent dimensions contribute to the determination of maturity levels were analyzed. The summary highlights the diversity of the dimensions used and the fact that digital maturity is only partially taken into account. Moreover, most of these models focus on the most recent maturity levels associated with innovative or pioneering teachers. The models tend to exclude teachers who are not digital users or who have a low level of digital use, but who are present in the French context. In the final part of the article, a proposal for a unified model of teachers' digital maturity, MUME, which addresses these two issues, is described, together with the preliminary results of a study aimed at designing a diagnostic method.
Related papers
- Dual-Teacher Ensemble Models with Double-Copy-Paste for 3D Semi-Supervised Medical Image Segmentation
Semi-supervised learning (SSL) techniques address the high labeling costs in 3D medical image segmentation.
We introduce the Staged Selective Ensemble (SSE) module, which selects different ensemble methods based on the characteristics of the samples.
Experimental results demonstrate the effectiveness of our proposed method in 3D medical image segmentation tasks.
arXiv Detail & Related papers (2024-10-15T11:23:15Z)
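For illustration only, a minimal PyTorch sketch of the general idea of selecting an ensembling strategy per sample (the function and the agreement rule below are assumptions, not the paper's SSE module): average the two teachers' predictions where they agree, and fall back to the more confident teacher where they disagree.

```python
import torch

def ensemble_pseudo_labels(probs_t1, probs_t2):
    """Combine two teachers' voxel-wise class probabilities into pseudo-labels.

    probs_t1, probs_t2: (B, C, D, H, W) softmax outputs of the two teachers.
    Illustrative stand-in for a staged/selective ensemble, not the SSE module.
    """
    pred1, pred2 = probs_t1.argmax(dim=1), probs_t2.argmax(dim=1)
    agree = (pred1 == pred2)                      # voxels where teachers agree

    averaged = (probs_t1 + probs_t2) / 2          # simple mean ensemble
    conf1 = probs_t1.max(dim=1).values            # per-voxel confidence
    conf2 = probs_t2.max(dim=1).values
    pick_t1 = (conf1 >= conf2).unsqueeze(1)       # keep the more confident teacher
    confident = torch.where(pick_t1, probs_t1, probs_t2)

    # Mean ensemble where the teachers agree, confidence-based selection elsewhere.
    combined = torch.where(agree.unsqueeze(1), averaged, confident)
    return combined.argmax(dim=1), combined
```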
- Exploring and Enhancing the Transfer of Distribution in Knowledge Distillation for Autoregressive Language Models
Knowledge distillation (KD) is a technique that compresses large teacher models by training smaller student models to mimic them.
This paper introduces Online Knowledge Distillation (OKD), where the teacher network integrates small online modules to concurrently train with the student model.
OKD achieves or exceeds the performance of leading methods in various model architectures and sizes, reducing training time by up to fourfold.
arXiv Detail & Related papers (2024-09-19T07:05:26Z)
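As a hedged illustration of online (concurrent) distillation in general, not the OKD implementation itself, the sketch below updates teacher and student in the same training loop, with the student matching the teacher's temperature-softened outputs through a KL term.

```python
import torch
import torch.nn.functional as F

def online_kd_step(teacher, student, opt_t, opt_s, x, y, T=2.0, alpha=0.5):
    """One concurrent update of teacher and student (illustrative only)."""
    t_logits = teacher(x)
    s_logits = student(x)

    # The teacher keeps learning on the task loss.
    loss_t = F.cross_entropy(t_logits, y)

    # The student mixes the task loss with a KL term on softened logits.
    kd = F.kl_div(F.log_softmax(s_logits / T, dim=-1),
                  F.softmax(t_logits.detach() / T, dim=-1),
                  reduction="batchmean") * (T * T)
    loss_s = alpha * F.cross_entropy(s_logits, y) + (1 - alpha) * kd

    opt_t.zero_grad(); loss_t.backward(); opt_t.step()
    opt_s.zero_grad(); loss_s.backward(); opt_s.step()
    return loss_t.item(), loss_s.item()
```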
- ComKD-CLIP: Comprehensive Knowledge Distillation for Contrastive Language-Image Pre-training Model
We propose ComKD-CLIP: Comprehensive Knowledge Distillation for the Contrastive Language-Image Pre-training (CLIP) Model.
It distills the knowledge from a large teacher CLIP model into a smaller student model, ensuring comparable performance with significantly reduced parameters.
EduAttention explores the cross-relationships between text features extracted by the teacher model and image features extracted by the student model.
arXiv Detail & Related papers (2024-08-08T01:12:21Z)
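The summary names EduAttention without detailing it; the following minimal cross-attention sketch (PyTorch; dimensions, projections, and the class name are assumptions) shows one plausible form of letting student image features attend over teacher text features.

```python
import torch
import torch.nn as nn

class CrossFeatureAttention(nn.Module):
    """Hypothetical stand-in for an EduAttention-style block: student image
    features (queries) attend over teacher text features (keys/values)."""

    def __init__(self, img_dim=512, txt_dim=768, dim=256):
        super().__init__()
        self.q = nn.Linear(img_dim, dim)
        self.k = nn.Linear(txt_dim, dim)
        self.v = nn.Linear(txt_dim, dim)
        self.scale = dim ** -0.5

    def forward(self, student_img_feats, teacher_txt_feats):
        # student_img_feats: (B, N_img, img_dim); teacher_txt_feats: (B, N_txt, txt_dim)
        q = self.q(student_img_feats)
        k = self.k(teacher_txt_feats)
        v = self.v(teacher_txt_feats)
        attn = torch.softmax(q @ k.transpose(-2, -1) * self.scale, dim=-1)
        return attn @ v      # (B, N_img, dim) fused cross-modal features
```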
- Alternate Diverse Teaching for Semi-supervised Medical Image Segmentation
We propose AD-MT, an alternate diverse teaching approach in a teacher-student framework.
It involves a single student model and two non-trainable teacher models that are momentum-updated periodically and randomly in an alternate fashion.
arXiv Detail & Related papers (2023-11-29T02:44:54Z)
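A hedged sketch of how two frozen teachers can both track the student yet stay diverse (PyTorch; the period length and random alternation rule are assumptions, not the published schedule): only one teacher receives the momentum update during any given period.

```python
import copy
import random
import torch

class AlternateTeachers:
    """Two frozen teachers; only one is EMA-updated in any given period."""

    def __init__(self, student, period=100, momentum=0.99):
        self.teachers = [copy.deepcopy(student), copy.deepcopy(student)]
        for t in self.teachers:
            for p in t.parameters():
                p.requires_grad_(False)
        self.period, self.m = period, momentum
        self.active = 0

    @torch.no_grad()
    def step(self, student, it):
        # Re-draw the active teacher at each period boundary (random alternation).
        if it % self.period == 0:
            self.active = random.randrange(2)
        # Momentum (EMA) update of the active teacher only.
        t = self.teachers[self.active]
        for p_t, p_s in zip(t.parameters(), student.parameters()):
            p_t.mul_(self.m).add_(p_s, alpha=1 - self.m)
```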
- Periodically Exchange Teacher-Student for Source-Free Object Detection
Source-free object detection (SFOD) aims to adapt the source detector to unlabeled target domain data in the absence of source domain data.
Most SFOD methods follow the same self-training paradigm, using a mean-teacher (MT) framework in which the student model is guided by only a single teacher model.
We propose the Periodically Exchange Teacher-Student (PETS) method, a simple yet novel approach that introduces a multiple-teacher framework consisting of a static teacher, a dynamic teacher, and a student model.
arXiv Detail & Related papers (2023-11-23T11:30:54Z)
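One plausible reading of the static/dynamic split, sketched below for illustration (PyTorch; the exchange rule is an assumption, not the published schedule): the dynamic teacher follows the student by exponential moving average, and every fixed number of iterations its weights are exchanged with the static teacher.

```python
import torch

@torch.no_grad()
def periodic_exchange_step(student, dynamic_t, static_t, it,
                           momentum=0.999, period=1000):
    """One maintenance step of a static/dynamic/student trio (illustrative)."""
    # EMA: the dynamic teacher tracks the student at every iteration.
    for p_d, p_s in zip(dynamic_t.parameters(), student.parameters()):
        p_d.mul_(momentum).add_(p_s, alpha=1 - momentum)

    # Periodic exchange between the static and dynamic teachers.
    if it > 0 and it % period == 0:
        for p_st, p_d in zip(static_t.parameters(), dynamic_t.parameters()):
            tmp = p_st.clone()
            p_st.copy_(p_d)
            p_d.copy_(tmp)
```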
- Demystifying Digital Twin Buzzword: A Novel Generic Evaluation Model
Despite the growing popularity of digital twin (DT) development, there is a lack of common understanding and definition of key DT concepts.
This article proposes a four-dimensional evaluation framework to assess the maturity of digital twins across different domains.
arXiv Detail & Related papers (2023-11-21T19:56:26Z)
- Improving Neural Topic Models with Wasserstein Knowledge Distillation
We propose a knowledge distillation framework to compress a contextualized topic model without loss in topic quality.
Experiments show that the student trained with knowledge distillation achieves topic coherence much higher than that of the original student model.
arXiv Detail & Related papers (2023-03-27T16:07:44Z)
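The summary does not spell out the loss; the following hedged sketch (PyTorch; the entropic Sinkhorn approximation, index-wise topic matching, and word-embedding ground cost are assumptions) shows what a Wasserstein distillation term between teacher and student topic-word distributions could look like.

```python
import torch

def sinkhorn_wasserstein(p, q, cost, eps=0.05, n_iters=200):
    """Entropic-regularized Wasserstein distance between two distributions
    p, q (each of shape (V,)) under a (V, V) ground-cost matrix."""
    K = torch.exp(-cost / eps)                  # Gibbs kernel
    u = torch.ones_like(p)
    for _ in range(n_iters):
        v = q / (K.t() @ u + 1e-12)
        u = p / (K @ v + 1e-12)
    transport = u.unsqueeze(1) * K * v.unsqueeze(0)   # diag(u) K diag(v)
    return (transport * cost).sum()

def topic_distillation_loss(teacher_topics, student_topics, word_emb):
    """Average Wasserstein distance between index-matched teacher/student topics.

    teacher_topics, student_topics: (T, V) rows are topic-word distributions.
    word_emb: (V, d) word embeddings used to build the ground cost (assumed).
    """
    cost = torch.cdist(word_emb, word_emb)      # (V, V) pairwise distances
    losses = [sinkhorn_wasserstein(t, s, cost)
              for t, s in zip(teacher_topics, student_topics)]
    return torch.stack(losses).mean()
```

A plain KL term would compare the distributions word by word; the transport-based loss above additionally accounts for how semantically far apart the mismatched words are.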
- EmbedDistill: A Geometric Knowledge Distillation for Information Retrieval
Large neural models (such as Transformers) achieve state-of-the-art performance for information retrieval (IR).
We propose a novel distillation approach that leverages the relative geometry among queries and documents learned by the large teacher model.
We show that our approach successfully distills from both dual-encoder (DE) and cross-encoder (CE) teacher models to 1/10th size asymmetric students that can retain 95-97% of the teacher performance.
arXiv Detail & Related papers (2023-01-27T22:04:37Z)
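As a hedged sketch of the geometric idea (PyTorch; shapes are assumptions and only one ingredient of the approach is shown), the student can be pushed to reproduce the teacher's relative query-document geometry via its similarity matrix rather than individual scores.

```python
import torch
import torch.nn.functional as F

def geometry_distillation_loss(q_t, d_t, q_s, d_s):
    """Match the student's query-document similarity structure to the teacher's.

    q_t, d_t: teacher query/document embeddings, shapes (B, dt) and (N, dt).
    q_s, d_s: student embeddings, shapes (B, ds) and (N, ds).
    Illustrative stand-in, not the exact EmbedDistill objective.
    """
    sim_t = F.normalize(q_t, dim=-1) @ F.normalize(d_t, dim=-1).t()   # (B, N)
    sim_s = F.normalize(q_s, dim=-1) @ F.normalize(d_s, dim=-1).t()   # (B, N)

    # Compare score distributions over documents so only the relative
    # geometry matters, not the absolute scale of the scores.
    return F.kl_div(F.log_softmax(sim_s, dim=-1),
                    F.softmax(sim_t, dim=-1),
                    reduction="batchmean")
```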
- Digital Editions as Distant Supervision for Layout Analysis of Printed Books
We describe methods for exploiting the semantic markup in digital editions as distant supervision for training and evaluating layout analysis models.
In experiments with several model architectures on the half-million pages of the Deutsches Textarchiv (DTA), we find a high correlation of these region-level evaluation methods with pixel-level and word-level metrics.
We discuss the possibilities for improving accuracy with self-training and the ability of models trained on the DTA to generalize to other historical printed books.
arXiv Detail & Related papers (2021-12-23T16:51:53Z)
- Reinforced Multi-Teacher Selection for Knowledge Distillation
Knowledge distillation is a popular method for model compression.
Current methods assign a fixed weight to each teacher model throughout distillation, and most allocate an equal weight to every teacher.
In this paper, we observe that, because training examples vary in complexity and student models differ in capability, learning differentially from teacher models can lead to better-performing distilled students.
arXiv Detail & Related papers (2020-12-11T08:56:39Z)
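The paper learns the selection with reinforcement learning; the hedged sketch below (PyTorch; the weighting inputs are assumptions and the selection policy is omitted) shows only the example-wise weighted multi-teacher distillation loss that such a policy would feed.

```python
import torch
import torch.nn.functional as F

def weighted_multi_teacher_kd(student_logits, teacher_logits_list, weights, T=2.0):
    """Example-wise weighted distillation from several teachers.

    student_logits: (B, C); teacher_logits_list: list of K tensors of shape (B, C);
    weights: (B, K) per-example teacher weights, e.g. produced by a learned
    selection policy (here simply an input).
    """
    log_p_s = F.log_softmax(student_logits / T, dim=-1)
    loss = student_logits.new_zeros(student_logits.size(0))          # (B,)
    for k, t_logits in enumerate(teacher_logits_list):
        p_t = F.softmax(t_logits.detach() / T, dim=-1)
        kl = F.kl_div(log_p_s, p_t, reduction="none").sum(dim=-1)    # per-example KL
        loss = loss + weights[:, k] * kl
    return (T * T) * loss.mean()
```

With uniform weights this reduces to the equal-weight baseline the paper criticizes; example-dependent weights are what the reinforced selection provides.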
- MED-TEX: Transferring and Explaining Knowledge with Less Data from Pretrained Medical Imaging Models
A small student model is learned with less data by distilling knowledge from a cumbersome pretrained teacher model.
An explainer module is introduced to highlight the regions of an input that are important for the predictions of the teacher model.
Our framework outperforms state-of-the-art methods on knowledge distillation and model interpretation tasks on a fundus dataset.
arXiv Detail & Related papers (2020-08-06T11:50:32Z)
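A hedged sketch of how a distillation term and an explainer can be combined (PyTorch; the mask-based formulation and function names are assumptions, not the MED-TEX objective): the student mimics the teacher, while the explainer produces a mask whose retained regions alone should preserve the teacher's prediction.

```python
import torch
import torch.nn.functional as F

def kd_and_explainer_losses(teacher, student, explainer, x, T=2.0, sparsity=1e-3):
    """Joint distillation + explanation objective (illustrative only).

    explainer(x) is assumed to return a per-pixel mask in [0, 1] with the same
    spatial size as x; teacher and student map images to class logits.
    """
    with torch.no_grad():
        t_logits = teacher(x)

    # Distillation: the student matches the teacher's softened predictions.
    s_logits = student(x)
    kd = F.kl_div(F.log_softmax(s_logits / T, dim=-1),
                  F.softmax(t_logits / T, dim=-1),
                  reduction="batchmean") * (T * T)

    # Explanation: regions kept by the mask should preserve the teacher's
    # prediction; a sparsity penalty keeps the highlighted area small.
    mask = explainer(x)                        # (B, 1, H, W) in [0, 1]
    m_logits = teacher(x * mask)
    explain = F.kl_div(F.log_softmax(m_logits / T, dim=-1),
                       F.softmax(t_logits / T, dim=-1),
                       reduction="batchmean")
    return kd, explain + sparsity * mask.abs().mean()
```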