Closing the Block-to-Text Gap: A Domain-Specific JavaScript Editor for Early Computational Thinking
- URL: http://arxiv.org/abs/2512.00012v1
- Date: Wed, 15 Oct 2025 18:54:53 GMT
- Title: Closing the Block-to-Text Gap: A Domain-Specific JavaScript Editor for Early Computational Thinking
- Authors: Andrei Enea
- Abstract summary: This paper presents a web-based JavaScript editor designed to help children aged 8-10 transition from block-based to text-based programming. The system encourages creativity, self-correction, and sustained engagement, offering educators a practical tool for authentic coding.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper presents a web-based JavaScript editor designed to help children aged 8-10 transition from block-based to text-based programming. The system introduces a simplified domain-specific language (DSL) focused on visual art, combining authentic JavaScript syntax with immediate, creative visual feedback. A four-week pilot study (N = 15) demonstrated significant improvements in computational thinking skills (mean CTCI gain of +10.9, p < 0.001), along with a 70% reduction in syntax errors. Participants advanced from basic drawing functions to sophisticated algorithmic designs using loops, conditionals, and animations. By integrating constructionist principles with a visual-first DSL, this research contributes a validated pedagogical framework for easing the block-to-text transition in K-12 computer science education. The system encourages creativity, self-correction, and sustained engagement, offering educators a practical, scalable tool for introducing authentic coding to young learners.
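The paper does not publish the editor's DSL API, but its description (authentic JavaScript syntax, drawing primitives, loops, conditionals, immediate visual feedback) suggests programs of roughly the following shape. This is a hypothetical sketch: the function names `setColor`, `moveTo`, and `circle` are assumptions for illustration, and drawing commands are recorded to an array rather than rendered, so the sketch runs outside a browser.

```javascript
// Hypothetical sketch of a visual-art DSL of the kind the paper describes:
// real JavaScript syntax (functions, loops, conditionals) driving simple
// drawing primitives. The primitive names are assumptions, not the actual
// editor's API. Commands are recorded instead of drawn so this runs in Node.

const commands = [];

function setColor(name) { commands.push(["color", name]); }
function moveTo(x, y)   { commands.push(["move", x, y]); }
function circle(radius) { commands.push(["circle", radius]); }

// A child-level program: a ring of eight circles with alternating colors.
for (let i = 0; i < 8; i++) {
  const angle = (i / 8) * 2 * Math.PI;
  setColor(i % 2 === 0 ? "red" : "blue");
  moveTo(Math.round(100 * Math.cos(angle)), Math.round(100 * Math.sin(angle)));
  circle(10);
}

console.log(`${commands.length} drawing commands recorded`); // 24 commands
```

A real implementation would map each recorded command onto a canvas call, but the pedagogical point survives the simplification: a short loop over genuine JavaScript syntax produces an immediately visible algorithmic pattern.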
Related papers
- Autograder+: A Multi-Faceted AI Framework for Rich Pedagogical Feedback in Programming Education [0.5529795221640363]
Autograder+ is designed to shift autograding from a purely summative process to a formative learning experience. It introduces two key capabilities: automated feedback generation using a fine-tuned Large Language Model, and visualization of student code submissions to uncover learning patterns.
arXiv Detail & Related papers (2025-10-30T11:41:50Z)
- Language-Inspired Relation Transfer for Few-shot Class-Incremental Learning [42.923762020491495]
We propose a new Language-inspired Relation Transfer (LRT) paradigm to understand objects by joint visual clues and text depictions. Our proposed LRT outperforms the state-of-the-art models by over 13% and 7% on the final session of mini-ImageNet and CIFAR-100 FSCIL benchmarks.
arXiv Detail & Related papers (2025-01-10T10:59:27Z)
- Handwritten Code Recognition for Pen-and-Paper CS Education [33.53124589437863]
Teaching Computer Science (CS) by having students write programs by hand on paper has key pedagogical advantages.
However, a key obstacle is the current lack of teaching methods and support software for working with and running handwritten programs.
Our approach integrates two innovative methods. The first combines OCR with an indentation recognition module and a language model designed for post-OCR error correction without introducing hallucinations.
arXiv Detail & Related papers (2024-08-07T21:02:17Z)
- IntCoOp: Interpretability-Aware Vision-Language Prompt Tuning [94.52149969720712]
IntCoOp learns to jointly align attribute-level inductive biases and class embeddings during prompt-tuning.
IntCoOp improves CoOp by 7.35% in average performance across 10 diverse datasets.
arXiv Detail & Related papers (2024-06-19T16:37:31Z)
- ChatScratch: An AI-Augmented System Toward Autonomous Visual Programming Learning for Children Aged 6-12 [13.943361631775113]
ChatScratch is an AI-augmented system to facilitate autonomous programming learning for young children.
ChatScratch employs structured interactive storyboards and visual cues to overcome artist's block.
arXiv Detail & Related papers (2024-02-07T15:55:51Z)
- Visually-augmented pretrained language models for NLP tasks without images [77.74849855049523]
Existing solutions often rely on explicit images for visual knowledge augmentation.
We propose a novel Visually-Augmented fine-tuning approach.
Our approach can consistently improve the performance of BERT, RoBERTa, BART, and T5 at different scales.
arXiv Detail & Related papers (2022-12-15T16:13:25Z)
- Giving Feedback on Interactive Student Programs with Meta-Exploration [74.5597783609281]
Developing interactive software, such as websites or games, is a particularly engaging way to learn computer science.
Standard approaches require instructors to manually grade student-implemented interactive programs.
Online platforms that serve millions, like Code.org, are unable to provide any feedback on assignments for implementing interactive programs.
arXiv Detail & Related papers (2022-11-16T10:00:23Z)
- CLIP also Understands Text: Prompting CLIP for Phrase Understanding [65.59857372525664]
Contrastive Language-Image Pretraining (CLIP) efficiently learns visual concepts by pre-training with natural language supervision.
In this paper, we find that the text encoder of CLIP actually demonstrates strong ability for phrase understanding, and can even significantly outperform popular language models such as BERT with a properly designed prompt.
arXiv Detail & Related papers (2022-10-11T23:35:18Z)
- LASP: Text-to-Text Optimization for Language-Aware Soft Prompting of Vision & Language Models [67.19124099815645]
We propose a novel Language-Aware Soft Prompting (LASP) learning method to alleviate base class overfitting.
LASP is inherently amenable to including, during training, virtual classes, i.e. class names for which no visual samples are available.
LASP matches and surpasses, for the first time, the accuracy on novel classes obtained by hand-crafted prompts and CLIP for 8 out of 11 test datasets.
arXiv Detail & Related papers (2022-10-03T17:56:35Z)
- ProtoTransformer: A Meta-Learning Approach to Providing Student Feedback [54.142719510638614]
In this paper, we frame the problem of providing feedback as few-shot classification.
A meta-learner adapts to give feedback to student code on a new programming question from just a few examples by instructors.
Our approach was successfully deployed to deliver feedback to 16,000 student exam-solutions in a programming course offered by a tier 1 university.
arXiv Detail & Related papers (2021-07-23T22:41:28Z)
- Handwriting Quality Analysis using Online-Offline Models [4.61479186986544]
This work is part of an innovative e-learning project allowing the development of an advanced digital educational tool.
It automatically detects mistakes, gives real-time on-line feedback for children's writing, and helps teachers comprehend and evaluate children's writing skills.
arXiv Detail & Related papers (2020-10-09T14:33:56Z)
- Contrastive Code Representation Learning [95.86686147053958]
We show that the popular reconstruction-based BERT model is sensitive to source code edits, even when the edits preserve semantics.
We propose ContraCode: a contrastive pre-training task that learns code functionality, not form.
arXiv Detail & Related papers (2020-07-09T17:59:06Z)
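The contrastive idea behind ContraCode can be illustrated with a minimal InfoNCE-style objective: embeddings of semantically equivalent code variants (positives) should score higher similarity than embeddings of unrelated code (negatives). The sketch below is a generic illustration, not ContraCode's actual implementation; the 2-D vectors stand in for learned code embeddings.

```javascript
// Minimal sketch of an InfoNCE-style contrastive objective, of the kind
// ContraCode applies to code: a semantics-preserving variant of a program
// (positive) should embed closer to the anchor than unrelated code
// (negatives). The toy 2-D vectors are illustrative assumptions, not real
// learned embeddings.

function dot(a, b) { return a.reduce((s, x, i) => s + x * b[i], 0); }
function norm(a) { return Math.sqrt(dot(a, a)); }
function cosine(a, b) { return dot(a, b) / (norm(a) * norm(b)); }

// InfoNCE loss for one anchor:
// -log( e^{sim(a,p)/t} / (e^{sim(a,p)/t} + sum_k e^{sim(a,k)/t}) )
function infoNCE(anchor, positive, negatives, temperature = 0.1) {
  const pos = Math.exp(cosine(anchor, positive) / temperature);
  const neg = negatives.reduce(
    (s, n) => s + Math.exp(cosine(anchor, n) / temperature), 0);
  return -Math.log(pos / (pos + neg));
}

// Anchor and a renamed-variable variant point roughly the same way;
// an unrelated snippet points elsewhere.
const anchor    = [1.0, 0.1];
const variant   = [0.9, 0.2];  // same functionality, different surface form
const unrelated = [-0.2, 1.0]; // different functionality

const loss = infoNCE(anchor, variant, [unrelated]);
console.log(loss < 0.1); // prints true: positive is already far closer
```

Training on such an objective pushes the encoder to ignore surface form (identifier names, formatting) and retain functionality, which is exactly the "functionality, not form" claim in the summary above.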
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.