Exploring a Handwriting Programming Language for Educational Robots
- URL: http://arxiv.org/abs/2105.04963v1
- Date: Tue, 11 May 2021 12:00:34 GMT
- Title: Exploring a Handwriting Programming Language for Educational Robots
- Authors: Laila El-Hamamsy, Vaios Papaspyros, Taavet Kangur, Laura Mathex,
Christian Giang, Melissa Skweres, Barbara Bruno, Francesco Mondada
- Abstract summary: This study presents the development of a handwriting-based programming language for educational robots.
It allows students to program a robot by drawing symbols with ordinary pens and paper.
The system was evaluated in a preliminary test with eight teachers, developers and educational researchers.
- Score: 1.310461046819527
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Recently, introducing computer science and educational robots in compulsory
education has received increasing attention. However, the use of screens in
classrooms is often met with resistance, especially in primary school. To
address this issue, this study presents the development of a handwriting-based
programming language for educational robots. Aiming to align better with
existing classroom practices, it allows students to program a robot by drawing
symbols with ordinary pens and paper. Regular smartphones are leveraged to
process the hand-drawn instructions using computer vision and machine learning
algorithms, and send the commands to the robot for execution. To align with the
local computer science curriculum, an appropriate playground and scaffolded
learning tasks were designed. The system was evaluated in a preliminary test
with eight teachers, developers and educational researchers. While the
participants pointed out that some technical aspects could be improved, they
also acknowledged the potential of the approach to make computer science
education in primary school more accessible.
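The abstract outlines a three-stage pipeline: recognize hand-drawn symbols on a smartphone using computer vision and machine learning, map them to robot instructions, and send the commands to the robot. A minimal sketch of that shape follows; the symbol set, classifier, and transport are hypothetical stand-ins, not the authors' actual implementation.

```python
# Minimal sketch of the recognition-and-execution pipeline described above.
# The model, symbol set, and robot transport are hypothetical stand-ins,
# not the paper's actual implementation.

SYMBOL_TO_COMMAND = {
    "arrow_up": "move_forward",
    "arrow_left": "turn_left",
    "arrow_right": "turn_right",
    "loop": "repeat",
}

def classify_symbols(image_patches):
    """Placeholder for the CV/ML step: in the real system, a trained
    classifier labels each hand-drawn symbol cropped from the photo."""
    return ["arrow_up", "arrow_left", "arrow_up"]  # dummy predictions

def compile_program(symbol_labels):
    """Map recognized symbols to robot commands, skipping unknown marks."""
    return [SYMBOL_TO_COMMAND[s] for s in symbol_labels if s in SYMBOL_TO_COMMAND]

def send_to_robot(commands):
    """Placeholder transport: the paper's system sends commands from the
    smartphone to the robot; the link-layer details are not specified here."""
    for cmd in commands:
        print(f"-> robot: {cmd}")

if __name__ == "__main__":
    patches = ["patch1", "patch2", "patch3"]  # crops from the photographed sheet
    send_to_robot(compile_program(classify_symbols(patches)))
```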
Related papers
- $π_0$: A Vision-Language-Action Flow Model for General Robot Control [77.32743739202543]
We propose a novel flow matching architecture built on top of a pre-trained vision-language model (VLM) to inherit Internet-scale semantic knowledge.
We evaluate our model in terms of its ability to perform tasks zero-shot after pre-training, to follow language instructions from people, and to acquire new skills via fine-tuning.
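The summary names flow matching as the action-generation mechanism. Below is a toy single training step of conditional flow matching with a linear stand-in for the VLM-conditioned network; the dimensions and model are illustrative assumptions, not the $π_0$ architecture.

```python
# Toy sketch of a conditional flow-matching training step, the generative
# technique this summary names; the linear "policy" is illustrative only.
import numpy as np

rng = np.random.default_rng(0)
action_dim, ctx_dim = 7, 16
W = rng.normal(scale=0.1, size=(action_dim, action_dim + ctx_dim + 1))

def velocity_model(x_t, ctx, t):
    """Linear stand-in for the VLM-conditioned velocity network."""
    feats = np.concatenate([x_t, ctx, [t]])
    return W @ feats

def flow_matching_loss(action, ctx):
    """Regress the model's velocity toward the straight-line target a - z."""
    z = rng.normal(size=action_dim)     # noise sample
    t = rng.uniform()                   # random interpolation time
    x_t = (1.0 - t) * z + t * action    # point on the probability path
    target = action - z                 # constant target velocity
    pred = velocity_model(x_t, ctx, t)
    return float(np.mean((pred - target) ** 2))

print(flow_matching_loss(rng.normal(size=action_dim), rng.normal(size=ctx_dim)))
```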
arXiv Detail & Related papers (2024-10-31T17:22:30Z)
- Handwritten Code Recognition for Pen-and-Paper CS Education [33.53124589437863]
Teaching Computer Science (CS) by having students write programs by hand on paper has key pedagogical advantages.
However, a key obstacle is the current lack of teaching methods and support software for working with and running handwritten programs.
Our approach integrates two innovative methods. The first combines OCR with an indentation recognition module and a language model designed for post-OCR error correction without introducing hallucinations.
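A hedged sketch of the two ingredients named here, under assumed thresholds: quantizing pen x-offsets into indentation levels, and accepting a language-model correction only when it stays close to the raw OCR output, which is one simple way to rule out hallucinated rewrites.

```python
# Sketch of the two ideas in this summary: recover indentation from
# horizontal pen positions, and accept a language-model correction only when
# it stays near-identical to the OCR output. The thresholds and example
# values are assumptions, not the paper's.
import difflib

def indent_levels(line_x_offsets, unit_px=40):
    """Quantize each line's left margin (in pixels) to an indent level."""
    base = min(line_x_offsets)
    return [round((x - base) / unit_px) for x in line_x_offsets]

def accept_correction(ocr_line, lm_line, min_similarity=0.8):
    """Keep the LM's fix only if it is close to what was actually written."""
    ratio = difflib.SequenceMatcher(None, ocr_line, lm_line).ratio()
    return lm_line if ratio >= min_similarity else ocr_line

print(indent_levels([12, 55, 95, 13]))  # -> [0, 1, 2, 0]
print(accept_correction("fro i in range(3):", "for i in range(3):"))
```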
arXiv Detail & Related papers (2024-08-07T21:02:17Z)
- WIP: A Unit Testing Framework for Self-Guided Personalized Online Robotics Learning [3.613641107321095]
This paper focuses on creating a system for unit testing while integrating it into the course workflow.
In line with the framework's personalized, student-centered approach, this method makes it easier for students to revise and debug their programming work.
Updating the course workflow to include unit tests will strengthen the learning environment, make it more interactive, and let students learn to program robots in a self-guided fashion.
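As a concrete illustration of unit tests in a robotics course workflow, here is a small `unittest` case against a hypothetical student planner; the function and its expected behavior are invented course material, not taken from the paper.

```python
# Illustrative unit test in the spirit of this framework: checking a
# student's robot-control function against expected behavior.
import unittest

def plan_square(side=2):
    """Student-written planner: drive a square of the given side length."""
    return [("forward", side), ("turn", 90)] * 4

class TestSquarePlanner(unittest.TestCase):
    def test_returns_eight_steps(self):
        self.assertEqual(len(plan_square()), 8)

    def test_turns_sum_to_full_rotation(self):
        turns = sum(a for cmd, a in plan_square() if cmd == "turn")
        self.assertEqual(turns, 360)

if __name__ == "__main__":
    unittest.main()
```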
arXiv Detail & Related papers (2024-05-18T00:56:46Z)
- From Keyboard to Chatbot: An AI-powered Integration Platform with Large-Language Models for Teaching Computational Thinking for Young Children [22.933382649048113]
We present a novel methodology with an AI-powered integration platform to effectively teach computational thinking to young children.
Young children can describe their desired task in natural language, while the system can respond with an easy-to-understand program.
A tangible robot can immediately execute the decomposed program and demonstrate the program's outcomes to young children.
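To make the described loop concrete, the toy sketch below decomposes a child's natural-language request into a short robot program. The paper uses a large language model for this step; the keyword rules here are a deliberately simple stand-in.

```python
# Toy illustration of the described loop: a child's request in natural
# language is decomposed into an easy-to-read program for a tangible robot.
# Keyword matching stands in for the paper's LLM-based decomposition.
RULES = [
    ("forward", ["forward", "ahead", "go"]),
    ("turn_left", ["left"]),
    ("turn_right", ["right"]),
    ("beep", ["sound", "beep", "noise"]),
]

def decompose(request):
    words = request.lower().split()
    program = [cmd for cmd, keys in RULES if any(k in words for k in keys)]
    return program or ["wait"]

print(decompose("Please go forward and then turn left and make a beep"))
# -> ['forward', 'turn_left', 'beep']
```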
arXiv Detail & Related papers (2024-05-01T04:29:21Z)
- Open-World Object Manipulation using Pre-trained Vision-Language Models [72.87306011500084]
For robots to follow instructions from people, they must be able to connect the rich semantic information in human vocabulary to their sensory observations and actions.
We develop a simple approach that leverages a pre-trained vision-language model to extract object-identifying information.
In a variety of experiments on a real mobile manipulator, we find that MOO generalizes zero-shot to a wide range of novel object categories and environments.
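The mechanism described, conditioning a policy on object-identifying information extracted by a pre-trained VLM rather than on raw language, can be sketched with stubs; the detector, policy, and coordinates below are assumptions, not the MOO implementation.

```python
# Sketch of the idea named here: a pre-trained vision-language model supplies
# object-identifying information (e.g., a location for "pink stuffed whale"),
# which conditions a policy that never sees the raw text. The detector and
# policy are stubs for illustration.

def vlm_locate(image, object_phrase):
    """Stand-in for an open-vocabulary detector; returns (x, y) in the image."""
    return (0.62, 0.41)  # dummy normalized coordinates

def policy(image, target_xy):
    """Stand-in policy conditioned on the detected location, not on language."""
    x, y = target_xy
    return {"dx": x - 0.5, "dy": y - 0.5, "gripper": "open"}

image = "frame_0"  # placeholder observation
print(policy(image, vlm_locate(image, "pink stuffed whale")))
```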
arXiv Detail & Related papers (2023-03-02T01:55:10Z)
- Language-Driven Representation Learning for Robotics [115.93273609767145]
Recent work in visual representation learning for robotics demonstrates the viability of learning from large video datasets of humans performing everyday tasks.
We introduce a framework for language-driven representation learning from human videos and captions.
We find that Voltron's language-driven learning outperforms the prior state-of-the-art, especially on targeted problems requiring higher-level control.
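As one common shape for language-driven representation learning, the sketch below aligns paired video and caption embeddings with a symmetric InfoNCE loss; note this is a generic recipe, not Voltron's actual objective or encoders.

```python
# Minimal sketch of language-driven representation learning: align clip
# embeddings with caption embeddings via a contrastive loss. This is a
# generic recipe, not Voltron's exact objective.
import numpy as np

def info_nce(video_emb, text_emb, temperature=0.1):
    """Contrastive loss over a batch of paired video/text embeddings."""
    v = video_emb / np.linalg.norm(video_emb, axis=1, keepdims=True)
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    logits = v @ t.T / temperature
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))  # matched pairs sit on the diagonal

rng = np.random.default_rng(0)
print(info_nce(rng.normal(size=(8, 32)), rng.normal(size=(8, 32))))
```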
arXiv Detail & Related papers (2023-02-24T17:29:31Z)
- Printable Flexible Robots for Remote Learning [0.0]
Students design flexible robotic components using CAD software, upload their designs to a remote 3D printing station, monitor the print with a web camera, and inspect the components with lab staff.
At the end of the course, students will have iterated through several designs and created fluidically-driven soft robots.
arXiv Detail & Related papers (2022-07-15T19:51:54Z)
- Learning Language-Conditioned Robot Behavior from Offline Data and Crowd-Sourced Annotation [80.29069988090912]
We study the problem of learning a range of vision-based manipulation tasks from a large offline dataset of robot interaction.
We propose to leverage offline robot datasets with crowd-sourced natural language labels.
We find that our approach outperforms both goal-image specifications and language-conditioned imitation techniques by more than 25%.
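The setting can be sketched as language-conditioned behavioral cloning over an offline dataset whose trajectories carry crowd-sourced instructions; the stub encoders and linear policy below are assumptions for illustration.

```python
# Sketch of language-conditioned behavioral cloning on an offline dataset:
# each trajectory carries a crowd-sourced instruction, and the policy
# regresses logged actions given (observation, instruction). The encoders
# below are stubs standing in for learned networks.
import numpy as np

rng = np.random.default_rng(0)

def encode(obs_or_text, dim=8):
    """Stub encoder: hash-seeded random features instead of a trained network."""
    return np.random.default_rng(abs(hash(obs_or_text)) % 2**32).normal(size=dim)

def bc_loss(policy_W, batch):
    """Mean-squared error between predicted and logged actions."""
    errs = []
    for obs, instruction, action in batch:
        feats = np.concatenate([encode(obs), encode(instruction)])
        errs.append(np.mean((policy_W @ feats - action) ** 2))
    return float(np.mean(errs))

batch = [("img_01", "open the drawer", rng.normal(size=4))]
print(bc_loss(rng.normal(size=(4, 16)), batch))
```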
arXiv Detail & Related papers (2021-09-02T17:42:13Z)
- Actionable Models: Unsupervised Offline Reinforcement Learning of Robotic Skills [93.12417203541948]
We propose the objective of learning a functional understanding of the environment by learning to reach any goal state in a given dataset.
We find that our method can operate on high-dimensional camera images and learn a variety of skills on real robots that generalize to previously unseen scenes and objects.
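The core idea, treating any state reached later in a logged trajectory as a goal the earlier actions "achieved", is the classic hindsight-relabeling trick; a minimal sketch follows, with sampling and rewards simplified relative to the paper.

```python
# Sketch of the core trick behind learning to reach any goal state in a
# dataset: relabel states reached later in a trajectory as goals, so every
# transition becomes a successful example of goal-reaching. Sampling and
# reward details are simplified relative to the paper.
import random

def relabel(trajectory):
    """Yield (state, action, goal) tuples using future states as goals."""
    examples = []
    for i, (state, action) in enumerate(trajectory[:-1]):
        future_state, _ = random.choice(trajectory[i + 1:])
        examples.append((state, action, future_state))
    return examples

traj = [("s0", "a0"), ("s1", "a1"), ("s2", "a2"), ("s3", None)]
for ex in relabel(traj):
    print(ex)
```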
arXiv Detail & Related papers (2021-04-15T20:10:11Z)
- An Experience of Introducing Primary School Children to Programming using Ozobots (Practical Report) [10.213226970992666]
A recent trend is to introduce basic programming concepts very early, at the primary school level.
Schools and teachers are often neither equipped nor trained appropriately, and the best way to move from initial "unplugged" activities to creating programs on a computer is still a matter of open debate.
We describe our experience of a small INTERREG-project aiming at supporting local primary schools in introducing children to programming concepts using Ozobot robots.
arXiv Detail & Related papers (2020-08-28T07:36:07Z)
- OpenBot: Turning Smartphones into Robots [95.94432031144716]
Current robots are either expensive or make significant compromises on sensory richness, computational power, and communication capabilities.
We propose to leverage smartphones to equip robots with extensive sensor suites, powerful computational abilities, state-of-the-art communication channels, and access to a thriving software ecosystem.
We design a small electric vehicle that costs $50 and serves as a robot body for standard Android smartphones.
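The division of labor described, with the smartphone as sensors and brain and the cheap vehicle as body, implies a thin command link between the two. The sketch below sends differential-drive commands over a serial port; the message format and port name are assumptions, not necessarily OpenBot's firmware protocol, and it requires the pyserial package.

```python
# Illustrative sketch of the phone-as-brain split this summary describes:
# high-level code computes wheel commands and ships them to a low-cost
# vehicle body over a serial link. Message format and port are assumptions.
import serial  # pip install pyserial

def drive(port, left_pwm, right_pwm):
    """Send one differential-drive command as a plain text line."""
    with serial.Serial(port, baudrate=115200, timeout=1) as link:
        link.write(f"c{left_pwm},{right_pwm}\n".encode("ascii"))

# Example (uncomment with real hardware attached):
# drive("/dev/ttyUSB0", 128, 128)  # drive straight at half throttle
```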
arXiv Detail & Related papers (2020-08-24T18:04:50Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the content (including all information) and is not responsible for any consequences arising from its use.