Synthesizing Tasks for Block-based Programming
- URL: http://arxiv.org/abs/2006.16913v3
- Date: Thu, 5 Nov 2020 00:00:55 GMT
- Title: Synthesizing Tasks for Block-based Programming
- Authors: Umair Z. Ahmed, Maria Christakis, Aleksandr Efremov, Nigel Fernandez,
Ahana Ghosh, Abhik Roychoudhury, Adish Singla
- Abstract summary: We propose a novel methodology to automatically generate a set $\{(\rm T^{out}, \rm C^{out})\}$ of new tasks along with solution codes.
Our algorithm operates by first mutating code $\rm C^{in}$ to obtain a set of codes $\{\rm C^{out}\}$.
- Score: 72.45475843387183
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Block-based visual programming environments play a critical role in
introducing computing concepts to K-12 students. One of the key pedagogical
challenges in these environments is in designing new practice tasks for a
student that match a desired level of difficulty and exercise specific
programming concepts. In this paper, we formalize the problem of synthesizing
visual programming tasks. In particular, given a reference visual task $\rm
T^{in}$ and its solution code $\rm C^{in}$, we propose a novel methodology to
automatically generate a set $\{(\rm T^{out}, \rm C^{out})\}$ of new tasks
along with solution codes such that tasks $\rm T^{in}$ and $\rm T^{out}$ are
conceptually similar but visually dissimilar. Our methodology is based on the
realization that the mapping from the space of visual tasks to their solution
codes is highly discontinuous; hence, directly mutating reference task $\rm
T^{in}$ to generate new tasks is futile. Our task synthesis algorithm operates
by first mutating code $\rm C^{in}$ to obtain a set of codes $\{\rm C^{out}\}$.
Then, the algorithm performs symbolic execution over a code $\rm C^{out}$ to
obtain a visual task $\rm T^{out}$; this step uses the Monte Carlo Tree Search
(MCTS) procedure to guide the search in the symbolic tree. We demonstrate the
effectiveness of our algorithm through an extensive empirical evaluation and
user study on reference tasks taken from the \emph{Hour of Code: Classic Maze}
challenge by \emph{Code.org} and the \emph{Intro to Programming with Karel}
course by \emph{CodeHS.com}.
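To make the two-stage pipeline concrete, below is a minimal Python sketch under heavy simplifying assumptions: codes are flat token lists over a three-action Karel-style vocabulary (no loops or conditionals, so "symbolic execution" degenerates to concrete tracing plus open wall/free decisions for the untouched cells), and the reward handed to the MCTS is a toy dissimilarity count rather than the paper's task-quality and visual-dissimilarity metrics. All names here (`mutate`, `trace`, `mcts_assign`, `ref_walls`) are illustrative stand-ins, not from the paper.

```python
import math
import random

# --- Stage 1: mutate the reference solution code C_in into candidates {C_out}.
# A "code" here is just a token list over a toy action set; the actual system
# mutates an AST under grammar constraints (simplifying assumption).
ACTIONS = ["move", "turnLeft", "turnRight"]

def mutate(code, n_mutants=5, rng=random):
    mutants = set()
    while len(mutants) < n_mutants:
        c = list(code)
        op = rng.choice(["replace", "insert", "delete"])
        i = rng.randrange(len(c))
        if op == "replace":
            c[i] = rng.choice(ACTIONS)
        elif op == "insert":
            c.insert(i, rng.choice(ACTIONS))
        elif len(c) > 1:
            del c[i]
        if c != list(code):
            mutants.add(tuple(c))
    return [list(m) for m in mutants]

# --- Stage 2: execute a mutant to find the cells its avatar must traverse;
# every remaining cell is an open decision (wall or free) left to the search.
def trace(code, start=(0, 0), heading=(1, 0)):
    (x, y), (dx, dy) = start, heading
    visited = {(x, y)}
    for tok in code:
        if tok == "move":
            x, y = x + dx, y + dy
            visited.add((x, y))
        elif tok == "turnLeft":
            dx, dy = -dy, dx
        else:  # turnRight
            dx, dy = dy, -dx
    return visited

# --- A tiny UCT-style MCTS over the undecided cells: each tree level fixes one
# cell to free (0) or wall (1); rollouts fill the rest at random, and the
# reward is the task-scoring function passed in as `score`.
def mcts_assign(undecided, score, iters=500, c=1.4, rng=random):
    stats = {}  # decision prefix -> [visits, total reward]
    for _ in range(iters):
        prefix, path = (), [()]
        while len(prefix) < len(undecided):
            kids = [prefix + (b,) for b in (0, 1)]
            fresh = [k for k in kids if k not in stats]
            if fresh:  # expansion: add one unvisited child, then roll out
                prefix = rng.choice(fresh)
                stats[prefix] = [0, 0.0]
                path.append(prefix)
                break
            total = sum(stats[k][0] for k in kids)
            prefix = max(kids, key=lambda k: stats[k][1] / stats[k][0]
                         + c * math.sqrt(math.log(total) / stats[k][0]))
            path.append(prefix)
        tail = tuple(rng.randint(0, 1)
                     for _ in range(len(undecided) - len(prefix)))
        r = score({p for p, b in zip(undecided, prefix + tail) if b})
        for node in path:  # backpropagation
            stats.setdefault(node, [0, 0.0])
            stats[node][0] += 1
            stats[node][1] += r
    best = ()
    while len(best) < len(undecided):  # read off the most-visited branch
        kids = [best + (b,) for b in (0, 1) if best + (b,) in stats]
        best = max(kids, key=lambda k: stats[k][0]) if kids else best + (0,)
    return {p for p, b in zip(undecided, best) if b}

# Usage: synthesize task grids that differ visually from a reference T_in.
grid = [(x, y) for x in range(4) for y in range(4)]
ref_walls = {(2, 0), (3, 3)}  # wall layout of the reference task (assumed)
c_in = ["move", "move", "turnLeft", "move"]
for c_out in mutate(c_in):
    free = trace(c_out)
    if not free <= set(grid):
        continue  # mutant walks off the grid: reject it
    walls = mcts_assign([p for p in grid if p not in free],
                        lambda w: len(w ^ ref_walls) / len(grid))
    print(c_out, "->", sorted(walls))
```

The sketch mirrors the abstract's central argument: rather than mutating the task grid directly (futile, because the task-to-code mapping is highly discontinuous), the grid is grown around the execution of a mutated code, with the tree search steering the remaining free choices toward high-scoring tasks.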
Related papers
- Metalearning with Very Few Samples Per Task [19.78398372660794]
We consider a binary classification setting where tasks are related by a shared representation.
Here, the amount of data is measured in terms of the number of tasks $t$ that we need to see and the number of samples $n$ per task.
Our work also yields a characterization of distribution-free multitask learning and reductions between meta and multitask learning.
arXiv Detail & Related papers (2023-12-21T16:06:44Z) - Scaling Distributed Multi-task Reinforcement Learning with Experience
Sharing [38.883540444516605]
DARPA launched the ShELL program, which aims to explore how experience sharing can benefit distributed lifelong learning agents.
We conduct both theoretical and empirical research on distributed multi-task reinforcement learning (RL), where a group of $N$ agents collaboratively solves $M$ tasks.
We propose an algorithm called DistMT-LSVI, where each agent independently learns $\epsilon$-optimal policies for all $M$ tasks.
arXiv Detail & Related papers (2023-07-11T22:58:53Z) - Simplifying and Understanding State Space Models with Diagonal Linear
RNNs [56.33053691749856]
This work disposes of the discretization step, and proposes a model based on vanilla Diagonal Linear RNNs.
We empirically show that, despite being conceptually much simpler, $\mathrm{DLR}$ is as performant as previously-proposed SSMs.
We also characterize the expressivity of SSMs and attention-based models via a suite of $13$ synthetic sequence-to-sequence tasks.
arXiv Detail & Related papers (2022-12-01T18:53:06Z) - Multi-Task Imitation Learning for Linear Dynamical Systems [50.124394757116605]
We study representation learning for efficient imitation learning over linear systems.
We find that the imitation gap over trajectories generated by the learned target policy is bounded by $\tilde{O}\left(\frac{k\, n_x H}{N_{\mathrm{shared}}} + \frac{k\, n_u}{N_{\mathrm{target}}}\right)$.
arXiv Detail & Related papers (2022-12-01T00:14:35Z) - A Spectral Approach to Item Response Theory [6.5268245109828005]
We propose a \emph{new} item estimation algorithm for the Rasch model.
The core of our algorithm is the computation of the stationary distribution of a Markov chain defined on an item-item graph.
Experiments on synthetic and real-life datasets show that our algorithm is scalable, accurate, and competitive with the most commonly used methods in the literature.
arXiv Detail & Related papers (2022-10-09T18:57:08Z) - Reward-Mixing MDPs with a Few Latent Contexts are Learnable [75.17357040707347]
We consider episodic reinforcement learning in reward-mixing Markov decision processes (RMMDPs).
Our goal is to learn a near-optimal policy that nearly maximizes the $H$ time-step cumulative rewards in such a model.
arXiv Detail & Related papers (2022-10-05T22:52:00Z) - On the Power of Multitask Representation Learning in Linear MDP [61.58929164172968]
This paper presents analyses for the statistical benefit of multitask representation learning in linear Markov Decision Processes (MDPs).
We first discover a \emph{Least-Activated-Feature-Abundance} (LAFA) criterion, denoted as $\kappa$, with which we prove that a straightforward least-square algorithm learns a policy which is $\tilde{O}\big(H^2\sqrt{\frac{\kappa\,\mathcal{C}(\Phi)^2 \kappa d}{NT}+\frac{\kappa d}{n}}\big)$ suboptimal.
arXiv Detail & Related papers (2021-06-15T11:21:06Z) - On the Theory of Transfer Learning: The Importance of Task Diversity [114.656572506859]
We consider $t+1$ tasks parameterized by functions of the form $f_j \circ h$ in a general function class $\mathcal{F} \circ \mathcal{H}$.
We show that for diverse training tasks the sample complexity needed to learn the shared representation across the first $t$ training tasks scales as $C(\mathcal{H}) + t\, C(\mathcal{F})$.
arXiv Detail & Related papers (2020-06-20T20:33:59Z)