Neural Task Synthesis for Visual Programming
- URL: http://arxiv.org/abs/2305.18342v3
- Date: Sun, 14 Jan 2024 11:52:11 GMT
- Title: Neural Task Synthesis for Visual Programming
- Authors: Victor-Alexandru Pădurean, Georgios Tzannetos, Adish Singla
- Abstract summary: We seek to design neural models that can automatically generate programming tasks for a given specification in the context of visual programming domains.
We propose a novel neuro-symbolic technique, NeurTaskSyn, that can synthesize programming tasks for a specification given in the form of desired programming concepts.
- Score: 22.918489170257704
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract:   Generative neural models hold great promise in enhancing programming
education by synthesizing new content. We seek to design neural models that can
automatically generate programming tasks for a given specification in the
context of visual programming domains. Despite the recent successes of large
generative models like GPT-4, our initial results show that these models are
ineffective in synthesizing visual programming tasks and struggle with logical
and spatial reasoning. We propose a novel neuro-symbolic technique,
NeurTaskSyn, that can synthesize programming tasks for a specification given in
the form of desired programming concepts exercised by its solution code and
constraints on the visual task. NeurTaskSyn has two components: the first
component is trained via an imitation learning procedure to generate possible
solution codes, and the second component is trained via a reinforcement learning
procedure to guide an underlying symbolic execution engine that generates
visual tasks for these codes. We demonstrate the effectiveness of NeurTaskSyn
through an extensive empirical evaluation and a qualitative study on reference
tasks taken from the Hour of Code: Classic Maze challenge by Code.org and
the Intro to Programming with Karel course by CodeHS.com.
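The two-component pipeline described in the abstract can be sketched roughly as follows. This is a hypothetical stand-in, not the authors' implementation: in NeurTaskSyn both stages are learned neural models, whereas here both generators are trivial hand-written placeholders that only illustrate the data flow from specification to solution code to visual task.

```python
import random

def generate_solution_code(spec, rng):
    """Stage 1 stand-in: emit a code sketch exercising the requested concepts
    (NeurTaskSyn trains this component via imitation learning)."""
    body = []
    if "loop" in spec["concepts"]:
        body.append("repeat(4): move()")
    if "conditional" in spec["concepts"]:
        body.append("if pathAhead: move() else: turnLeft()")
    return body or ["move()"]

def generate_visual_task(code, spec, rng):
    """Stage 2 stand-in: emit a grid respecting the task constraints
    (NeurTaskSyn trains an RL policy to guide a symbolic execution engine)."""
    size = spec["max_grid"]
    grid = [["." for _ in range(size)] for _ in range(size)]
    grid[rng.randrange(size)][rng.randrange(size)] = "A"  # avatar start cell
    return {"grid": grid, "solution": code}

rng = random.Random(0)
spec = {"concepts": ["loop", "conditional"], "max_grid": 4}
code = generate_solution_code(spec, rng)
task = generate_visual_task(code, spec, rng)
```

The key structural point the sketch preserves is that the task is generated *for* a concrete solution code, so the output pair is guaranteed to be solvable by construction.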
 
      
        Related papers
        - Learning to Solve Abstract Reasoning Problems with Neurosymbolic Program   Synthesis and Task Generation [0.8701566919381223]
 We present TransCoder, a method for solving abstract problems based on neural program synthesis.
We conduct a comprehensive analysis of decisions made by the generative module of the proposed architecture.
 arXiv  Detail & Related papers  (2024-10-06T13:42:53Z)
- NoviCode: Generating Programs from Natural Language Utterances by   Novices [59.71218039095155]
 We present NoviCode, a novel NL Programming task which takes as input an API and a natural language description by a novice non-programmer.
We show that NoviCode is indeed a challenging task in the code synthesis domain, and that generating complex code from non-technical instructions goes beyond the current Text-to-Code paradigm.
 arXiv  Detail & Related papers  (2024-07-15T11:26:03Z)
- IPSynth: Interprocedural Program Synthesis for Software Security   Implementation [3.1119394814248253]
 We introduce IPSynth, a novel interprocedural program synthesis approach that automatically learns the specification of the tactic.
Our results show that our approach can accurately locate corresponding spots in the program, synthesize the needed code snippets, add them to the program, and outperform ChatGPT in interprocedural tactic synthesis tasks.
 arXiv  Detail & Related papers  (2024-03-16T07:12:24Z)
- The Role of Foundation Models in Neuro-Symbolic Learning and Reasoning [54.56905063752427]
 Neuro-Symbolic AI (NeSy) holds promise to ensure the safe deployment of AI systems.
Existing pipelines that train the neural and symbolic components sequentially require extensive labelling.
A new architecture, NeSyGPT, fine-tunes a vision-language foundation model to extract symbolic features from raw data.
 arXiv  Detail & Related papers  (2024-02-02T20:33:14Z)
- Synthesizing a Progression of Subtasks for Block-Based Visual Programming Tasks [21.33708484899808]
 We propose a novel synthesis algorithm that generates a progression of subtasks that are high-quality and well-spaced in terms of their complexity.
We show the utility of our synthesis algorithm in improving the efficacy of AI agents for solving tasks in the Karel programming environment.
 arXiv  Detail & Related papers  (2023-05-27T16:24:36Z)
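The idea of a well-spaced progression can be illustrated with a toy sketch. The function, names, and complexity scores below are invented for illustration; the paper's actual synthesis algorithm is substantially more involved than this greedy filter.

```python
def progression(tasks, step):
    """Greedily keep subtasks whose complexity score grows by at least
    `step` over the previously kept one, yielding a well-spaced sequence."""
    chosen, last = [], float("-inf")
    for name, score in sorted(tasks, key=lambda t: t[1]):
        if score - last >= step:
            chosen.append(name)
            last = score
    return chosen

# Hypothetical subtasks with hand-assigned complexity scores.
order = progression([("full", 9), ("warmup", 1), ("mid", 5), ("near", 5.4)], 2)
```

Note how the near-duplicate subtask (`near`, barely harder than `mid`) is dropped, which is the "well-spaced" property in miniature.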
- CodeRL: Mastering Code Generation through Pretrained Models and Deep Reinforcement Learning [92.36705236706678]
 "CodeRL" is a new framework for program synthesis tasks through pretrained LMs and deep reinforcement learning.
During inference, we introduce a new generation procedure with a critical sampling strategy.
For the model backbones, we extended the encoder-decoder architecture of CodeT5 with enhanced learning objectives.
 arXiv  Detail & Related papers  (2022-07-05T02:42:15Z)
- Learning Neuro-Symbolic Skills for Bilevel Planning [63.388694268198655]
 Decision-making is challenging in robotics environments with continuous object-centric states, continuous actions, long horizons, and sparse feedback.
Hierarchical approaches, such as task and motion planning (TAMP), address these challenges by decomposing decision-making into two or more levels of abstraction.
Our main contribution is a method for learning parameterized policies in combination with operators and samplers.
 arXiv  Detail & Related papers  (2022-06-21T19:01:19Z)
- From Solution Synthesis to Student Attempt Synthesis for Block-Based Visual Programming Tasks [20.64766977405438]
 We introduce a novel benchmark, StudentSyn, centered around the following challenge.
For a given student, synthesize the student's attempt on a new target task after observing the student's attempt on a fixed reference task.
This challenge is akin to that of program synthesis; however, instead of a solution (i.e., a program an expert would write), the goal here is to synthesize a student attempt (i.e., a program that a given student would write).
 arXiv  Detail & Related papers  (2022-05-03T01:32:47Z)
- Toward Neural-Network-Guided Program Synthesis and Verification [26.706421573322952]
 We propose a novel framework of program and invariant synthesis called neural network-guided synthesis.
We first show that, by designing and training neural networks, we can extract logical formulas over integers from the weights and biases of the trained neural networks.
Based on the idea, we have implemented a tool to synthesize formulas from positive/negative examples and implication constraints.
 arXiv  Detail & Related papers  (2021-03-17T03:09:05Z)
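The step of extracting an integer formula from trained weights and biases can be illustrated in miniature. Everything below is a hypothetical simplification: the weights are invented, and a single linear neuron is rounded to an integer inequality, whereas the cited framework handles full networks and implication constraints.

```python
def neuron_to_formula(weights, bias, names):
    """Render the predicate (w . x + b >= 0) learned by one neuron as a
    linear inequality over integers, rounding each learned parameter."""
    terms = [f"{round(w)}*{n}" for w, n in zip(weights, names)]
    return " + ".join(terms) + f" + {round(bias)} >= 0"

# Invented weights close to the integer predicate x - y + 1 >= 0.
formula = neuron_to_formula([1.02, -0.97], 0.51, ["x", "y"])
```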
- Synthesize, Execute and Debug: Learning to Repair for Neural Program Synthesis [81.54148730967394]
 We propose SED, a neural program generation framework that incorporates synthesis, execution, and debug stages.
SED first produces initial programs using the neural program synthesizer component, then utilizes a neural program debugger to iteratively repair the generated programs.
On Karel, a challenging input-output program synthesis benchmark, SED reduces the error rate of the neural program synthesizer itself by a considerable margin, and outperforms the standard beam search for decoding.
 arXiv  Detail & Related papers  (2020-07-16T04:15:47Z)
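The synthesize-execute-debug loop that SED describes can be sketched as follows. The synthesizer and debugger here are trivial hand-written stand-ins (SED uses neural models for both); only the control flow of the three stages is faithful.

```python
def synthesize(examples):
    """Stand-in synthesizer: propose the identity program as a first guess."""
    return lambda x: x

def execute(program, examples):
    """Run the candidate program and collect the failing input/output pairs."""
    return [(i, o) for i, o in examples if program(i) != o]

def debug(program, failures):
    """Stand-in debugger: patch the program on the inputs it got wrong."""
    fixes = dict(failures)
    return lambda x, p=program: fixes.get(x, p(x))

def sed(examples, max_rounds=3):
    program = synthesize(examples)
    for _ in range(max_rounds):
        failures = execute(program, examples)
        if not failures:
            break
        program = debug(program, failures)
    return program

prog = sed([(1, 2), (2, 4)])  # toy spec: double the input
```

The loop terminates early once execution reports no failures, mirroring how SED iterates repair only while the generated program still misses test cases.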
- Learning to learn generative programs with Memoised Wake-Sleep [52.439550543743536]
 We study a class of neuro-symbolic generative models in which neural networks are used both for inference and as priors over symbolic, data-generating programs.
We propose the Memoised Wake-Sleep (MWS) algorithm, which extends Wake-Sleep by explicitly storing and reusing the best programs discovered by the inference network throughout training.
 arXiv  Detail & Related papers  (2020-07-06T23:51:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.