Quantifying the Impact on Software Complexity of Composable Inductive
Programming using Zoea
- URL: http://arxiv.org/abs/2005.08211v1
- Date: Sun, 17 May 2020 10:44:39 GMT
- Title: Quantifying the Impact on Software Complexity of Composable Inductive
Programming using Zoea
- Authors: Edward McDaid and Sarah McDaid
- Abstract summary: Composable inductive programming as implemented in the Zoea programming language is a simple declarative approach to software development.
This paper presents the results of a quantitative comparison of the software complexity of equivalent code implemented in Zoea and also in a conventional programming language.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Composable inductive programming as implemented in the Zoea programming
language is a simple declarative approach to software development. At the
language level it is evident that Zoea is significantly simpler than all
mainstream languages. However, until now we have only had anecdotal evidence
that software produced with Zoea is also simpler than equivalent software
produced with conventional languages. This paper presents the results of a
quantitative comparison of the software complexity of equivalent code
implemented in Zoea and also in a conventional programming language. The study
uses a varied set of programming tasks from a popular programming language
chrestomathy. Results are presented for relative program complexity using two
established metrics and also for relative program size. It was found that Zoea
programs are approximately 50% the complexity of equivalent programs in a
conventional language and on average equal in size. The results suggest that
current programming languages (as opposed to software requirements) are the
largest contributor to software complexity and that significant complexity
could be avoided through an inductive programming approach.
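The abstract does not name the two established metrics, but the kind of per-task comparison it describes can be sketched as follows. This is only a minimal illustration, assuming cyclomatic complexity as one plausible metric and non-blank source lines as the size measure; the Zoea-side figures are hypothetical placeholders, not values from the paper.
```python
# Minimal sketch of a per-task relative complexity/size comparison.
# Assumes cyclomatic complexity as an illustrative metric (the abstract does
# not name the metrics used); the Zoea figures below are placeholders only.
import ast

BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                ast.BoolOp, ast.IfExp, ast.comprehension)

def cyclomatic_complexity(source: str) -> int:
    """Approximate McCabe complexity: 1 + number of branching constructs."""
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, BRANCH_NODES) for node in ast.walk(tree))

def non_blank_lines(source: str) -> int:
    """Program size as the count of non-blank source lines."""
    return sum(1 for line in source.splitlines() if line.strip())

# One Rosetta-style task implemented in a conventional language (Python here).
conventional_src = '''
def fizzbuzz(n):
    out = []
    for i in range(1, n + 1):
        if i % 15 == 0:
            out.append("FizzBuzz")
        elif i % 3 == 0:
            out.append("Fizz")
        elif i % 5 == 0:
            out.append("Buzz")
        else:
            out.append(str(i))
    return out
'''

conv_complexity = cyclomatic_complexity(conventional_src)
conv_size = non_blank_lines(conventional_src)

# Hypothetical figures for the equivalent Zoea program (placeholders only).
zoea_complexity, zoea_size = 3, 13

print(f"relative complexity: {zoea_complexity / conv_complexity:.2f}")
print(f"relative size:       {zoea_size / conv_size:.2f}")
```
Aggregating such per-task ratios over a varied set of chrestomathy tasks is what would yield overall relative complexity and size figures of the kind reported in the paper.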
Related papers
- CodeComplex: A Time-Complexity Dataset for Bilingual Source Codes [6.169110187130671]
We introduce CodeComplex, a novel source code dataset where each code sample is manually annotated with its corresponding worst-case time complexity.
To the best of our knowledge, CodeComplex stands as the most extensive code dataset tailored for predicting complexity.
We present the outcomes of our experiments employing various baseline models, leveraging state-of-the-art neural models in code comprehension.
arXiv Detail & Related papers (2024-01-16T06:54:44Z) - Design of Chain-of-Thought in Math Problem Solving [8.582686316167973]
Chain-of-Thought (CoT) plays a crucial role in reasoning for math problem solving.
We compare conventional natural language CoT with various program CoTs, including the self-describing program, the comment-describing program, and the non-describing program.
We find that program CoTs often have superior effectiveness in math problem solving.
arXiv Detail & Related papers (2023-09-20T04:17:28Z) - When Do Program-of-Thoughts Work for Reasoning? [51.2699797837818]
We propose the complexity-impacted reasoning score (CIRS) to measure the correlation between code and reasoning abilities.
Specifically, we use the abstract syntax tree to encode the structural information and calculate logical complexity (a rough illustrative sketch of this kind of AST-based scoring appears after this list).
Code will be integrated into the EasyInstruct framework at https://github.com/zjunlp/EasyInstruct.
arXiv Detail & Related papers (2023-08-29T17:22:39Z) - Understanding Programs by Exploiting (Fuzzing) Test Cases [26.8259045248779]
We propose to incorporate the relationship between inputs and possible outputs/behaviors into learning, for achieving a deeper semantic understanding of programs.
To obtain inputs that are representative enough to trigger the execution of most of the code, we resort to fuzz testing and propose fuzz tuning.
The effectiveness of the proposed method is verified on two program understanding tasks, code clone detection and code classification, where it outperforms the current state of the art by large margins.
arXiv Detail & Related papers (2023-05-23T01:51:46Z) - LEVER: Learning to Verify Language-to-Code Generation with Execution [64.36459105535]
We propose LEVER, a simple approach to improve language-to-code generation by learning to verify the generated programs with their execution results.
Specifically, we train verifiers to determine whether a program sampled from the LLMs is correct or not based on the natural language input, the program itself and its execution results.
LEVER consistently improves over the base code LLMs (4.6% to 10.9% with code-davinci) and achieves new state-of-the-art results on all of them.
arXiv Detail & Related papers (2023-02-16T18:23:22Z) - A Divide-Align-Conquer Strategy for Program Synthesis [8.595181704811889]
We show that compositional segmentation can be applied in the programming by examples setting to divide the search for large programs across multiple smaller program synthesis problems.
A structural alignment of the constituent parts in the input and output leads to pairwise correspondences used to guide the program search.
arXiv Detail & Related papers (2023-01-08T19:10:55Z) - Natural Language to Code Translation with Execution [82.52142893010563]
We propose execution result-based minimum Bayes risk decoding for program selection.
We show that it improves the few-shot performance of pretrained code models on natural-language-to-code tasks.
arXiv Detail & Related papers (2022-04-25T06:06:08Z) - Competition-Level Code Generation with AlphaCode [74.87216298566942]
We introduce AlphaCode, a system for code generation that can create novel solutions to problems that require deeper reasoning.
In simulated evaluations on recent programming competitions on the Codeforces platform, AlphaCode achieved an average ranking in the top 54.3%.
arXiv Detail & Related papers (2022-02-08T23:16:31Z) - Searching for More Efficient Dynamic Programs [61.79535031840558]
We describe a set of program transformations, a simple metric for assessing the efficiency of a transformed program, and a search procedure to improve this metric.
We show that in practice, automated search can find substantial improvements to the initial program.
arXiv Detail & Related papers (2021-09-14T20:52:55Z) - Leveraging Language to Learn Program Abstractions and Search Heuristics [66.28391181268645]
We introduce LAPS (Language for Abstraction and Program Search), a technique for using natural language annotations to guide joint learning of libraries and neurally-guided search models for synthesis.
When integrated into a state-of-the-art library learning system (DreamCoder), LAPS produces higher-quality libraries and improves search efficiency and generalization.
arXiv Detail & Related papers (2021-06-18T15:08:47Z)
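The CIRS entry above mentions encoding structural information from the abstract syntax tree to calculate logical complexity. The exact CIRS formula is not reproduced here, so the following Python sketch is only a rough, assumed proxy that weights control-flow constructs by their nesting depth.
```python
# Rough illustrative proxy for an AST-based structural/logical complexity
# score, in the spirit of the CIRS entry above. The actual CIRS scoring is
# not reproduced here; this simply weights control-flow nodes by depth.
import ast

LOGIC_NODES = (ast.If, ast.For, ast.While, ast.Try, ast.BoolOp)

def structural_complexity(source: str) -> int:
    """Weight each control-flow/logic node by its (1-based) nesting depth."""
    tree = ast.parse(source)

    def score(node: ast.AST, depth: int = 0) -> int:
        is_logic = isinstance(node, LOGIC_NODES)
        own = depth + 1 if is_logic else 0
        child_depth = depth + 1 if is_logic else depth
        return own + sum(score(child, child_depth)
                         for child in ast.iter_child_nodes(node))

    return score(tree)

# Flat code scores 0; a conditional nested inside a loop scores higher.
print(structural_complexity("print(sum(range(10)))"))
print(structural_complexity(
    "total = 0\n"
    "for i in range(10):\n"
    "    if i % 2 == 0:\n"
    "        total += i\n"
))
```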