CodeComplex: Dataset for Worst-Case Time Complexity Prediction
- URL: http://arxiv.org/abs/2401.08719v2
- Date: Tue, 24 Dec 2024 08:24:05 GMT
- Title: CodeComplex: Dataset for Worst-Case Time Complexity Prediction
- Authors: Seung-Yeop Baik, Joonghyuk Hahn, Jungin Kim, Mingi Jeon, Aditi, Yo-Sub Han, Sang-Ki Ko
- Abstract summary: Code time complexity prediction involves various intricate factors such as the input range of variables and conditional loops. Current benchmarks fall short of providing a rigorous assessment due to limited data, language constraints, and insufficient labeling. We introduce CodeComplex, the first robust and extensive dataset designed to evaluate LLMs' reasoning abilities in predicting code time complexity.
- Score: 7.974618854858136
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The reasoning ability of Large Language Models (LLMs) is crucial, especially in complex decision-making tasks. One significant task for demonstrating LLMs' reasoning capability is code time complexity prediction, which involves various intricate factors such as the input range of variables and conditional loops. Current benchmarks fall short of providing a rigorous assessment due to limited data, language constraints, and insufficient labeling. They do not consider time complexity based on input representation and merely evaluate whether predictions fall into the same class, lacking a measure of how close incorrect predictions are to the correct ones. To address these limitations, we introduce CodeComplex, the first robust and extensive dataset designed to evaluate LLMs' reasoning abilities in predicting code time complexity. CodeComplex comprises 4,900 Java codes and an equivalent number of Python codes, overcoming language and labeling constraints, carefully annotated with complexity labels based on input characteristics by a panel of algorithmic experts. Additionally, we propose specialized evaluation metrics for the reasoning of complexity prediction tasks, offering a more precise and reliable assessment of LLMs' reasoning capabilities. We release our dataset (https://github.com/sybaik1/CodeComplex-Data) and baseline models (https://github.com/sybaik1/CodeComplex-Models) publicly to encourage the relevant (NLP, SE, and PL) communities to utilize and participate in this research.
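The abstract emphasizes metrics that credit near-miss predictions instead of plain class accuracy. The snippet below is a minimal illustrative sketch of that idea, not the paper's published metric: the seven-class ordering and the 1/(1 + distance) weighting are assumptions made only for illustration.

```python
# Illustrative sketch only: a distance-weighted score over an ordered set of
# complexity classes. The actual CodeComplex metrics are defined in the paper;
# the class list and the 1/(1 + distance) weighting here are assumptions.
CLASSES = ["O(1)", "O(log n)", "O(n)", "O(n log n)", "O(n^2)", "O(n^3)", "exponential"]
RANK = {c: i for i, c in enumerate(CLASSES)}

def hierarchy_aware_score(y_true, y_pred):
    """Average credit per prediction: 1.0 for an exact match, decaying as the
    predicted class moves further from the true class in the ordering."""
    total = 0.0
    for t, p in zip(y_true, y_pred):
        distance = abs(RANK[t] - RANK[p])
        total += 1.0 / (1.0 + distance)
    return total / len(y_true)

# Predicting O(n log n) for an O(n) solution scores 0.5; an exact match scores 1.0.
print(hierarchy_aware_score(["O(n)", "O(n^2)"], ["O(n log n)", "O(n^2)"]))  # 0.75
```

Under such a scheme, mistaking O(n) for O(n log n) is penalized far less than mistaking it for exponential time, which is the kind of graded assessment the abstract calls for.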
Related papers
- BigO(Bench) -- Can LLMs Generate Code with Controlled Time and Space Complexity? [20.550427148810556]
BigO(Bench) is a novel coding benchmark designed to evaluate the capabilities of generative language models in understanding and generating code with specified time and space complexities.
BigO(Bench) includes tooling to infer the algorithmic complexity of any Python function from profiling measurements.
We present results from evaluating multiple state-of-the-art language models on this benchmark, highlighting their strengths and weaknesses in handling complexity requirements.
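The profiling-based inference mentioned above can be approximated by timing a function at growing input sizes and fitting the measurements against candidate growth curves. The sketch below illustrates that general idea only; it is not BigO(Bench)'s actual tooling, and the candidate set and least-squares fit are assumptions.

```python
# Illustrative sketch: infer a complexity label from wall-clock measurements by
# least-squares fitting against candidate growth curves. Not BigO(Bench) code.
import math
import time

CANDIDATES = {
    "O(1)":       lambda n: 1.0,
    "O(log n)":   lambda n: math.log(n),
    "O(n)":       lambda n: float(n),
    "O(n log n)": lambda n: n * math.log(n),
    "O(n^2)":     lambda n: float(n * n),
}

def infer_complexity(func, sizes=(1_000, 2_000, 4_000, 8_000, 16_000)):
    # Measure running time at each input size.
    times = []
    for n in sizes:
        start = time.perf_counter()
        func(n)
        times.append(time.perf_counter() - start)
    # Pick the candidate curve whose best scaling explains the timings with
    # the smallest squared error.
    best_label, best_err = None, float("inf")
    for label, curve in CANDIDATES.items():
        feats = [curve(n) for n in sizes]
        scale = sum(f * t for f, t in zip(feats, times)) / sum(f * f for f in feats)
        err = sum((t - scale * f) ** 2 for f, t in zip(feats, times))
        if err < best_err:
            best_label, best_err = label, err
    return best_label

# Example: sorting an already-structured list; the measured label will vary by machine.
print(infer_complexity(lambda n: sorted(range(n, 0, -1))))
```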
arXiv Detail & Related papers (2025-03-19T14:19:57Z)
- TCProF: Time-Complexity Prediction SSL Framework [3.803993344850168]
Time complexity is a theoretical measure of the amount of time an algorithm needs for its execution.
We introduce TCProF: a Time-Complexity Prediction SSL Framework.
TCProF significantly boosts performance by integrating data augmentation, symbolic modules, and a co-training mechanism.
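As a concrete reminder of what such a predictor must decide, the toy functions below (not drawn from TCProF or CodeComplex) compute the same thing with different worst-case time complexities.

```python
# Two equivalent predicates with different worst-case time complexity labels.
def contains_duplicate_quadratic(xs):
    # Compares every pair of elements: O(n^2) in the worst case.
    for i in range(len(xs)):
        for j in range(i + 1, len(xs)):
            if xs[i] == xs[j]:
                return True
    return False

def contains_duplicate_linear(xs):
    # Single pass with a hash set: O(n) time on average.
    seen = set()
    for x in xs:
        if x in seen:
            return True
        seen.add(x)
    return False
```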
arXiv Detail & Related papers (2025-02-10T12:39:33Z)
- OpenCoder: The Open Cookbook for Top-Tier Code Large Language Models [76.59316249991657]
Large language models (LLMs) for code have become indispensable in various domains, including code generation, reasoning tasks and agent systems.
While open-access code LLMs are increasingly approaching the performance levels of proprietary models, high-quality, fully open code LLMs remain limited.
We introduce OpenCoder, a top-tier code LLM that not only achieves performance comparable to leading models but also serves as an "open cookbook" for the research community.
arXiv Detail & Related papers (2024-11-07T17:47:25Z)
- Contextualized Data-Wrangling Code Generation in Computational Notebooks [131.26365849822932]
We propose an automated approach, CoCoMine, to mine data-wrangling code generation examples with clear multi-modal contextual dependency.
We construct CoCoNote, a dataset containing 58,221 examples for Contextualized Data-wrangling Code generation in Notebooks.
Experiment results demonstrate the significance of incorporating data context in data-wrangling code generation.
arXiv Detail & Related papers (2024-09-20T14:49:51Z)
- Benchmarking Complex Instruction-Following with Multiple Constraints Composition [72.82640456309821]
How to evaluate the ability of large language models (LLMs) to follow complex instructions has become a critical research problem.
Existing benchmarks mainly focus on modeling different types of constraints in human instructions while neglecting the composition of different constraints.
We propose ComplexBench, a benchmark for comprehensively evaluating the ability of LLMs to follow complex instructions composed of multiple constraints.
arXiv Detail & Related papers (2024-07-04T14:50:45Z)
- MapCoder: Multi-Agent Code Generation for Competitive Problem Solving [3.3856216159724983]
We introduce a new approach to code generation tasks leveraging multi-agent prompting.
Our framework, MapCoder, consists of four LLM agents specifically designed to emulate the stages of program synthesis.
Our method consistently delivers superior performance across various programming languages.
arXiv Detail & Related papers (2024-05-18T22:10:15Z)
- CoCoST: Automatic Complex Code Generation with Online Searching and Correctness Testing [51.00909683314142]
Large Language Models have revolutionized code generation by converting natural language descriptions into executable code.
The CoCoST framework enhances complex code generation through online searching for additional information with planned queries and through correctness testing for code refinement.
CoCoST is validated through rigorous experiments on the DS-1000 and ClassEval datasets.
arXiv Detail & Related papers (2024-03-20T13:33:55Z)
- Automatizing Software Cognitive Complexity Reduction through Integer Linear Programming [1.1970409518725493]
Recently, we modeled software cognitive complexity reduction as an optimization problem and proposed an approach to assist developers with this task.
This approach enumerates sequences of code extraction operations until a stopping criterion is met. As a result, it returns the minimal sequence of code extraction operations that reduces the cognitive complexity of the code to the given threshold.
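Read as a search problem, the enumeration above can be sketched as a breadth-first search over sets of extraction operations that stops at the first set meeting the threshold. This is only a schematic reading of the summary, not the paper's ILP formulation; the operation names and cost function below are placeholders.

```python
# Schematic sketch: find a minimal set of extraction operations that brings
# cognitive complexity under a threshold. The operations and the cost function
# are placeholders, not the paper's ILP model.
from collections import deque

def minimal_extraction_sequence(operations, complexity_after, threshold):
    """Breadth-first search over operation subsets, so the first subset that
    meets the threshold is also one of the smallest."""
    queue = deque([()])   # start from the empty sequence of extractions
    seen = {()}
    while queue:
        seq = queue.popleft()
        if complexity_after(seq) <= threshold:
            return list(seq)
        for op in operations:
            if op in seq:
                continue
            nxt = tuple(sorted(seq + (op,)))
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return None  # no combination of extractions reaches the threshold

# Toy usage: each extraction removes 6 points from an initial complexity of 20.
print(minimal_extraction_sequence(["op_a", "op_b", "op_c"],
                                  lambda seq: 20 - 6 * len(seq), threshold=9))
```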
arXiv Detail & Related papers (2024-02-08T10:53:00Z)
- SparseCoder: Identifier-Aware Sparse Transformer for File-Level Code Summarization [51.67317895094664]
This paper studies file-level code summarization, which can assist programmers in understanding and maintaining large source code projects.
We propose SparseCoder, an identifier-aware sparse transformer for effectively handling long code sequences.
arXiv Detail & Related papers (2024-01-26T09:23:27Z)
- MuSR: Testing the Limits of Chain-of-thought with Multistep Soft Reasoning [63.80739044622555]
We introduce MuSR, a dataset for evaluating language models on soft reasoning tasks specified in a natural language narrative.
This dataset has two crucial features. First, it is created through a novel neurosymbolic synthetic-to-natural generation algorithm.
Second, our dataset instances are free text narratives corresponding to real-world domains of reasoning.
arXiv Detail & Related papers (2023-10-24T17:59:20Z)
- Can Large Language Models Understand Real-World Complex Instructions? [54.86632921036983]
Large language models (LLMs) can understand human instructions, but struggle with complex instructions.
Existing benchmarks are insufficient to assess LLMs' ability to understand complex instructions.
We propose CELLO, a benchmark for evaluating LLMs' ability to follow complex instructions systematically.
arXiv Detail & Related papers (2023-09-17T04:18:39Z)
- When Do Program-of-Thoughts Work for Reasoning? [51.2699797837818]
We propose the complexity-impacted reasoning score (CIRS) to measure the correlation between code and reasoning abilities.
Specifically, we use the abstract syntax tree to encode the structural information and calculate logical complexity.
Code will be integrated into the EasyInstruct framework at https://github.com/zjunlp/EasyInstruct.
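The AST-based encoding can be illustrated with Python's standard ast module. The features below (node count, nesting depth, and a count of branching/looping nodes) are a crude proxy chosen for illustration; the actual CIRS definition in the paper differs.

```python
# Illustrative only: crude structural/logical features from an abstract syntax
# tree. CIRS combines structural and logical attributes differently; the
# feature set here is an assumption made for illustration.
import ast

LOGIC_NODES = (ast.If, ast.For, ast.While, ast.BoolOp, ast.Compare)

def ast_features(source: str) -> dict:
    tree = ast.parse(source)
    nodes = list(ast.walk(tree))

    def depth(node, d=1):
        children = list(ast.iter_child_nodes(node))
        return d if not children else max(depth(c, d + 1) for c in children)

    return {
        "node_count": len(nodes),                                      # structural size
        "max_depth": depth(tree),                                      # nesting
        "logic_count": sum(isinstance(n, LOGIC_NODES) for n in nodes), # branches and loops
    }

print(ast_features("for i in range(10):\n    if i % 2 == 0:\n        print(i)"))
```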
arXiv Detail & Related papers (2023-08-29T17:22:39Z)
- TASTY: A Transformer based Approach to Space and Time complexity [0.4724825031148411]
Code-based Language Models (LMs) have shown very promising results in the field of software engineering.
We create a labelled dataset of code snippets spanning multiple languages.
We propose to use LMs to find space complexities from code, and to the best of our knowledge, this is the first attempt to do so.
arXiv Detail & Related papers (2023-05-06T03:37:44Z)
- Enhancing Semantic Code Search with Multimodal Contrastive Learning and Soft Data Augmentation [50.14232079160476]
We propose a new approach with multimodal contrastive learning and soft data augmentation for code search.
We conduct extensive experiments to evaluate the effectiveness of our approach on a large-scale dataset with six programming languages.
arXiv Detail & Related papers (2022-04-07T08:49:27Z)
- Competition-Level Code Generation with AlphaCode [74.87216298566942]
We introduce AlphaCode, a system for code generation that can create novel solutions to problems that require deeper reasoning.
In simulated evaluations on recent programming competitions on the Codeforces platform, AlphaCode achieved on average a ranking of top 54.3%.
arXiv Detail & Related papers (2022-02-08T23:16:31Z)
- Detecting Requirements Smells With Deep Learning: Experiences, Challenges and Future Work [9.44316959798363]
This work aims to improve the previous work by creating a manually labeled dataset and using ensemble learning, Deep Learning (DL), and techniques such as word embeddings and transfer learning to overcome the generalization problem.
The current findings show that the dataset is unbalanced and indicate which classes need additional examples.
arXiv Detail & Related papers (2021-08-06T12:45:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.