Data Interpreter: An LLM Agent For Data Science
- URL: http://arxiv.org/abs/2402.18679v3
- Date: Tue, 12 Mar 2024 17:26:53 GMT
- Title: Data Interpreter: An LLM Agent For Data Science
- Authors: Sirui Hong, Yizhang Lin, Bang Liu, Bangbang Liu, Binhao Wu, Danyang
Li, Jiaqi Chen, Jiayi Zhang, Jinlin Wang, Li Zhang, Lingyao Zhang, Min Yang,
Mingchen Zhuge, Taicheng Guo, Tuo Zhou, Wei Tao, Wenyi Wang, Xiangru Tang,
Xiangtao Lu, Xiawu Zheng, Xinbing Liang, Yaying Fei, Yuheng Cheng, Zongze Xu,
Chenglin Wu
- Abstract summary: The Data Interpreter is a solution designed to solve data science problems with code.
It emphasizes three pivotal techniques to augment problem-solving in data science.
It showed a 26% increase in the MATH dataset and a remarkable 112% improvement in open-ended tasks.
- Score: 43.99482533437711
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Large Language Model (LLM)-based agents have demonstrated remarkable
effectiveness. However, their performance can be compromised in data science
scenarios that require real-time data adjustment, expertise in optimization due
to complex dependencies among various tasks, and the ability to identify
logical errors for precise reasoning. In this study, we introduce the Data
Interpreter, a solution designed to solve data science problems with code. It
emphasizes three pivotal techniques to augment problem-solving in data science:
1) dynamic planning with hierarchical graph structures for real-time data
adaptability; 2) dynamic tool integration to enhance code proficiency during
execution, enriching the requisite expertise; 3) identification of logical
inconsistencies in feedback, and efficiency enhancement through experience
recording. We evaluate
the Data Interpreter on various data science and real-world tasks. Compared to
open-source baselines, it demonstrated superior performance, exhibiting
significant improvements in machine learning tasks, increasing from 0.86 to
0.95. Additionally, it showed a 26% increase in the MATH dataset and a
remarkable 112% improvement in open-ended tasks. The solution will be released
at https://github.com/geekan/MetaGPT.
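The first technique, dynamic planning over a hierarchical task graph, can be illustrated with a minimal sketch. The task names, dependency structure, and retry-based "replanning" below are hypothetical simplifications for illustration, not the MetaGPT / Data Interpreter API:

```python
from graphlib import TopologicalSorter

# Hypothetical task graph for a data science pipeline: each key is a task,
# each value is the set of tasks it depends on.
plan = {
    "load_data": set(),
    "clean_data": {"load_data"},
    "feature_engineering": {"clean_data"},
    "train_model": {"feature_engineering"},
    "evaluate": {"train_model"},
}

def execute_plan(plan, run_task):
    """Run tasks in dependency order. A failed task triggers a dynamic
    adjustment, simplified here to a single retry with prior results."""
    order = TopologicalSorter(plan).static_order()
    results = {}
    for task in order:
        ok, output = run_task(task, results)
        if not ok:  # real systems would replan the downstream subgraph
            ok, output = run_task(task, results)
        results[task] = output
    return results
```

In a full agent, `run_task` would generate and execute code for each node; the graph structure is what lets the planner re-derive only the affected subgraph when upstream data changes.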
Related papers
- AvaTaR: Optimizing LLM Agents for Tool-Assisted Knowledge Retrieval [93.96463520716759]
Large language model (LLM) agents have demonstrated impressive capability in utilizing external tools and knowledge to boost accuracy and reduce hallucinations.
Here, we introduce AvaTaR, a novel framework that optimizes an LLM agent to effectively use the provided tools and improve its performance on a given task/domain.
We find AvaTaR consistently outperforms state-of-the-art approaches across all four challenging tasks and exhibits strong generalization ability when applied to novel cases.
arXiv Detail & Related papers (2024-06-17T04:20:02Z)
- Unveiling the Impact of Coding Data Instruction Fine-Tuning on Large Language Models Reasoning [64.5243480989869]
Instruction Fine-Tuning (IFT) significantly enhances the zero-shot capabilities of pretrained Large Language Models (LLMs).
This paper investigates how coding data impact LLMs' reasoning capacities during the IFT stage.
arXiv Detail & Related papers (2024-05-30T23:20:25Z)
- Exploring Prompting Methods for Mitigating Class Imbalance through Synthetic Data Generation with Large Language Models [39.347666307218006]
Large language models (LLMs) have demonstrated impressive in-context learning capabilities across various domains.
Inspired by this, our study explores the effectiveness of LLMs in generating realistic data to mitigate class imbalance.
Our findings indicate that using CSV format, balancing classes, and employing unique variable mapping produces realistic and reliable data.
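As a rough illustration of that recipe, the sketch below counts class frequencies and assembles a CSV-format prompt requesting synthetic rows for the underrepresented classes. The function name, prompt wording, and row format are assumptions for illustration; the actual LLM call is omitted:

```python
from collections import Counter

def balancing_prompt(rows, label_key, header):
    """Compute per-class deficits relative to the majority class and build a
    CSV-format prompt asking an LLM to synthesize the missing rows."""
    counts = Counter(r[label_key] for r in rows)
    target = max(counts.values())
    deficits = {c: target - n for c, n in counts.items() if n < target}
    # A few real rows serve as in-context examples, serialized as CSV.
    csv_examples = "\n".join(
        ",".join(str(r[h]) for h in header) for r in rows[:5]
    )
    asks = "; ".join(
        f"{k} more rows with {label_key}={c}" for c, k in deficits.items()
    )
    prompt = (
        f"Here are example rows in CSV format ({','.join(header)}):\n"
        f"{csv_examples}\n"
        f"Generate {asks}, one row per line, matching the patterns above."
    )
    return deficits, prompt
```

The returned prompt would be sent to an LLM, and the generated CSV rows parsed and appended to the minority classes.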
arXiv Detail & Related papers (2024-04-15T17:49:16Z)
- LESS: Selecting Influential Data for Targeted Instruction Tuning [64.78894228923619]
We propose LESS, an efficient algorithm to estimate data influences and perform Low-rank gradiEnt Similarity Search for instruction data selection.
We show that training on a LESS-selected 5% of the data can often outperform training on the full dataset across diverse downstream tasks.
Our method goes beyond surface form cues to identify data that exemplifies the necessary reasoning skills for the intended downstream application.
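The selection step can be sketched as a gradient-similarity search: score each training example by the cosine similarity between its gradient features and the mean target-task gradient, then keep the top-k. The array shapes below are illustrative, and LESS's low-rank random projection of the gradients is omitted:

```python
import numpy as np

def select_influential(train_grads, target_grads, k):
    """Rank training examples by cosine similarity between their gradient
    vectors (rows of train_grads) and the mean gradient of the target task,
    returning the indices of the k highest-scoring examples."""
    t = target_grads.mean(axis=0)
    t = t / np.linalg.norm(t)
    g = train_grads / np.linalg.norm(train_grads, axis=1, keepdims=True)
    scores = g @ t                          # cosine similarity per example
    return np.argsort(scores)[::-1][:k]     # indices of top-k examples
```

In the actual method the gradient features come from a warmup-trained model and are projected to low dimension before the search, which is what makes the similarity computation tractable at scale.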
arXiv Detail & Related papers (2024-02-06T19:18:04Z)
- Dynamics of Instruction Tuning: Each Ability of Large Language Models Has Its Own Growth Pace [21.015261553612643]
We present a dataset with over 40k instances across ten abilities and examine instruction-tuned models with 7b to 33b parameters.
Our study reveals three primary findings: (i) Despite the models' overall performance being tied to data and parameter scale, individual abilities have different sensitivities to these factors.
Human-curated data strongly outperforms synthetic data from GPT-4 in efficiency and can consistently enhance model performance with volume increases.
arXiv Detail & Related papers (2023-10-30T15:37:10Z)
- Automatic Data Augmentation via Invariance-Constrained Learning [94.27081585149836]
Underlying data structures are often exploited to improve the solution of learning tasks.
Data augmentation induces these symmetries during training by applying multiple transformations to the input data.
This work tackles these issues by automatically adapting the data augmentation while solving the learning task.
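A minimal sketch of that adaptation idea: keep only the candidate transformations under which the model's loss stays approximately invariant. The loss function, transformations, and tolerance below are placeholders, not the paper's constrained-learning formulation:

```python
def adapt_augmentation(x, model_loss, transforms, tol=0.1):
    """Filter candidate transformations, keeping those to which the task is
    (approximately) invariant: the loss on transformed data must stay within
    `tol` of the loss on clean data."""
    base = model_loss(x)
    return [t for t in transforms if abs(model_loss(t(x)) - base) <= tol]
```

During training, the surviving transformations would then be applied to the input data, so the augmentation policy tracks the invariances the task actually exhibits rather than a fixed hand-picked set.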
arXiv Detail & Related papers (2022-09-29T18:11:01Z)
- Is More Data Better? Re-thinking the Importance of Efficiency in Abusive Language Detection with Transformers-Based Active Learning [13.369630848913305]
We show that transformers-based active learning is a promising approach to substantially raise efficiency whilst still maintaining high effectiveness.
This approach requires a fraction of labeled data to reach performance equivalent to training over the full dataset.
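The core acquisition step of such an active learning loop can be sketched as least-confidence sampling; the surrounding transformer training and human labeling loop is assumed:

```python
import numpy as np

def least_confidence_batch(probs, batch_size):
    """From model class probabilities over the unlabeled pool (one row per
    example), select the examples whose top-class confidence is lowest,
    i.e. the ones the model is least sure about, for annotation."""
    confidence = probs.max(axis=1)
    return np.argsort(confidence)[:batch_size]
```

Each round, the model is retrained on the labeled set plus the newly annotated batch; because labeling effort concentrates on uncertain examples, far fewer labels are needed to match full-dataset performance.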
arXiv Detail & Related papers (2022-09-21T08:47:06Z)
- Nemo: Guiding and Contextualizing Weak Supervision for Interactive Data Programming [77.38174112525168]
We present Nemo, an end-to-end interactive weak supervision (WS) system that improves the overall productivity of the WS learning pipeline by an average of 20% (and up to 47% in one task) compared to the prevailing WS approach.
arXiv Detail & Related papers (2022-03-02T19:57:32Z)
- Improving the Performance of Fine-Grain Image Classifiers via Generative Data Augmentation [0.5161531917413706]
We develop Data Augmentation from Proficient Pre-Training of Robust Generative Adversarial Networks (DAPPER GAN).
DAPPER GAN is an ML analytics support tool that automatically generates novel views of training images.
We experimentally evaluate this technique on the Stanford Cars dataset, demonstrating improved vehicle make and model classification accuracy.
arXiv Detail & Related papers (2020-08-12T15:29:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.