Capture the Flag: Uncovering Data Insights with Large Language Models
- URL: http://arxiv.org/abs/2312.13876v1
- Date: Thu, 21 Dec 2023 14:20:06 GMT
- Title: Capture the Flag: Uncovering Data Insights with Large Language Models
- Authors: Issam Laradji, Perouz Taslakian, Sai Rajeswar, Valentina Zantedeschi,
Alexandre Lacoste, Nicolas Chapados, David Vazquez, Christopher Pal,
Alexandre Drouin
- Abstract summary: This study explores the potential of using Large Language Models (LLMs) to automate the discovery of insights in data.
We propose a new evaluation methodology based on a "capture the flag" principle, measuring the ability of such models to recognize meaningful and pertinent information (flags) in a dataset.
- Score: 90.47038584812925
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The extraction of a small number of relevant insights from vast amounts of
data is a crucial component of data-driven decision-making. However,
accomplishing this task requires considerable technical skills, domain
expertise, and human labor. This study explores the potential of using Large
Language Models (LLMs) to automate the discovery of insights in data,
leveraging recent advances in reasoning and code generation techniques. We
propose a new evaluation methodology based on a "capture the flag" principle,
measuring the ability of such models to recognize meaningful and pertinent
information (flags) in a dataset. We further propose two proof-of-concept
agents, with different inner workings, and compare their ability to capture
such flags in a real-world sales dataset. While the work reported here is
preliminary, our results are sufficiently interesting to mandate future
exploration by the community.
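The abstract describes evaluation by planting known insights ("flags") and checking whether an agent's discovered insights capture them. The paper's actual matching criteria are not given in this abstract; the following is a minimal illustrative sketch in which a hypothetical `captured` check uses naive keyword overlap between a flag and a proposed insight.

```python
def captured(flag: str, insight: str, threshold: float = 0.5) -> bool:
    """Return True if enough of the flag's keywords appear in the insight.

    Keyword overlap is a placeholder matching rule, not the paper's method.
    """
    flag_words = set(flag.lower().split())
    insight_words = set(insight.lower().split())
    overlap = len(flag_words & insight_words) / len(flag_words)
    return overlap >= threshold


def capture_rate(flags: list[str], insights: list[str]) -> float:
    """Fraction of planted flags recovered by at least one proposed insight."""
    hits = sum(any(captured(f, i) for i in insights) for f in flags)
    return hits / len(flags)


flags = ["sales dropped sharply in region north in March"]
insights = ["North region sales dropped sharply during March"]
print(capture_rate(flags, insights))  # prints 1.0
```

In this framing, agents with different inner workings can be compared on the same dataset simply by their capture rate over the same set of planted flags.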
Related papers
- Using Large Language Models to Generate Authentic Multi-agent Knowledge Work Datasets [5.465422605475246]
Current publicly available knowledge work data collections lack diversity, extensive annotations, and contextual information about the users and their documents.
This paper introduces our approach's design and vision and focuses on generating authentic knowledge work documents using Large Language Models.
In our study, human raters judged 53% of the generated documents and 74% of the real documents to be realistic, demonstrating the potential of our approach.
arXiv Detail & Related papers (2024-09-06T13:53:28Z)
- DataAgent: Evaluating Large Language Models' Ability to Answer Zero-Shot, Natural Language Queries [0.0]
We evaluate OpenAI's GPT-3.5 as a "Language Data Scientist" (LDS)
The model was tested on a diverse set of benchmark datasets to evaluate its performance across multiple standards.
arXiv Detail & Related papers (2024-03-29T22:59:34Z)
- Large Language Models (LLMs) on Tabular Data: Prediction, Generation, and Understanding -- A Survey [17.19337964440007]
There is currently no comprehensive review that summarizes and compares the key techniques, metrics, datasets, models, and optimization approaches in this research domain.
This survey aims to address this gap by consolidating recent progress in these areas, offering a thorough survey and taxonomy of the datasets, metrics, and methodologies utilized.
It identifies strengths, limitations, unexplored territories, and gaps in the existing literature, while providing some insights for future research directions in this vital and rapidly evolving field.
arXiv Detail & Related papers (2024-02-27T23:59:01Z)
- Learning to Extract Structured Entities Using Language Models [52.281701191329]
Recent advances in machine learning have significantly impacted the field of information extraction.
We reformulate the task to be entity-centric, enabling the use of diverse metrics.
We contribute to the field by introducing Structured Entity Extraction and proposing the Approximate Entity Set OverlaP metric.
arXiv Detail & Related papers (2024-02-06T22:15:09Z)
- Query of CC: Unearthing Large Scale Domain-Specific Knowledge from Public Corpora [104.16648246740543]
We propose an efficient data collection method based on large language models.
The method bootstraps seed information through a large language model and retrieves related data from public corpora.
It not only collects knowledge-related data for specific domains but also unearths data with potential reasoning procedures.
arXiv Detail & Related papers (2024-01-26T03:38:23Z)
- Modeling Entities as Semantic Points for Visual Information Extraction in the Wild [55.91783742370978]
We propose an alternative approach to precisely and robustly extract key information from document images.
We explicitly model entities as semantic points, i.e., center points of entities are enriched with semantic information describing the attributes and relationships of different entities.
The proposed method can achieve significantly enhanced performance on entity labeling and linking, compared with previous state-of-the-art models.
arXiv Detail & Related papers (2023-03-23T08:21:16Z)
- An Empirical Investigation of Commonsense Self-Supervision with Knowledge Graphs [67.23285413610243]
Self-supervision based on the information extracted from large knowledge graphs has been shown to improve the generalization of language models.
We study the effect of knowledge sampling strategies and sizes that can be used to generate synthetic data for adapting language models.
arXiv Detail & Related papers (2022-05-21T19:49:04Z)
- "FIJO": a French Insurance Soft Skill Detection Dataset [0.0]
This article proposes a new public dataset, FIJO, containing insurance job offers, including many soft skill annotations.
We present the results of skill detection algorithms using a named entity recognition approach and show that transformer-based models achieve good token-wise performance on this dataset.
arXiv Detail & Related papers (2022-04-11T15:54:22Z)
- Data and its (dis)contents: A survey of dataset development and use in machine learning research [11.042648980854487]
We survey the many concerns raised about the way we collect and use data in machine learning.
We advocate that a more cautious and thorough understanding of data is necessary to address several of the practical and ethical issues of the field.
arXiv Detail & Related papers (2020-12-09T22:13:13Z)
- Relation-Guided Representation Learning [53.60351496449232]
We propose a new representation learning method that explicitly models and leverages sample relations.
Our framework well preserves the relations between samples.
By seeking to embed samples into a subspace, we show that our method can address the large-scale and out-of-sample problems.
arXiv Detail & Related papers (2020-07-11T10:57:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.