On Leveraging Large Language Models for Enhancing Entity Resolution
- URL: http://arxiv.org/abs/2401.03426v1
- Date: Sun, 7 Jan 2024 09:06:58 GMT
- Title: On Leveraging Large Language Models for Enhancing Entity Resolution
- Authors: Huahang Li, Longyu Feng, Shuangyin Li, Fei Hao, Chen Jason Zhang,
Yuanfeng Song, Lei Chen
- Abstract summary: We introduce strategies for the efficient utilization of Large Language Models (LLMs) in the entity resolution process.
Our approach chooses the most effective matching questions while keeping cost within a given budget.
We evaluate the effectiveness of our approach using entropy as a metric, and our experimental results demonstrate the efficiency and effectiveness of our proposed methods.
- Score: 11.668263762236343
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Entity resolution, the task of identifying and consolidating records that
pertain to the same real-world entity, plays a pivotal role in various sectors
such as e-commerce, healthcare, and law enforcement. The emergence of Large
Language Models (LLMs) like GPT-4 has introduced a new dimension to this task,
leveraging their advanced linguistic capabilities. This paper explores the
potential of LLMs in the entity resolution process, shedding light on both
their advantages and the computational complexities associated with large-scale
matching. We introduce strategies for the efficient utilization of LLMs,
including the selection of an optimal set of matching questions, namely MQsSP,
which is proven to be an NP-hard problem. Our approach chooses the most
effective matching questions while keeping cost within a given budget.
Additionally, we propose a method to adjust the distribution of possible
partitions after receiving responses from LLMs, with the goal of reducing the
partitions after receiving responses from LLMs, with the goal of reducing the
uncertainty of entity resolution. We evaluate the effectiveness of our approach
using entropy as a metric, and our experimental results demonstrate the
efficiency and effectiveness of our proposed methods, offering promising
prospects for real-world applications.
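The abstract's core loop can be illustrated concretely: maintain a probability distribution over candidate partitions of the records, ask an LLM a matching question, discard partitions inconsistent with the answer, renormalize, and measure the remaining uncertainty with Shannon entropy. The sketch below is an illustration of that idea, not the paper's actual algorithm; the prior, the partitions, and the question are all hypothetical.

```python
import math

def entropy(dist):
    """Shannon entropy (in bits) of a distribution over candidate partitions."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def update(dist, consistent):
    """Adjust the distribution after an LLM response: keep only partitions
    consistent with the answer, then renormalize."""
    posterior = {part: p for part, p in dist.items() if part in consistent}
    z = sum(posterior.values())
    return {part: p / z for part, p in posterior.items()}

# Hypothetical prior over candidate partitions of three records r1, r2, r3.
prior = {
    "{r1,r2},{r3}": 0.5,
    "{r1},{r2,r3}": 0.3,
    "{r1},{r2},{r3}": 0.2,
}

# Suppose the LLM answers "yes" to the matching question "do r1 and r2
# refer to the same entity?"; only partitions grouping r1 with r2 survive.
posterior = update(prior, {"{r1,r2},{r3}"})

print(round(entropy(prior), 3))      # uncertainty before asking
print(round(entropy(posterior), 3))  # 0.0: a single partition remains
```

Under this framing, selecting which questions to ask within a budget (the MQsSP problem the paper proves NP-hard) amounts to picking the set of questions whose answers are expected to reduce this entropy the most per unit cost.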
Related papers
- Exploring and Benchmarking the Planning Capabilities of Large Language Models [57.23454975238014]
We construct a benchmark suite encompassing both classical planning domains and natural language scenarios.
Second, we investigate the use of in-context learning (ICL) to enhance LLM planning, exploring the direct relationship between increased context length and improved planning performance.
Third, we demonstrate the positive impact of fine-tuning LLMs on optimal planning paths, as well as the effectiveness of incorporating model-driven search procedures.
arXiv Detail & Related papers (2024-06-18T22:57:06Z) - Meta Reasoning for Large Language Models [58.87183757029041]
We introduce Meta-Reasoning Prompting (MRP), a novel and efficient system prompting method for large language models (LLMs).
MRP guides LLMs to dynamically select and apply different reasoning methods based on the specific requirements of each task.
We evaluate the effectiveness of MRP through comprehensive benchmarks.
arXiv Detail & Related papers (2024-06-17T16:14:11Z) - Adaptive Reinforcement Learning Planning: Harnessing Large Language Models for Complex Information Extraction [37.12990710443406]
Existing research on large language models (LLMs) shows that they can solve information extraction tasks through multi-step planning.
We observe that decomposing complex extraction tasks and extracting them step by step can effectively improve LLMs' performance.
This paper proposes a two-stage multi-step method for LLM-based information extraction and adopts the RL framework to execute the multi-step planning.
arXiv Detail & Related papers (2024-06-17T12:11:01Z) - Fine-Tuning Large Vision-Language Models as Decision-Making Agents via Reinforcement Learning [79.38140606606126]
We propose an algorithmic framework that fine-tunes vision-language models (VLMs) with reinforcement learning (RL).
Our framework provides a task description and then prompts the VLM to generate chain-of-thought (CoT) reasoning.
We demonstrate that our proposed framework enhances the decision-making capabilities of VLM agents across various tasks.
arXiv Detail & Related papers (2024-05-16T17:50:19Z) - Enhancing Decision-Making in Optimization through LLM-Assisted Inference: A Neural Networks Perspective [1.0420394952839245]
This paper explores the seamless integration of Generative AI (GenAI) and Evolutionary Algorithms (EAs).
Focusing on the transformative role of Large Language Models (LLMs), our study investigates the potential of LLM-Assisted Inference to automate and enhance decision-making processes.
arXiv Detail & Related papers (2024-05-12T08:22:53Z) - A Survey on Efficient Inference for Large Language Models [25.572035747669275]
Large Language Models (LLMs) have attracted extensive attention due to their remarkable performance across various tasks.
The substantial computational and memory requirements of LLM inference pose challenges for deployment in resource-constrained scenarios.
This paper presents a comprehensive survey of the existing literature on efficient LLM inference.
arXiv Detail & Related papers (2024-04-22T15:53:08Z) - From Large Language Models and Optimization to Decision Optimization
CoPilot: A Research Manifesto [2.4981381729038743]
We propose research at the intersection of Large Language Models and optimization to create a Decision Optimization CoPilot (DOCP).
DOCP is an AI tool designed to assist any decision maker, interacting in natural language to grasp the business problem, subsequently formulating and solving the corresponding optimization model.
We show that a) LLMs already provide substantial novel capabilities relevant to a DOCP, and b) major research challenges remain to be addressed.
arXiv Detail & Related papers (2024-02-26T03:10:11Z) - PhaseEvo: Towards Unified In-Context Prompt Optimization for Large
Language Models [9.362082187605356]
We present PhaseEvo, an efficient automatic prompt optimization framework that combines the generative capability of LLMs with the global search proficiency of evolution algorithms.
PhaseEvo significantly outperforms the state-of-the-art baseline methods by a large margin whilst maintaining good efficiency.
arXiv Detail & Related papers (2024-02-17T17:47:10Z) - Corex: Pushing the Boundaries of Complex Reasoning through Multi-Model Collaboration [83.4031923134958]
Corex is a suite of novel general-purpose strategies that transform Large Language Models into autonomous agents.
Inspired by human behaviors, Corex is constituted by diverse collaboration paradigms including Debate, Review, and Retrieve modes.
We demonstrate that orchestrating multiple LLMs to work in concert yields substantially better performance compared to existing methods.
arXiv Detail & Related papers (2023-09-30T07:11:39Z) - Robust Prompt Optimization for Large Language Models Against
Distribution Shifts [80.6757997074956]
Large Language Model (LLM) has demonstrated significant ability in various Natural Language Processing tasks.
We propose a new problem of robust prompt optimization for LLMs against distribution shifts.
This problem requires that a prompt optimized over a labeled source group simultaneously generalize to an unlabeled target group.
arXiv Detail & Related papers (2023-05-23T11:30:43Z) - Towards Deployment-Efficient Reinforcement Learning: Lower Bound and
Optimality [141.89413461337324]
Deployment efficiency is an important criterion for many real-world applications of reinforcement learning (RL).
We propose a theoretical formulation for deployment-efficient RL (DE-RL) from an "optimization with constraints" perspective.
arXiv Detail & Related papers (2022-02-14T01:31:46Z)
This list is automatically generated from the titles and abstracts of the papers on this site.