STRUX: An LLM for Decision-Making with Structured Explanations
- URL: http://arxiv.org/abs/2410.12583v1
- Date: Wed, 16 Oct 2024 14:01:22 GMT
- Title: STRUX: An LLM for Decision-Making with Structured Explanations
- Authors: Yiming Lu, Yebowen Hu, Hassan Foroosh, Wei Jin, Fei Liu
- Abstract summary: We introduce a new framework called STRUX, which enhances LLM decision-making by providing structured explanations.
STRUX begins by distilling lengthy information into a concise table of key facts.
It then employs a series of self-reflection steps to determine which of these facts are pivotal, categorizing them as either favorable or adverse in relation to a specific decision.
- Score: 17.518955158367305
- Abstract: Countless decisions shape our daily lives, and it is paramount to understand the how and why behind these choices. In this paper, we introduce a new LLM decision-making framework called STRUX, which enhances LLM decision-making by providing structured explanations. These include favorable and adverse facts related to the decision, along with their respective strengths. STRUX begins by distilling lengthy information into a concise table of key facts. It then employs a series of self-reflection steps to determine which of these facts are pivotal, categorizing them as either favorable or adverse in relation to a specific decision. Lastly, we fine-tune an LLM to identify and prioritize these key facts to optimize decision-making. STRUX has been evaluated on the challenging task of forecasting stock investment decisions based on earnings call transcripts and demonstrated superior performance against strong baselines. It enhances decision transparency by allowing users to understand the impact of different factors, representing a meaningful step towards practical decision-making with LLMs.
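The abstract describes a three-step pipeline: distill the transcript into a table of key facts, self-reflect to label each fact as favorable or adverse with a strength, then have a fine-tuned LLM weigh the structured facts to reach a decision. Below is a minimal Python sketch of that flow, assuming a generic LLM callable; the prompts, the KeyFact class, and the 1-5 strength scale are illustrative placeholders rather than the authors' implementation.

    from dataclasses import dataclass
    from typing import Callable, List

    # Hypothetical LLM interface: any callable mapping a prompt string to a completion.
    LLM = Callable[[str], str]

    @dataclass
    class KeyFact:
        text: str        # one distilled fact from the transcript
        stance: str      # "favorable" or "adverse" toward the candidate decision
        strength: int    # assumed 1 (weak) to 5 (strong)

    def distill_facts(llm: LLM, transcript: str) -> List[str]:
        """Step 1: compress a lengthy transcript into a concise list of key facts."""
        prompt = ("Summarize the following earnings call transcript as a list of "
                  "short, self-contained key facts, one per line:\n\n" + transcript)
        return [line.strip() for line in llm(prompt).splitlines() if line.strip()]

    def reflect_on_fact(llm: LLM, fact: str, decision: str) -> KeyFact:
        """Step 2: self-reflection labels a fact as favorable or adverse, with strength."""
        prompt = (f"Decision under consideration: {decision}\n"
                  f"Fact: {fact}\n"
                  "Is this fact favorable or adverse to the decision, and how strong is "
                  "it on a 1-5 scale? Answer exactly as '<favorable|adverse>, <1-5>'.")
        stance, strength = llm(prompt).split(",")
        return KeyFact(fact, stance.strip().lower(), int(strength))

    def decide(llm: LLM, facts: List[KeyFact], decision: str) -> str:
        """Step 3: a (fine-tuned) LLM prioritizes the structured facts and gives a verdict."""
        table = "\n".join(f"{f.stance} (strength {f.strength}): {f.text}" for f in facts)
        prompt = (f"Structured facts:\n{table}\n"
                  f"Should we proceed with the decision '{decision}'? Answer yes or no "
                  "and cite the pivotal facts.")
        return llm(prompt)

The structured fact table is what makes the final decision inspectable: a user can see which favorable or adverse facts, at which strengths, drove the verdict.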
Related papers
- Embodied Agent Interface: Benchmarking LLMs for Embodied Decision Making [85.24399869971236]
We aim to evaluate Large Language Models (LLMs) for embodied decision making.
Existing evaluations tend to rely solely on a final success rate.
We propose a generalized interface (Embodied Agent Interface) that supports the formalization of various types of tasks.
arXiv Detail & Related papers (2024-10-09T17:59:00Z)
- Alignment Between the Decision-Making Logic of LLMs and Human Cognition: A Case Study on Legal LLMs [43.67312098562139]
This paper presents a method to evaluate the alignment between the decision-making logic of Large Language Models and human cognition.
We quantify the interactions encoded by the LLM as primitive decision-making logic.
Experiments show that even when the language generation results appear correct, a significant portion of the internal inference logic contains notable issues.
arXiv Detail & Related papers (2024-10-06T08:33:39Z)
- DeFine: Enhancing LLM Decision-Making with Factor Profiles and Analogical Reasoning [35.9909472797192]
We introduce DeFine, a new framework that constructs probabilistic factor profiles from complex scenarios.
DeFine then integrates these profiles with analogical reasoning, leveraging insights from similar past experiences.
This approach is particularly useful in fields such as medical consultations, negotiations, and political debates.
arXiv Detail & Related papers (2024-10-02T17:29:34Z)
- Cognitive LLMs: Towards Integrating Cognitive Architectures and Large Language Models for Manufacturing Decision-making [51.737762570776006]
LLM-ACTR is a novel neuro-symbolic architecture that provides human-aligned and versatile decision-making.
Our framework extracts and embeds knowledge of ACT-R's internal decision-making process as latent neural representations.
Our experiments on novel Design for Manufacturing tasks show both improved task performance and improved grounded decision-making capability.
arXiv Detail & Related papers (2024-08-17T11:49:53Z)
- Understanding the Relationship between Prompts and Response Uncertainty in Large Language Models [55.332004960574004]
Large language models (LLMs) are widely used in decision-making, but their reliability, especially in critical tasks like healthcare, is not well-established.
This paper investigates how the uncertainty of responses generated by LLMs relates to the information provided in the input prompt.
We propose a prompt-response concept model that explains how LLMs generate responses and helps understand the relationship between prompts and response uncertainty.
arXiv Detail & Related papers (2024-07-20T11:19:58Z)
- Argumentative Large Language Models for Explainable and Contestable Decision-Making [13.045050015831903]
Large language models (LLMs) are a promising candidate for use in decision-making.
However, they are limited by their inability to reliably provide explainable and contestable outputs.
We introduce argumentative LLMs, a method utilising LLMs to construct argumentation frameworks.
We demonstrate the effectiveness of argumentative LLMs experimentally in the decision-making task of claim verification.
arXiv Detail & Related papers (2024-05-03T13:12:28Z)
- Determinants of LLM-assisted Decision-Making [0.0]
Large Language Models (LLMs) provide multifaceted support in enhancing human decision-making processes.
This study provides a structural overview and detailed analysis of determinants impacting decision-making with LLM support.
Our findings can be seen as crucial for improving decision quality in human-AI collaboration.
arXiv Detail & Related papers (2024-02-27T10:24:50Z)
- FaithLM: Towards Faithful Explanations for Large Language Models [67.29893340289779]
Large Language Models (LLMs) have become proficient in addressing complex tasks by leveraging their internal knowledge and reasoning capabilities.
The black-box nature of these models complicates the task of explaining their decision-making processes.
We introduce FaithLM to explain the decisions of LLMs with natural language (NL) explanations.
arXiv Detail & Related papers (2024-02-07T09:09:14Z)
- Survey on Factuality in Large Language Models: Knowledge, Retrieval and Domain-Specificity [61.54815512469125]
This survey addresses the crucial issue of factuality in Large Language Models (LLMs).
As LLMs find applications across diverse domains, the reliability and accuracy of their outputs become vital.
arXiv Detail & Related papers (2023-10-11T14:18:03Z)
- Rational Decision-Making Agent with Internalized Utility Judgment [91.80700126895927]
Large language models (LLMs) have demonstrated remarkable advancements and have attracted significant efforts to develop LLMs into agents capable of executing intricate multi-step decision-making tasks beyond traditional NLP applications.
This paper proposes RadAgent, which fosters the development of its rationality through an iterative framework involving Experience Exploration and Utility Learning.
Experimental results on the ToolBench dataset demonstrate RadAgent's superiority over baselines, achieving over 10% improvement in Pass Rate on diverse tasks.
arXiv Detail & Related papers (2023-08-24T03:11:45Z)
This list is automatically generated from the titles and abstracts of the papers on this site.