Logic-Scaffolding: Personalized Aspect-Instructed Recommendation Explanation Generation using LLMs
- URL: http://arxiv.org/abs/2312.14345v2
- Date: Wed, 17 Jan 2024 22:05:50 GMT
- Title: Logic-Scaffolding: Personalized Aspect-Instructed Recommendation Explanation Generation using LLMs
- Authors: Behnam Rahdari, Hao Ding, Ziwei Fan, Yifei Ma, Zhuotong Chen, Anoop Deoras and Branislav Kveton
- Abstract summary: We propose a framework called Logic-Scaffolding that combines the ideas of aspect-based explanation and chain-of-thought prompting to generate explanations through intermediate reasoning steps.
In this paper, we share our experience in building the framework and present an interactive demonstration for exploring our results.
- Score: 20.446594942586604
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The unique capabilities of Large Language Models (LLMs), such as their natural language text generation ability, position them as strong candidates for providing explanations for recommendations. However, regardless of their size, most existing models struggle to produce zero-shot explanations reliably. To address this issue, we propose a framework called Logic-Scaffolding that combines the ideas of aspect-based explanation and chain-of-thought prompting to generate explanations through intermediate reasoning steps. In this paper, we share our experience in building the framework and present an interactive demonstration for exploring our results.
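To make the idea concrete, the following is a minimal sketch of how an aspect-instructed chain-of-thought prompt for a recommendation explanation might be assembled. The aspect list, prompt wording, and the `llm_generate` callable are illustrative assumptions for this sketch, not the authors' actual prompts or implementation.

```python
# Illustrative sketch only: the aspect list, prompt wording, and the
# `llm_generate` callable are assumptions, not the paper's implementation.

ASPECTS = ["acting", "storyline", "soundtrack"]  # example item aspects


def build_scaffolded_prompt(user_history, item, aspect):
    """Compose a chain-of-thought prompt that asks the LLM to reason
    over a single aspect before writing the final explanation."""
    return (
        f"The user previously enjoyed: {', '.join(user_history)}.\n"
        f"Recommended item: {item}.\n"
        f"Step 1: Describe the item's {aspect}.\n"
        f"Step 2: Relate its {aspect} to the user's history.\n"
        "Step 3: Write a short, personalized explanation of why the user "
        "may enjoy this item, grounded in the reasoning above."
    )


def explain(llm_generate, user_history, item):
    """Return one aspect-grounded explanation per aspect.

    `llm_generate` is any callable mapping a prompt string to generated
    text, e.g. a thin wrapper around an LLM API of your choice.
    """
    return {
        aspect: llm_generate(build_scaffolded_prompt(user_history, item, aspect))
        for aspect in ASPECTS
    }
```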
Related papers
- PromptExp: Multi-granularity Prompt Explanation of Large Language Models [16.259208045898415]
We introduce PromptExp, a framework for multi-granularity prompt explanations by aggregating token-level insights.
PromptExp supports both white-box and black-box explanations and extends explanations to higher granularity levels.
We evaluate PromptExp in case studies such as sentiment analysis, showing the perturbation-based approach performs best.
arXiv Detail & Related papers (2024-10-16T22:25:15Z)
- Thought-Like-Pro: Enhancing Reasoning of Large Language Models through Self-Driven Prolog-based Chain-of-Thought [31.964412924094656]
Large language models (LLMs) have shown exceptional performance as general-purpose assistants.
We introduce a novel learning framework, THOUGHT-LIKE-PRO, to facilitate learning and generalization across diverse reasoning tasks.
Our empirical findings indicate that our proposed approach substantially enhances the reasoning abilities of LLMs.
arXiv Detail & Related papers (2024-07-18T18:52:10Z)
- Verification and Refinement of Natural Language Explanations through LLM-Symbolic Theorem Proving [13.485604499678262]
This paper investigates the verification and refinement of natural language explanations through the integration of Large Language Models (LLMs) and Theorem Provers (TPs).
We present a neuro-symbolic framework, named Explanation-Refiner, that integrates TPs with LLMs to generate and formalise explanatory sentences.
In turn, the TP is employed to provide formal guarantees on the logical validity of the explanations and to generate feedback for subsequent improvements.
arXiv Detail & Related papers (2024-05-02T15:20:01Z)
- A Principled Framework for Knowledge-enhanced Large Language Model [58.1536118111993]
Large Language Models (LLMs) are versatile, yet they often falter in tasks requiring deep and reliable reasoning.
This paper introduces a rigorously designed framework for creating LLMs that effectively anchor knowledge and employ a closed-loop reasoning process.
arXiv Detail & Related papers (2023-11-18T18:10:02Z)
- RecExplainer: Aligning Large Language Models for Explaining Recommendation Models [50.74181089742969]
Large language models (LLMs) have demonstrated remarkable intelligence in understanding, reasoning, and instruction following.
This paper presents the initial exploration of using LLMs as surrogate models to explain black-box recommender models.
To facilitate an effective alignment, we introduce three methods: behavior alignment, intention alignment, and hybrid alignment.
arXiv Detail & Related papers (2023-11-18T03:05:43Z)
- Explanation-aware Soft Ensemble Empowers Large Language Model In-context Learning [50.00090601424348]
Large language models (LLMs) have shown remarkable capabilities in various natural language understanding tasks.
We propose EASE, an Explanation-Aware Soft Ensemble framework to empower in-context learning with LLMs.
arXiv Detail & Related papers (2023-11-13T06:13:38Z)
- In-Context Explainers: Harnessing LLMs for Explaining Black Box Models [28.396104334980492]
Large Language Models (LLMs) have demonstrated exceptional capabilities in complex tasks like machine translation, commonsense reasoning, and language understanding.
One of the primary reasons for the adaptability of LLMs in such diverse tasks is their in-context learning (ICL) capability, which allows them to perform well on new tasks by simply using a few task samples in the prompt.
We propose a novel framework, In-Context Explainers, comprising three approaches that exploit the ICL capabilities of LLMs to explain the predictions made by other predictive models.
arXiv Detail & Related papers (2023-10-09T15:31:03Z)
- Large Language Models as Analogical Reasoners [155.9617224350088]
Chain-of-thought (CoT) prompting for language models demonstrates impressive performance across reasoning tasks.
We introduce a new prompting approach, analogical prompting, designed to automatically guide the reasoning process of large language models.
arXiv Detail & Related papers (2023-10-03T00:57:26Z)
- Towards LLM-guided Causal Explainability for Black-box Text Classifiers [16.36602400590088]
We aim to leverage the instruction-following and textual understanding capabilities of recent Large Language Models to facilitate causal explainability.
We propose a three-step pipeline in which an off-the-shelf LLM is used to identify latent or unobserved features in the input text.
We experiment with our pipeline on multiple NLP text classification datasets, and present interesting and promising findings.
arXiv Detail & Related papers (2023-09-23T11:22:28Z)
- Explainability for Large Language Models: A Survey [59.67574757137078]
Large language models (LLMs) have demonstrated impressive capabilities in natural language processing.
This paper introduces a taxonomy of explainability techniques and provides a structured overview of methods for explaining Transformer-based language models.
arXiv Detail & Related papers (2023-09-02T22:14:26Z)
- Complementary Explanations for Effective In-Context Learning [77.83124315634386]
Large language models (LLMs) have exhibited remarkable capabilities in learning from explanations in prompts.
This work aims to better understand the mechanisms by which explanations are used for in-context learning.
arXiv Detail & Related papers (2022-11-25T04:40:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.