Enhancing Ethical Explanations of Large Language Models through
Iterative Symbolic Refinement
- URL: http://arxiv.org/abs/2402.00745v1
- Date: Thu, 1 Feb 2024 16:39:51 GMT
- Title: Enhancing Ethical Explanations of Large Language Models through
Iterative Symbolic Refinement
- Authors: Xin Quan, Marco Valentino, Louise A. Dennis, André Freitas
- Abstract summary: This paper investigates how hybrid neuro-symbolic techniques can enhance the logical validity and alignment of ethical explanations.
We present an abductive-deductive framework named Logic-Explainer, which integrates Large Language Models with an external backward-chaining solver.
An empirical analysis demonstrates that Logic-Explainer can improve explanations generated via in-context learning methods and Chain-of-Thought.
- Score: 5.108863224378874
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: An increasing amount of research in Natural Language Inference (NLI) focuses
on the application and evaluation of Large Language Models (LLMs) and their
reasoning capabilities. Despite their success, however, LLMs are still prone to
factual errors and inconsistencies in their explanations, offering limited
control and interpretability for inference in complex domains. In this paper,
we focus on ethical NLI, investigating how hybrid neuro-symbolic techniques can
enhance the logical validity and alignment of ethical explanations produced by
LLMs. Specifically, we present an abductive-deductive framework named
Logic-Explainer, which integrates LLMs with an external backward-chaining
solver to refine step-wise natural language explanations and jointly verify
their correctness, reduce incompleteness and minimise redundancy. An extensive
empirical analysis demonstrates that Logic-Explainer can improve explanations
generated via in-context learning methods and Chain-of-Thought (CoT) on
challenging ethical NLI tasks, while, at the same time, producing formal proofs
describing and supporting models' reasoning. As ethical NLI requires
commonsense reasoning to identify underlying moral violations, our results
suggest the effectiveness of neuro-symbolic methods for multi-step NLI more
broadly, opening new opportunities to enhance the logical consistency,
reliability, and alignment of LLMs.
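
To make the framework's loop concrete, here is a minimal sketch of the abductive-deductive cycle the abstract describes: a backward-chaining solver attempts a formal proof of the ethical conclusion (deduction), and when the proof fails, a missing premise is proposed and the proof is retried (abduction). This is an illustrative assumption, not the authors' implementation: all identifiers are hypothetical, and the abductive step, which Logic-Explainer delegates to an LLM, is stubbed with a hard-coded fact.

```python
# Minimal sketch of the abductive-deductive refinement loop described in the
# abstract. Hypothetical names throughout; not the authors' implementation.

from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    head: str         # conclusion, e.g. "morally_wrong"
    body: tuple       # premises that must all be provable

def backward_chain(goal, facts, rules, depth=10):
    """Deductive step: try to prove `goal` from `facts` via backward chaining."""
    if depth == 0:
        return False
    if goal in facts:
        return True
    return any(
        rule.head == goal
        and all(backward_chain(p, facts, rules, depth - 1) for p in rule.body)
        for rule in rules
    )

def propose_missing_premise(goal, facts):
    """Abductive step: stand-in for the LLM, which in Logic-Explainer proposes
    a plausible missing premise whenever the proof attempt fails."""
    return "causes_harm"  # hard-coded here; an LLM call in the real framework

def refine_explanation(goal, facts, rules, max_iters=3):
    """Alternate deduction and abduction until the solver finds a proof."""
    facts = set(facts)
    for _ in range(max_iters):
        if backward_chain(goal, facts, rules):
            return facts  # the explanation now supports a formal proof
        facts.add(propose_missing_premise(goal, facts))
    return None  # no proof found within the iteration budget

# Toy ethical-NLI example: the conclusion holds only once the missing
# premise "causes_harm" is abduced and added to the explanation.
rules = [Rule("morally_wrong", ("violates_norm", "causes_harm"))]
print(refine_explanation("morally_wrong", {"violates_norm"}, rules))
```

The abstract also mentions minimising redundancy; one plausible way to do so would be to drop facts that the final proof never uses, a step omitted from this sketch for brevity.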
Related papers
- LogiDynamics: Unraveling the Dynamics of Logical Inference in Large Language Model Reasoning [49.58786377307728]
This paper adopts an exploratory approach by introducing a controlled evaluation environment for analogical reasoning.
We analyze the comparative dynamics of inductive, abductive, and deductive inference pipelines.
We investigate advanced paradigms such as hypothesis selection, verification, and refinement, revealing their potential to scale up logical inference.
arXiv Detail & Related papers (2025-02-16T15:54:53Z)
- Advancing Reasoning in Large Language Models: Promising Methods and Approaches [0.0]
Large Language Models (LLMs) have succeeded remarkably in various natural language processing (NLP) tasks.
Their ability to perform complex reasoning (spanning logical deduction, mathematical problem-solving, commonsense inference, and multi-step reasoning) often falls short of human expectations.
This survey provides a comprehensive review of emerging techniques enhancing reasoning in LLMs.
arXiv Detail & Related papers (2025-02-05T23:31:39Z)
- JustLogic: A Comprehensive Benchmark for Evaluating Deductive Reasoning in Large Language Models [51.99046112135311]
We introduce JustLogic, a synthetically generated deductive reasoning benchmark for rigorous evaluation of Large Language Models.
JustLogic is highly complex, capable of generating a diverse range of linguistic patterns, vocabulary, and argument structures.
Our experimental results reveal that most state-of-the-art (SOTA) LLMs perform significantly worse than the human average.
arXiv Detail & Related papers (2025-01-24T15:49:10Z)
- Enhancing Logical Reasoning in Large Language Models through Graph-based Synthetic Data [53.433309883370974]
This work explores the potential and limitations of using graph-based synthetic reasoning data as training signals to enhance Large Language Models' reasoning capabilities.
Our experiments, conducted on two established natural language reasoning tasks, demonstrate that supervised fine-tuning with synthetic graph-based reasoning data effectively enhances LLMs' reasoning performance without compromising their effectiveness on other standard evaluation benchmarks.
arXiv Detail & Related papers (2024-09-19T03:39:09Z)
- Inductive Learning of Logical Theories with LLMs: An Expressivity-Graded Analysis [9.865771016218549]
This work presents a novel systematic methodology for analysing the capabilities and limitations of Large Language Models (LLMs) in the inductive learning of logical theories.
The analysis is complexity-graded w.r.t. rule dependency structure, allowing the impact of specific inference challenges on LLM performance to be quantified.
arXiv Detail & Related papers (2024-08-15T16:41:00Z)
- Verification and Refinement of Natural Language Explanations through LLM-Symbolic Theorem Proving [13.485604499678262]
This paper investigates the verification and refinement of natural language explanations through the integration of Large Language Models (LLMs) and Theorem Provers (TPs).
We present a neuro-symbolic framework, named Explanation-Refiner, that integrates TPs with LLMs to generate and formalise explanatory sentences.
In turn, the TP is employed to provide formal guarantees on the logical validity of the explanations and to generate feedback for subsequent improvements.
arXiv Detail & Related papers (2024-05-02T15:20:01Z)
- From Understanding to Utilization: A Survey on Explainability for Large Language Models [27.295767173801426]
This survey underscores the imperative for increased explainability in Large Language Models (LLMs).
Our focus is primarily on pre-trained Transformer-based LLMs, which pose distinctive interpretability challenges due to their scale and complexity.
When considering the utilization of explainability, we explore several compelling methods that concentrate on model editing, control generation, and model enhancement.
arXiv Detail & Related papers (2024-01-23T16:09:53Z)
- Towards LogiGLUE: A Brief Survey and A Benchmark for Analyzing Logical Reasoning Capabilities of Language Models [56.34029644009297]
Large language models (LLMs) have demonstrated the ability to overcome various limitations of formal Knowledge Representation (KR) systems.
LLMs excel most in abductive reasoning, followed by deductive reasoning, while they are least effective at inductive reasoning.
We study single-task training, multi-task training, and a "chain-of-thought" knowledge distillation fine-tuning technique to assess model performance.
arXiv Detail & Related papers (2023-10-02T01:00:50Z)
- Large Language Models are In-Context Semantic Reasoners rather than Symbolic Reasoners [75.85554779782048]
Large Language Models (LLMs) have excited the natural language and machine learning community over recent years.
Despite numerous successful applications, the underlying mechanism of such in-context capabilities remains unclear.
In this work, we hypothesize that the learned semantics of language tokens do most of the heavy lifting during the reasoning process.
arXiv Detail & Related papers (2023-05-24T07:33:34Z)
- ChatABL: Abductive Learning via Natural Language Interaction with ChatGPT [72.83383437501577]
Large language models (LLMs) have recently demonstrated significant potential in mathematical abilities.
LLMs currently have difficulty bridging perception, language understanding, and reasoning capabilities.
This paper presents a novel method for integrating LLMs into the abductive learning framework.
arXiv Detail & Related papers (2023-04-21T16:23:47Z)