Zero-Shot Classification by Logical Reasoning on Natural Language
Explanations
- URL: http://arxiv.org/abs/2211.03252v2
- Date: Thu, 25 May 2023 06:01:12 GMT
- Title: Zero-Shot Classification by Logical Reasoning on Natural Language
Explanations
- Authors: Chi Han, Hengzhi Pei, Xinya Du, Heng Ji
- Abstract summary: We propose the framework CLORE (Classification by LOgical Reasoning on Explanations)
CLORE parses explanations into logical structures and then explicitly reasons along these structures on the input to produce a classification score.
We also demonstrate that our framework can be extended to zero-shot classification on the visual modality.
- Score: 56.42922904777717
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Humans can classify data of an unseen category by reasoning on its language
explanations. This ability is owing to the compositional nature of language: we
can combine previously seen attributes to describe the new category. For
example, we might describe a sage thrasher as "it has a slim straight
relatively short bill, yellow eyes and a long tail", so that others can use
their knowledge of attributes "slim straight relatively short bill", "yellow
eyes" and "long tail" to recognize a sage thrasher. Inspired by this
observation, in this work we tackle the zero-shot classification task by logically
parsing and reasoning on natural language explanations. To this end, we
propose the framework CLORE (Classification by LOgical Reasoning on
Explanations). While previous methods usually regard textual information as
implicit features, CLORE parses explanations into logical structures and then
explicitly reasons along these structures on the input to produce a
classification score. Experimental results on explanation-based zero-shot
classification benchmarks demonstrate that CLORE is superior to baselines,
which we further show mainly comes from higher scores on tasks requiring more
logical reasoning. We also demonstrate that our framework can be extended to
zero-shot classification on the visual modality. Alongside classification
decisions, CLORE can provide the logical parsing and reasoning process as a
clear form of rationale. Through empirical analysis we demonstrate that CLORE
is also less affected by linguistic biases than baselines.
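The pipeline described above (parse an explanation into attribute clauses, score each clause against the input, then combine the clause scores logically) can be sketched as follows. This is an illustrative toy example, not the authors' implementation: the clause parser is a naive comma/"and" splitter, and simple word overlap stands in for CLORE's learned attribute-matching module.

```python
# Toy sketch of the CLORE idea: explanation -> attribute clauses -> per-clause
# matching scores -> soft logical AND -> classification score.

def parse_explanation(explanation: str) -> list[str]:
    """Split an explanation into attribute clauses (naive comma/'and' split)."""
    clauses = []
    for part in explanation.replace(" and ", ",").split(","):
        part = part.strip()
        if part:
            clauses.append(part)
    return clauses

def clause_score(clause: str, text: str) -> float:
    """Fraction of clause words appearing in the input.

    A crude stand-in for a learned attribute matcher (e.g. embedding similarity).
    """
    words = clause.lower().split()
    hits = sum(w in text.lower() for w in words)
    return hits / len(words) if words else 0.0

def classify_score(explanation: str, text: str) -> float:
    """Combine clause scores with a soft logical AND (product of scores)."""
    score = 1.0
    for clause in parse_explanation(explanation):
        score *= clause_score(clause, text)
    return score
```

For example, scoring the sage-thrasher explanation from the abstract against an input description rewards inputs that satisfy every attribute clause; because the combination is a product (a soft AND), a single unmatched attribute pulls the whole classification score toward zero, which mirrors the explicit logical reasoning the abstract describes.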
Related papers
- P-FOLIO: Evaluating and Improving Logical Reasoning with Abundant Human-Written Reasoning Chains [97.25943550933829]
We present P-FOLIO, a human-annotated dataset consisting of diverse and complex reasoning chains.
We use P-FOLIO to evaluate and improve large-language-model (LLM) reasoning capabilities.
arXiv Detail & Related papers (2024-10-11T19:22:57Z) - LogicBench: Towards Systematic Evaluation of Logical Reasoning Ability of Large Language Models [52.03659714625452]
Recently developed large language models (LLMs) have been shown to perform remarkably well on a wide range of language understanding tasks.
But, can they really "reason" over the natural language?
This question has been receiving significant research attention and many reasoning skills such as commonsense, numerical, and qualitative have been studied.
arXiv Detail & Related papers (2024-04-23T21:08:49Z) - FLamE: Few-shot Learning from Natural Language Explanations [12.496665033682202]
We present FLamE, a framework for learning from natural language explanations.
Experiments on natural language inference demonstrate effectiveness over strong baselines.
Human evaluation surprisingly reveals that the majority of generated explanations do not adequately justify classification decisions.
arXiv Detail & Related papers (2023-06-13T18:01:46Z) - Extending Logic Explained Networks to Text Classification [5.289706151390118]
We propose LENp, improving local explanations by perturbing input words, and we test it on text classification.
Our results show that (i) LENp provides better local explanations than LIME in terms of sensitivity and faithfulness, and (ii) logic explanations are more useful and user-friendly than feature scoring provided by LIME as attested by a human survey.
arXiv Detail & Related papers (2022-11-04T16:12:03Z) - Machine Reading, Fast and Slow: When Do Models "Understand" Language? [59.897515617661874]
We investigate the behavior of reading comprehension models with respect to two linguistic 'skills': coreference resolution and comparison.
We find that for comparison (but not coreference) the systems based on larger encoders are more likely to rely on the 'right' information.
arXiv Detail & Related papers (2022-09-15T16:25:44Z) - CLUES: A Benchmark for Learning Classifiers using Natural Language
Explanations [12.278877764015725]
Supervised learning has traditionally focused on inductive learning by observing labeled examples of a task.
In contrast, humans have the ability to learn new concepts from language.
We introduce CLUES, a benchmark for learning using natural language ExplanationS.
CLUES consists of 36 real-world and 144 synthetic classification tasks.
arXiv Detail & Related papers (2022-04-14T17:54:46Z) - Fact-driven Logical Reasoning for Machine Reading Comprehension [82.58857437343974]
We are motivated to cover both commonsense and temporal knowledge clues hierarchically.
Specifically, we propose a general formalism of knowledge units by extracting backbone constituents of the sentence.
We then construct a supergraph on top of the fact units, allowing for the benefit of sentence-level (relations among fact groups) and entity-level interactions.
arXiv Detail & Related papers (2021-05-21T13:11:13Z) - LOREN: Logic Enhanced Neural Reasoning for Fact Verification [24.768868510218002]
We propose LOREN, a novel approach for fact verification that integrates Logic guided Reasoning and Neural inference.
Instead of directly validating a single reasoning unit, LOREN turns it into a question-answering task.
Experiments show that our proposed LOREN outperforms previously published methods and achieves a FEVER score of 73.43%.
arXiv Detail & Related papers (2020-12-25T13:57:04Z) - Natural Language Rationales with Full-Stack Visual Reasoning: From
Pixels to Semantic Frames to Commonsense Graphs [106.15931418425906]
We present the first study focused on generating natural language rationales across several complex visual reasoning tasks.
We present RationaleVT Transformer, an integrated model that learns to generate free-text rationales by combining pretrained language models with object recognition, grounded visual semantic frames, and visual commonsense graphs.
Our experiments show that the base pretrained language model benefits from visual adaptation and that free-text rationalization is a promising research direction to complement model interpretability for complex visual-textual reasoning tasks.
arXiv Detail & Related papers (2020-10-15T05:08:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.