LogicNet: A Logical Consistency Embedded Face Attribute Learning Network
- URL: http://arxiv.org/abs/2311.11208v2
- Date: Sat, 21 Sep 2024 23:17:45 GMT
- Title: LogicNet: A Logical Consistency Embedded Face Attribute Learning Network
- Authors: Haiyu Wu, Sicong Tian, Huayu Li, Kevin W. Bowyer
- Abstract summary: We introduce two pressing challenges to the field: How can we ensure that a model, when trained with data checked for logical consistency, yields predictions that are logically consistent?
We propose LogicNet, an adversarial training framework that learns the logical relationships between attributes.
In real-world case analysis, our approach can achieve a reduction of more than 50% in the average number of failed cases compared to other methods.
- Score: 6.202893610948486
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Ensuring logical consistency in predictions is a crucial yet overlooked aspect in multi-attribute classification. We explore the potential reasons for this oversight and introduce two pressing challenges to the field: 1) How can we ensure that a model, when trained with data checked for logical consistency, yields predictions that are logically consistent? 2) How can we achieve the same with data that hasn't undergone logical consistency checks? Minimizing manual effort is also essential for enhancing automation. To address these challenges, we introduce two datasets, FH41K and CelebA-logic, and propose LogicNet, an adversarial training framework that learns the logical relationships between attributes. Accuracy of LogicNet surpasses that of the next-best approach by 23.05%, 9.96%, and 1.71% on FH37K, FH41K, and CelebA-logic, respectively. In real-world case analysis, our approach can achieve a reduction of more than 50% in the average number of failed cases compared to other methods.
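The core problem the abstract describes, logical consistency among multi-attribute predictions, can be illustrated with a minimal post-hoc checker. This is a hedged sketch, not the authors' LogicNet code: the attribute names, implication rules, and exclusivity groups below are hypothetical examples in the spirit of facial-attribute logic.

```python
# Minimal sketch (not the LogicNet implementation) of a logical-consistency
# check over binary attribute predictions. Rules and attributes are
# hypothetical illustrations.

# Each rule (a, b) encodes "attribute a = True implies attribute b = True".
RULES = [
    ("goatee", "facial_hair"),      # a goatee is a kind of facial hair
    ("full_beard", "facial_hair"),
]

# Mutually exclusive attribute groups: at most one may be True.
EXCLUSIVE = [
    {"clean_shaven", "facial_hair"},
]

def violations(pred: dict) -> list:
    """Return a list of logical-consistency violations in a prediction."""
    out = []
    for a, b in RULES:
        if pred.get(a) and not pred.get(b):
            out.append(f"{a} implies {b}")
    for group in EXCLUSIVE:
        if sum(bool(pred.get(x)) for x in group) > 1:
            out.append(f"mutually exclusive: {sorted(group)}")
    return out

pred = {"goatee": True, "facial_hair": False, "clean_shaven": True}
print(violations(pred))  # → ['goatee implies facial_hair']
```

A model trained without consistency constraints can easily emit predictions like the one above; LogicNet's contribution is to make the model itself learn such inter-attribute relationships rather than relying on post-hoc filtering.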
Related papers
- ReasoningV: Efficient Verilog Code Generation with Adaptive Hybrid Reasoning Model [7.798551697095774]
ReasoningV is a novel model that integrates trained intrinsic capabilities with dynamic inference adaptation for Verilog code generation.
Our framework introduces three complementary innovations: ReasoningV-5K, a high-quality dataset of 5,000 functionally verified instances with reasoning paths created through multi-dimensional filtering of PyraNet samples.
Experimental results demonstrate ReasoningV's effectiveness with a pass@1 accuracy of 57.8% on VerilogEval-human.
arXiv Detail & Related papers (2025-04-20T10:16:59Z) - LogicTree: Structured Proof Exploration for Coherent and Rigorous Logical Reasoning with Large Language Models [7.967925911756304]
LogicTree is an inference-time modular framework employing algorithm-guided search to automate structured proof exploration.
We introduce two LLM-free heuristics for premise prioritization, enabling strategic proof search.
Within LogicTree, GPT-4o outperforms o3-mini by 7.6% on average.
arXiv Detail & Related papers (2025-04-18T22:10:02Z) - Critical Tokens Matter: Token-Level Contrastive Estimation Enhances LLM's Reasoning Capability [53.51560766150442]
Critical tokens are elements within reasoning trajectories that significantly influence incorrect outcomes.
We present a novel framework for identifying these tokens through rollout sampling.
We show that identifying and replacing critical tokens significantly improves model accuracy.
arXiv Detail & Related papers (2024-11-29T18:58:22Z) - NL2FOL: Translating Natural Language to First-Order Logic for Logical Fallacy Detection [45.28949266878263]
We design a process to reliably detect logical fallacies by translating natural language to First-order Logic.
We then utilize Satisfiability Modulo Theory (SMT) solvers to reason about the validity of the formula.
Our approach is robust, interpretable and does not require training data or fine-tuning.
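The validity check NL2FOL delegates to an SMT solver can be illustrated in miniature. This is a simplified stand-in, not the paper's pipeline: it brute-forces a propositional (rather than first-order) validity check, which conveys the same idea that a logical fallacy corresponds to an invalid formula.

```python
from itertools import product

# Simplified stand-in (not the NL2FOL pipeline): brute-force propositional
# validity check. An SMT solver would instead check that the formula's
# negation is unsatisfiable.

def is_valid(formula, variables):
    """True iff `formula` holds under every truth assignment."""
    for values in product([False, True], repeat=len(variables)):
        if not formula(dict(zip(variables, values))):
            return False
    return True

# "Affirming the consequent": ((p -> q) and q) -> p -- a classic fallacy.
fallacy = lambda v: not ((not v["p"] or v["q"]) and v["q"]) or v["p"]
# Modus ponens: ((p -> q) and p) -> q -- a valid inference.
modus_ponens = lambda v: not ((not v["p"] or v["q"]) and v["p"]) or v["q"]

print(is_valid(fallacy, ["p", "q"]))       # False: the fallacy is invalid
print(is_valid(modus_ponens, ["p", "q"]))  # True: modus ponens is valid
```

In the full approach, a real SMT solver handles quantifiers and theory reasoning that this exhaustive enumeration cannot.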
arXiv Detail & Related papers (2024-04-18T00:20:48Z) - Logic-induced Diagnostic Reasoning for Semi-supervised Semantic Segmentation [85.12429517510311]
LogicDiag is a neural-logic semi-supervised learning framework for semantic segmentation.
Our key insight is that conflicts within pseudo labels, identified through symbolic knowledge, can serve as strong yet commonly ignored learning signals.
We showcase the practical application of LogicDiag in the data-hungry segmentation scenario, where we formalize the structured abstraction of semantic concepts as a set of logic rules.
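The idea of turning pseudo-label conflicts into learning signals can be sketched concretely. This is an illustrative toy, not the LogicDiag implementation: the classes and the composition rule ("a rider pixel should touch a bicycle pixel") are hypothetical, and flagged pixels would be treated as unreliable pseudo labels.

```python
# Illustrative sketch (not LogicDiag): detect pseudo-label conflicts in a
# segmentation map via a symbolic rule. Classes and rule are hypothetical.

RIDER, BICYCLE, ROAD = 1, 2, 3

def conflict_mask(labels):
    """Mark rider pixels with no 4-adjacent bicycle pixel as conflicts."""
    h, w = len(labels), len(labels[0])
    conflicts = [[False] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            if labels[i][j] != RIDER:
                continue
            neigh = [(i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)]
            ok = any(0 <= a < h and 0 <= b < w and labels[a][b] == BICYCLE
                     for a, b in neigh)
            conflicts[i][j] = not ok
    return conflicts

labels = [[ROAD, RIDER, BICYCLE],
          [ROAD, RIDER, ROAD]]
print(conflict_mask(labels))  # only the isolated rider pixel at (1,1) is flagged
```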
arXiv Detail & Related papers (2023-08-24T06:50:07Z) - Investigating the Robustness of Natural Language Generation from Logical Forms via Counterfactual Samples [30.079030298066847]
State-of-the-art methods based on pre-trained models have achieved remarkable performance on the standard test dataset.
We question whether these methods really learn how to perform logical reasoning, rather than just relying on the spurious correlations between the headers of the tables and operators of the logical form.
We propose two approaches to reduce the model's reliance on the shortcut.
arXiv Detail & Related papers (2022-10-16T14:14:53Z) - MERIt: Meta-Path Guided Contrastive Learning for Logical Reasoning [63.50909998372667]
We propose MERIt, a MEta-path guided contrastive learning method for logical ReasonIng of text.
Two novel strategies serve as indispensable components of our method.
arXiv Detail & Related papers (2022-03-01T11:13:00Z) - Logical Credal Networks [87.25387518070411]
This paper introduces Logical Credal Networks, an expressive probabilistic logic that generalizes many prior models that combine logic and probability.
We investigate its performance on maximum a posteriori inference tasks, including solving Mastermind games with uncertainty and detecting credit card fraud.
arXiv Detail & Related papers (2021-09-25T00:00:47Z) - Logic-Consistency Text Generation from Semantic Parses [32.543257899910216]
This paper first proposes SNOWBALL, a framework for logic consistent text generation from semantic parses.
Second, we propose a novel automatic metric, BLEC, for evaluating the logical consistency between the semantic parses and generated texts.
arXiv Detail & Related papers (2021-08-02T01:12:18Z) - Logic-Driven Context Extension and Data Augmentation for Logical Reasoning of Text [65.24325614642223]
We propose to understand logical symbols and expressions in the text to arrive at the answer.
Based on such logical information, we put forward a context extension framework and a data augmentation algorithm.
Our method achieves the state-of-the-art performance, and both logic-driven context extension framework and data augmentation algorithm can help improve the accuracy.
arXiv Detail & Related papers (2021-05-08T10:09:36Z) - Logic-Guided Data Augmentation and Regularization for Consistent Question Answering [55.05667583529711]
This paper addresses the problem of improving the accuracy and consistency of responses to comparison questions.
Our method leverages logical and linguistic knowledge to augment labeled training data and then uses a consistency-based regularizer to train the model.
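A logic-guided augmentation of this kind can be sketched for symmetric comparison questions. This is a hedged illustration, not the paper's code: the flip rule and the question template are hypothetical, showing only the core idea that swapping the two compared entities negates the gold answer.

```python
# Hedged sketch (not the paper's method): logic-guided augmentation for
# comparison QA. Swapping the two entities in a comparison negates the
# yes/no answer, yielding a logically entailed training example.

def augment(question, a, b, answer):
    """Generate the mirror example of a comparison QA pair."""
    # Use a placeholder so swapping a and b does not clobber either string.
    flipped_q = question.replace(a, "\x00").replace(b, a).replace("\x00", b)
    flipped_ans = "no" if answer == "yes" else "yes"
    return flipped_q, flipped_ans

q, ans = augment("Is the oak taller than the shrub?", "oak", "shrub", "yes")
print(q)    # → Is the shrub taller than the oak?
print(ans)  # → no
```

A consistency regularizer, as the summary describes, would then penalize the model for answering the original and mirrored questions inconsistently.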
arXiv Detail & Related papers (2020-04-21T17:03:08Z) - Evaluating Logical Generalization in Graph Neural Networks [59.70452462833374]
We study the task of logical generalization using graph neural networks (GNNs).
Our benchmark suite, GraphLog, requires that learning algorithms perform rule induction in different synthetic logics.
We find that the ability for models to generalize and adapt is strongly determined by the diversity of the logical rules they encounter during training.
arXiv Detail & Related papers (2020-03-14T05:45:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.