Logic-induced Diagnostic Reasoning for Semi-supervised Semantic
Segmentation
- URL: http://arxiv.org/abs/2308.12595v1
- Date: Thu, 24 Aug 2023 06:50:07 GMT
- Title: Logic-induced Diagnostic Reasoning for Semi-supervised Semantic
Segmentation
- Authors: Chen Liang, Wenguan Wang, Jiaxu Miao, Yi Yang
- Abstract summary: LogicDiag is a neural-logic semi-supervised learning framework for semantic segmentation.
Our key insight is that conflicts within pseudo labels, identified through symbolic knowledge, can serve as strong yet commonly ignored learning signals.
We showcase the practical application of LogicDiag in the data-hungry segmentation scenario, where we formalize the structured abstraction of semantic concepts as a set of logic rules.
- Score: 85.12429517510311
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent advances in semi-supervised semantic segmentation have been heavily
reliant on pseudo labeling to compensate for limited labeled data, disregarding
the valuable relational knowledge among semantic concepts. To bridge this gap,
we devise LogicDiag, a brand new neural-logic semi-supervised learning
framework. Our key insight is that conflicts within pseudo labels, identified
through symbolic knowledge, can serve as strong yet commonly ignored learning
signals. LogicDiag resolves such conflicts via reasoning with logic-induced
diagnoses, enabling the recovery of (potentially) erroneous pseudo labels,
ultimately alleviating the notorious error accumulation problem. We showcase
the practical application of LogicDiag in the data-hungry segmentation
scenario, where we formalize the structured abstraction of semantic concepts as
a set of logic rules. Extensive experiments on three standard semi-supervised
semantic segmentation benchmarks demonstrate the effectiveness and generality
of LogicDiag. Moreover, LogicDiag highlights the promising opportunities
arising from the systematic integration of symbolic reasoning into the
prevalent statistical, neural learning approaches.
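For intuition, here is a minimal, hypothetical sketch of what logic-induced pseudo-label diagnosis can look like for a two-level class hierarchy. The hierarchy, the `PARENT` mapping, and the confidence-based repair rule are illustrative assumptions, not the paper's actual rules or algorithm.

```python
import numpy as np

# Hypothetical two-level hierarchy: fine classes 0-3 map to coarse parents.
# 0: road, 1: sidewalk -> parent 0 (ground); 2: person, 3: rider -> parent 1 (human)
PARENT = np.array([0, 0, 1, 1])

def diagnose(fine_probs, coarse_probs):
    """Check the composition rule fine(c) -> coarse(PARENT[c]) on pseudo labels
    and repair conflicting pixels by trusting the more confident branch.

    fine_probs:   (H, W, 4) softmax output of a fine-grained head
    coarse_probs: (H, W, 2) softmax output of a coarse head
    """
    fine = fine_probs.argmax(-1)        # initial fine pseudo labels
    coarse = coarse_probs.argmax(-1)    # initial coarse pseudo labels
    conflict = PARENT[fine] != coarse   # composition rule violated

    # Diagnosis: at conflicting pixels, keep the more confident head and
    # re-derive the fine label so that the logic rule holds again.
    trust_coarse = conflict & (coarse_probs.max(-1) > fine_probs.max(-1))
    children = PARENT[None, None, :] == coarse[..., None]   # (H, W, 4)
    restricted = np.where(children, fine_probs, 0.0).argmax(-1)
    return np.where(trust_coarse, restricted, fine), conflict

# Toy usage: a single pixel where the two heads disagree.
fine_p = np.array([[[0.05, 0.05, 0.5, 0.4]]])  # argmax: person (a human class)
coarse_p = np.array([[[0.9, 0.1]]])            # argmax: ground -> conflict
labels, flagged = diagnose(fine_p, coarse_p)
print(labels, flagged)  # pixel repaired to a ground class (road), flag True
```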
Related papers
- Neuro-symbolic Learning Yielding Logical Constraints [22.649543443988712]
End-to-end learning of neuro-symbolic systems remains an unsolved challenge.
We propose a framework that fuses network training, symbol grounding, and logical constraint synthesis into an end-to-end learning process.
arXiv Detail & Related papers (2024-10-28T12:18:25Z)
- Semantic Objective Functions: A distribution-aware method for adding logical constraints in deep learning [4.854297874710511]
Constrained Learning and Knowledge Distillation techniques have shown promising results.
We propose a loss-based method that embeds knowledge and enforces logical constraints in a machine learning model.
We evaluate our method on a variety of learning tasks, including classification tasks with logic constraints.
arXiv Detail & Related papers (2024-05-03T19:21:47Z)
- Learning with Logical Constraints but without Shortcut Satisfaction [23.219364371311084]
We present a new framework for learning with logical constraints.
Specifically, we address the shortcut satisfaction issue by introducing dual variables for logical connectives.
We propose a variational framework where the encoded logical constraint is expressed as a distributional loss.
arXiv Detail & Related papers (2024-03-01T07:17:20Z)
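To give a concrete feel for the primal-dual idea behind the entry above, here is a toy sketch of constrained learning with a dual variable. It is a generic scheme on an invented scalar problem, not the paper's variational formulation or its treatment of logical connectives.

```python
# Toy problem: fit theta to minimize (theta - 2)^2 subject to a "logical"
# constraint theta <= 1, softened as the violation g(theta) = max(0, theta - 1).
# A dual variable lam ascends on the violation while theta descends, so the
# constraint cannot be satisfied by a degenerate shortcut weighting.
theta, lam, lr = 0.0, 0.0, 0.05
for _ in range(500):
    violation = max(0.0, theta - 1.0)
    # d/dtheta [ (theta - 2)^2 + lam * g(theta) ]
    grad_theta = 2.0 * (theta - 2.0) + (lam if theta > 1.0 else 0.0)
    theta -= lr * grad_theta   # primal descent on the model parameter
    lam += lr * violation      # dual ascent on the multiplier
    lam = max(lam, 0.0)
print(round(theta, 3), round(lam, 3))  # theta -> ~1, lam -> ~2: constraint active
```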
- LOGICSEG: Parsing Visual Semantics with Neural Logic Learning and Reasoning [73.98142349171552]
LOGICSEG is a holistic visual semantic parser that integrates neural inductive learning and logic reasoning with both rich data and symbolic knowledge.
During fuzzy logic-based continuous relaxation, logical formulae are grounded onto data and neural computational graphs, hence enabling logic-induced network training.
These designs together make LOGICSEG a general and compact neural-logic machine that is readily integrated into existing segmentation models.
arXiv Detail & Related papers (2023-09-24T05:43:19Z)
- Modeling Hierarchical Reasoning Chains by Linking Discourse Units and Key Phrases for Reading Comprehension [80.99865844249106]
We propose a holistic graph network (HGN) that handles context at both the discourse and word levels as the basis for logical reasoning.
Specifically, node-level and type-level relations, which can be interpreted as bridges in the reasoning process, are modeled by a hierarchical interaction mechanism.
arXiv Detail & Related papers (2023-06-21T07:34:27Z)
- LogiGAN: Learning Logical Reasoning via Adversarial Pre-training [58.11043285534766]
We present LogiGAN, an unsupervised adversarial pre-training framework for improving logical reasoning abilities of language models.
Inspired by the facilitation effect of reflective thinking in human learning, we simulate the learning-thinking process with an adversarial Generator-Verifier architecture.
Both base- and large-size language models pre-trained with LogiGAN show clear performance improvements on 12 datasets.
arXiv Detail & Related papers (2022-05-18T08:46:49Z)
- Leveraging Unlabeled Data for Entity-Relation Extraction through Probabilistic Constraint Satisfaction [54.06292969184476]
We study the problem of entity-relation extraction in the presence of symbolic domain knowledge.
Our approach employs a semantic loss that captures the precise meaning of a logical sentence.
With a focus on low-data regimes, we show that semantic loss outperforms the baselines by a wide margin.
arXiv Detail & Related papers (2021-03-20T00:16:29Z)
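For intuition about the semantic loss in the entry above, here is a minimal sketch in the style of Xu et al.'s weighted model counting formulation, instantiated for a simple exactly-one constraint; the constraint choice is illustrative, not necessarily the rules used in that work.

```python
import numpy as np

def semantic_loss_exactly_one(p):
    """Semantic loss for the constraint 'exactly one label is true'.

    Weighted model counting: sum, over all satisfying assignments, of the
    probability the network assigns to that assignment, then take -log.
    For exactly-one, the satisfying assignments are the one-hot vectors.
    p: vector of independent Bernoulli probabilities from the network.
    """
    wmc = sum(
        p[i] * np.prod([1 - p[j] for j in range(len(p)) if j != i])
        for i in range(len(p))
    )
    return -np.log(wmc)

# A near-one-hot prediction satisfies the constraint almost surely (low loss);
# an ambiguous prediction does not (higher loss).
print(semantic_loss_exactly_one(np.array([0.9, 0.05, 0.05])))  # ~0.20
print(semantic_loss_exactly_one(np.array([0.5, 0.5, 0.5])))    # ~0.98
```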
- Logical Neural Networks [51.46602187496816]
We propose a novel framework that seamlessly provides key properties of both neural networks (learning) and symbolic logic (knowledge and reasoning).
Every neuron has a meaning as a component of a formula in a weighted real-valued logic, yielding a highly interpretable, disentangled representation.
Inference is omnidirectional rather than focused on predefined target variables, and corresponds to logical reasoning.
arXiv Detail & Related papers (2020-06-23T16:55:45Z)
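As a taste of the weighted real-valued logic described in the entry above, here is a sketch of one simplified conjunction neuron; the clamped-linear activation and unconstrained parameters are a simplification of the paper's full framework.

```python
import numpy as np

def weighted_and(x, w, beta):
    """Weighted real-valued conjunction: behaves like classical AND on 0/1
    inputs when beta and w are suitably constrained, but degrades gracefully
    (and differentiably) on truth values in between."""
    return np.clip(beta - np.sum(w * (1.0 - np.asarray(x))), 0.0, 1.0)

# With beta = 1 and unit weights this reduces to the Lukasiewicz t-norm.
print(weighted_and([1.0, 1.0], w=np.ones(2), beta=1.0))  # 1.0: true AND true
print(weighted_and([1.0, 0.0], w=np.ones(2), beta=1.0))  # 0.0: true AND false
print(weighted_and([0.9, 0.8], w=np.ones(2), beta=1.0))  # ~0.7: soft conjunction
```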
- Analyzing Differentiable Fuzzy Logic Operators [3.4806267677524896]
We study how a large collection of logical operators from the fuzzy logic literature behave in a differentiable learning setting.
We show that it is possible to use Differentiable Fuzzy Logics for semi-supervised learning, and compare how different operators behave in practice.
arXiv Detail & Related papers (2020-02-14T16:11:36Z)
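To illustrate the kind of comparison such a study makes, here is a small sketch of three classical t-norms used as differentiable conjunctions, with comments noting where each one's gradients vanish; the example values are invented.

```python
import numpy as np

# Three classical t-norms, each a candidate differentiable AND for turning
# a logical constraint into a loss term (loss = 1 - truth value).
def t_product(a, b):      # gradients flow to both arguments everywhere
    return a * b

def t_godel(a, b):        # gradient reaches only the smaller argument
    return np.minimum(a, b)

def t_lukasiewicz(a, b):  # gradient vanishes entirely once a + b <= 1
    return np.maximum(0.0, a + b - 1.0)

a, b = 0.3, 0.6
for name, t in [("product", t_product), ("Godel", t_godel),
                ("Lukasiewicz", t_lukasiewicz)]:
    print(f"{name:12s} AND({a}, {b}) = {t(a, b):.2f}  loss = {1 - t(a, b):.2f}")
```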
This list is automatically generated from the titles and abstracts of the papers on this site.