Learning from Noisy Crowd Labels with Logics
- URL: http://arxiv.org/abs/2302.06337v3
- Date: Sun, 19 Mar 2023 13:01:47 GMT
- Title: Learning from Noisy Crowd Labels with Logics
- Authors: Zhijun Chen, Hailong Sun, Haoqian He, Pengpeng Chen
- Abstract summary: We introduce Logic-guided Learning from Noisy Crowd Labels (Logic-LNCL), an EM-alike iterative logic knowledge distillation framework.
We show that the proposed framework improves the state-of-the-art and provides a new solution to learning from noisy crowd labels.
- Score: 10.574859201380612
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper explores the integration of symbolic logic knowledge into deep
neural networks for learning from noisy crowd labels. We introduce Logic-guided
Learning from Noisy Crowd Labels (Logic-LNCL), an EM-like iterative logic
knowledge distillation framework that learns from both noisy labeled data and
logic rules of interest. Unlike traditional EM methods, our framework contains
a ``pseudo-E-step'' that distills from the logic rules a new type of learning
target, which is then used in the ``pseudo-M-step'' for training the
classifier. Extensive evaluations on two real-world datasets for text sentiment
classification and named entity recognition demonstrate that the proposed
framework improves the state-of-the-art and provides a new solution to learning
from noisy crowd labels.
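The pseudo-E/pseudo-M loop can be pictured with a short sketch. The soft-logic projection below follows the general rule-distillation recipe of posterior regularization; the rule-satisfaction scores `rule_sat`, the mixing weight `alpha`, and the trainer interface are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def pseudo_e_step(p_theta, rule_sat, C=1.0):
    """Distill the logic rules into a new learning target: project the
    classifier's predictions p_theta (n_samples x n_classes) toward
    distributions consistent with the rules. rule_sat[i, k] in [0, 1]
    scores how well predicting class k for sample i satisfies the rules
    (an assumed, task-specific input)."""
    q = p_theta * np.exp(C * (rule_sat - 1.0))  # down-weight rule-violating classes
    return q / q.sum(axis=1, keepdims=True)

def pseudo_m_step(model, X, q, crowd_targets, alpha=0.5):
    """Retrain the classifier on a mixture of the distilled targets q and
    soft targets aggregated from the noisy crowd labels."""
    soft_targets = alpha * q + (1.0 - alpha) * crowd_targets
    model.fit(X, soft_targets)  # any trainer that accepts soft labels
    return model

# Iterate, EM-style, until the targets stabilize:
# for _ in range(n_rounds):
#     q = pseudo_e_step(model.predict_proba(X), rule_sat)
#     model = pseudo_m_step(model, X, q, crowd_targets)
```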
Related papers
- Leveraging Label Semantics and Meta-Label Refinement for Multi-Label Question Classification [11.19022605804112]
This paper introduces RR2QC, a novel Retrieval Reranking method for multi-label Question Classification.
It uses label semantics and meta-label refinement to enhance personalized learning and resource recommendation.
Experimental results demonstrate that RR2QC outperforms existing classification methods in Precision@k and F1 scores.
arXiv Detail & Related papers (2024-11-04T06:27:14Z)
- Reducing Labeling Costs in Sentiment Analysis via Semi-Supervised Learning [0.0]
This study explores label propagation in semi-supervised learning.
We employ a transductive label propagation method based on the manifold assumption for text classification.
By extending labels based on cosine proximity within a nearest neighbor graph built from network embeddings, we incorporate unlabeled data into supervised learning.
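A minimal way to realize this kind of transductive propagation with scikit-learn is sketched below; the toy embeddings, the choice of `LabelSpreading`, and `n_neighbors=10` are assumptions, and L2-normalizing first makes the kNN graph track cosine proximity.

```python
import numpy as np
from sklearn.preprocessing import normalize
from sklearn.semi_supervised import LabelSpreading

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(200, 32))   # stand-in for network embeddings
y = np.full(200, -1)                      # -1 marks unlabeled samples
y[:20] = rng.integers(0, 2, size=20)      # a few seed labels

X = normalize(embeddings)                 # unit norm: kNN distance tracks cosine proximity
prop = LabelSpreading(kernel='knn', n_neighbors=10)
prop.fit(X, y)
y_all = prop.transduction_                # propagated labels for every sample
```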
arXiv Detail & Related papers (2024-10-15T07:25:33Z)
- Text2Tree: Aligning Text Representation to the Label Tree Hierarchy for Imbalanced Medical Classification [9.391704905671476]
This paper aims to rethink the data challenges in medical texts and present a novel framework-agnostic algorithm called Text2Tree.
We embed the ICD code tree structure of labels into cascade attention modules for learning hierarchy-aware label representations.
Two new learning schemes, Similarity Surrogate Learning (SSL) and Dissimilarity Mixup Learning (DML), are devised to boost text classification by reusing and distinguishing samples of other labels.
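The summary does not spell SSL out, but a soft supervised-contrastive objective weighted by tree-derived label similarity conveys the idea of reusing samples of other labels; everything below (the similarity matrix, the temperature, the loss form) is an illustrative assumption rather than the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def similarity_surrogate_loss(z, label_sim, tau=0.1):
    """Sketch of a Similarity-Surrogate-style objective: samples whose
    labels sit close together in the ICD code tree act as soft positives.
    z: (n, d) text representations; label_sim: (n, n) tree-derived label
    similarities in [0, 1] (assumed precomputed)."""
    z = F.normalize(z, dim=1)
    n = z.size(0)
    eye = torch.eye(n, dtype=torch.bool, device=z.device)
    logits = (z @ z.t() / tau).masked_fill(eye, float('-inf'))
    log_p = F.log_softmax(logits, dim=1).masked_fill(eye, 0.0)  # avoid 0 * -inf
    w = label_sim.masked_fill(eye, 0.0)        # a sample is not its own surrogate
    w = w / w.sum(dim=1, keepdim=True).clamp_min(1e-8)
    return -(w * log_p).sum(dim=1).mean()
```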
arXiv Detail & Related papers (2023-11-28T10:02:08Z)
- LOGICSEG: Parsing Visual Semantics with Neural Logic Learning and Reasoning [73.98142349171552]
LOGICSEG is a holistic visual semantic parser that integrates neural inductive learning and logic reasoning with both rich data and symbolic knowledge.
During fuzzy logic-based continuous relaxation, logical formulae are grounded onto data and neural computational graphs, hence enabling logic-induced network training.
These designs together make LOGICSEG a general and compact neural-logic machine that is readily integrated into existing segmentation models.
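A toy grounding of one such formula shows the mechanism: a hierarchy rule like dog(x) → animal(x) is relaxed into a differentiable truth value on the network's per-pixel probabilities. The specific relaxation and loss below are illustrative, not LOGICSEG's exact formulae.

```python
import torch

def implication_penalty(p_child, p_parent, eps=1e-8):
    """Fuzzy relaxation of `child(x) -> parent(x)` (e.g. dog -> animal):
    truth(a -> b) = 1 - a * (1 - b), which is close to 1 when the child
    is not predicted or the parent is. Minimizing the negative log-truth
    grounds the rule on per-pixel class probabilities, so gradients flow
    through the logic term during training."""
    truth = 1.0 - p_child * (1.0 - p_parent)
    return -torch.log(truth.clamp_min(eps)).mean()

# Example: penalize pixels predicted 'dog' but not 'animal'.
p = torch.rand(2, 5, 64, 64).softmax(dim=1)    # (batch, classes, H, W)
loss = implication_penalty(p[:, 3], p[:, 1])   # hypothetical class indices
```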
arXiv Detail & Related papers (2023-09-24T05:43:19Z)
- Logic-induced Diagnostic Reasoning for Semi-supervised Semantic Segmentation [85.12429517510311]
LogicDiag is a neural-logic semi-supervised learning framework for semantic segmentation.
Our key insight is that conflicts within pseudo labels, identified through symbolic knowledge, can serve as strong yet commonly ignored learning signals.
We showcase the practical application of LogicDiag in the data-hungry segmentation scenario, where we formalize the structured abstraction of semantic concepts as a set of logic rules.
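A minimal sketch of this conflict diagnosis, assuming per-class sigmoid scores and a hypothetical mutual-exclusion rule; LogicDiag's actual rule set and repair mechanism are richer.

```python
import numpy as np

def flag_conflicts(scores, exclusive_pairs, thresh=0.5):
    """Diagnose pseudo labels with a symbolic rule: two mutually
    exclusive classes must not both be predicted confidently at the
    same position. scores: (..., n_classes) per-class sigmoid scores;
    exclusive_pairs: class-index pairs that cannot co-occur.
    Flagged positions become learning signals (repaired or
    down-weighted) rather than being silently trusted."""
    conflict = np.zeros(scores.shape[:-1], dtype=bool)
    for a, b in exclusive_pairs:
        conflict |= (scores[..., a] > thresh) & (scores[..., b] > thresh)
    return conflict
```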
arXiv Detail & Related papers (2023-08-24T06:50:07Z)
- Channel-Wise Contrastive Learning for Learning with Noisy Labels [60.46434734808148]
We introduce channel-wise contrastive learning (CWCL) to distinguish authentic label information from noise.
Unlike conventional instance-wise contrastive learning (IWCL), CWCL tends to yield more nuanced and resilient features aligned with the authentic labels.
Our strategy is twofold: firstly, using CWCL to extract pertinent features to identify cleanly labeled samples, and secondly, progressively fine-tuning using these samples.
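The twofold strategy can be pictured with a stand-in selection rule; the prototype-agreement criterion below is only a placeholder for CWCL-based selection, whose actual criterion differs.

```python
import torch
import torch.nn.functional as F

def select_clean(feats, noisy_labels, num_classes, keep_ratio=0.5):
    """Stage one, sketched: flag likely-clean samples by how well each
    sample's feature agrees with its (noisy) class prototype; stage two
    would fine-tune on the flagged subset. feats: (n, d) features from
    the contrastively trained encoder; noisy_labels: (n,) long tensor."""
    feats = F.normalize(feats, dim=1)
    protos = torch.stack([feats[noisy_labels == c].mean(dim=0)
                          for c in range(num_classes)])
    protos = F.normalize(protos, dim=1)
    agree = (feats @ protos.t()).gather(1, noisy_labels.view(-1, 1)).squeeze(1)
    k = max(1, int(keep_ratio * feats.size(0)))
    return agree.topk(k).indices               # indices of likely-clean samples
```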
arXiv Detail & Related papers (2023-08-14T06:04:50Z)
- Unleashing the Potential of Regularization Strategies in Learning with Noisy Labels [65.92994348757743]
We demonstrate that a simple baseline using cross-entropy loss, combined with widely used regularization strategies, can outperform state-of-the-art methods.
Our findings suggest that employing a combination of regularization strategies can be more effective than intricate algorithms in tackling the challenges of learning with noisy labels.
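As a concrete instance of such a baseline, the sketch below combines plain cross-entropy with two widely used regularizers, mixup and label smoothing; the paper evaluates a broader set, so this is illustrative rather than the exact recipe.

```python
import torch
import torch.nn.functional as F

def regularized_ce_step(model, x, y, alpha=0.2, smoothing=0.1):
    """One training step of the 'simple baseline': cross-entropy plus
    mixup (convexly combine inputs and targets) and label smoothing."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(x.size(0))
    logits = model(lam * x + (1 - lam) * x[perm])      # mixup on inputs
    loss = (lam * F.cross_entropy(logits, y, label_smoothing=smoothing)
            + (1 - lam) * F.cross_entropy(logits, y[perm], label_smoothing=smoothing))
    return loss
```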
arXiv Detail & Related papers (2023-07-11T05:58:20Z)
- Description-Enhanced Label Embedding Contrastive Learning for Text Classification [65.01077813330559]
This paper incorporates Self-Supervised Learning (SSL) into the model learning process and designs a novel self-supervised Relation of Relation (R2) classification task.
It proposes a Relation of Relation Learning Network (R2-Net) for text classification, in which text classification and R2 classification are treated as joint optimization targets.
External knowledge from WordNet is exploited to obtain multi-aspect descriptions for label semantic learning.
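Treating the two tasks as joint optimization targets amounts to a weighted multi-task loss; the sketch below assumes a classification head and an R2 head, with the weight `lam` as an illustrative choice.

```python
import torch.nn.functional as F

def joint_loss(clf_logits, y, r2_logits, r2_targets, lam=0.5):
    """Joint objective: supervised text classification plus the
    self-supervised Relation-of-Relation (R2) task, optimized together."""
    return (F.cross_entropy(clf_logits, y)
            + lam * F.cross_entropy(r2_logits, r2_targets))
```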
arXiv Detail & Related papers (2023-06-15T02:19:34Z)
- Transductive CLIP with Class-Conditional Contrastive Learning [68.51078382124331]
We propose Transductive CLIP, a novel framework for learning a classification network with noisy labels from scratch.
A class-conditional contrastive learning mechanism is proposed to mitigate the reliance on pseudo labels.
An ensemble-labels strategy is adopted for pseudo-label updating to stabilize the training of deep neural networks with noisy labels.
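One common way to realize such an ensemble-label update is an exponential moving average over per-epoch predictions, sketched below; the momentum value and the hardening step are assumptions.

```python
import torch
import torch.nn.functional as F

def update_ensemble_labels(ensemble, logits, momentum=0.9):
    """Stabilize pseudo labels by averaging predictions over time, so a
    single noisy epoch cannot flip the training target. ensemble:
    (n, n_classes) running average of softmax outputs."""
    ensemble = momentum * ensemble + (1.0 - momentum) * F.softmax(logits, dim=1)
    pseudo = ensemble.argmax(dim=1)            # hardened targets for the next epoch
    return ensemble, pseudo
```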
arXiv Detail & Related papers (2022-06-13T14:04:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.