Verification Learning: Make Unsupervised Neuro-Symbolic System Feasible
- URL: http://arxiv.org/abs/2503.12917v1
- Date: Mon, 17 Mar 2025 08:28:58 GMT
- Title: Verification Learning: Make Unsupervised Neuro-Symbolic System Feasible
- Authors: Lin-Han Jia, Wen-Chao Hu, Jie-Jing Shao, Lan-Zhe Guo, Yu-Feng Li
- Abstract summary: This paper introduces a novel learning paradigm, Verification Learning (VL), which transforms the label-based reasoning process in NeSy into a label-free verification process. VL achieves excellent learning results solely by relying on unlabeled data and a function that verifies whether the current predictions conform to the rules. We validate the proposed framework through several fully unsupervised tasks including addition, sort, match, and chess, each showing significant performance and efficiency improvements.
- Score: 49.084647802326366
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The current Neuro-Symbolic (NeSy) learning paradigm suffers from an over-reliance on labeled data. Completely disregarding labels leads to less symbol information, a larger solution space, and more shortcuts: issues that current NeSy systems cannot resolve. This paper introduces a novel learning paradigm, Verification Learning (VL), which addresses this challenge by transforming the label-based reasoning process in NeSy into a label-free verification process. VL achieves excellent learning results relying solely on unlabeled data and a function that verifies whether the current predictions conform to the rules. We formalize this problem as a Constraint Optimization Problem (COP) and propose a Dynamic Combinatorial Sorting (DCS) algorithm that accelerates the solution by reducing verification attempts, effectively lowering the computational cost to the level of a Constraint Satisfaction Problem (CSP). To further enhance performance, we introduce a prior alignment method to address potential shortcuts. Our theoretical analysis identifies which tasks in NeSy systems can be completed without labels, and explains why, for some tasks such as addition, rules can replace infinitely many labels, while for others, such as Sudoku, the rules have no effect. We validate the proposed framework on several fully unsupervised tasks, including addition, sort, match, and chess, each showing significant improvements in performance and efficiency.
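To make the label-free verification loop concrete, below is a minimal, hypothetical sketch (not the authors' released code). It assumes a single-digit addition task where three digit images must satisfy x1 + x2 = x3 under the predicted labels, so the rule check stands in for any ground-truth label. Candidate assignments are tried in descending order of the network's joint confidence until one verifies, which captures the flavor of reducing verification attempts that DCS targets; the heap-based best-first search and all names here are illustrative stand-ins, not the paper's algorithm.

```python
import heapq
import numpy as np

def best_verified_assignment(log_probs, verify, max_attempts=1000):
    """Best-first search over joint label assignments in descending
    joint-probability order; returns the first one passing `verify`.

    log_probs: list of 1-D arrays, log_probs[i][y] = log P(label_i = y).
    verify:    rule-checking function, tuple of labels -> bool.
    """
    k = len(log_probs)
    # Per position, label indices sorted by descending log-probability.
    order = [np.argsort(-lp) for lp in log_probs]

    def score(idx):
        # Joint log-probability of the assignment at these rank indices.
        return sum(log_probs[i][order[i][idx[i]]] for i in range(k))

    start = (0,) * k
    heap, seen = [(-score(start), start)], {start}
    for _ in range(max_attempts):
        if not heap:
            break
        _, idx = heapq.heappop(heap)
        labels = tuple(int(order[i][idx[i]]) for i in range(k))
        if verify(labels):  # the rule check replaces a ground-truth label
            return labels   # usable as a pseudo-label to train the network
        # Successors: bump one position to its next-most-confident label.
        for i in range(k):
            nxt = idx[:i] + (idx[i] + 1,) + idx[i + 1:]
            if nxt[i] < len(order[i]) and nxt not in seen:
                seen.add(nxt)
                heapq.heappush(heap, (-score(nxt), nxt))
    return None  # no verified assignment within the attempt budget

# Hypothetical usage for a single-digit equation "x1 + x2 = x3":
rng = np.random.default_rng(0)
probs = [rng.dirichlet(np.ones(10)) for _ in range(3)]  # stand-in network outputs
pseudo = best_verified_assignment([np.log(p) for p in probs],
                                  verify=lambda y: y[0] + y[1] == y[2])
print(pseudo)
```

In the paper's framing, a verified assignment would then serve as a pseudo-label for training the perception network, and the prior alignment step (omitted here) would guard against shortcut solutions such as predicting 0 + 0 = 0 for every example.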
Related papers
- UPL: Uncertainty-aware Pseudo-labeling for Imbalance Transductive Node Classification [4.314840213630772]
We propose a simple and novel algorithm, Uncertainty-aware Pseudo-labeling (UPL).
Our approach leverages pseudo-labels assigned to unlabeled nodes to mitigate the adverse effects of imbalance on classification accuracy.
We evaluate the UPL algorithm across various benchmark datasets, demonstrating its superior performance compared to existing state-of-the-art methods.
arXiv Detail & Related papers (2025-02-02T08:19:42Z)
- An Embedding is Worth a Thousand Noisy Labels [0.11999555634662634]
We propose WANN, a Weighted Adaptive Nearest Neighbor approach to address label noise.
We show WANN outperforms reference methods on diverse datasets of varying size and under various noise types and severities.
Our approach, emphasizing efficiency and explainability, emerges as a simple, robust solution to overcome inherent limitations of deep neural network training.
arXiv Detail & Related papers (2024-08-26T15:32:31Z)
- Towards Improved Imbalance Robustness in Continual Multi-Label Learning with Dual Output Spiking Architecture (DOSA) [3.7039357017353214]
This work proposes a dual output spiking architecture (DOSA) for continual multi-label learning.
A novel imbalance-aware loss function is also proposed, improving the multi-label classification performance of the model.
arXiv Detail & Related papers (2024-02-07T05:38:53Z)
- Robust Representation Learning for Unreliable Partial Label Learning [86.909511808373]
Partial Label Learning (PLL) is a type of weakly supervised learning where each training instance is assigned a set of candidate labels, but only one label is the ground truth.
When the candidate set may not even contain the ground truth, the problem becomes Unreliable Partial Label Learning (UPLL), which introduces additional complexity due to the inherent unreliability and ambiguity of partial labels.
We propose the Unreliability-Robust Representation Learning framework (URRL), which leverages unreliability-robust contrastive learning to fortify the model against unreliable partial labels.
arXiv Detail & Related papers (2023-08-31T13:37:28Z)
- Imprecise Label Learning: A Unified Framework for Learning with Various Imprecise Label Configurations [91.67511167969934]
Imprecise Label Learning (ILL) is a framework that unifies learning with various imprecise label configurations.
We demonstrate that ILL can seamlessly adapt to partial label learning, semi-supervised learning, noisy label learning, and, more importantly, a mixture of these settings.
arXiv Detail & Related papers (2023-05-22T04:50:28Z)
- An Embarrassingly Simple Approach to Semi-Supervised Few-Shot Learning [58.59343434538218]
We propose a simple but quite effective approach to predict accurate negative pseudo-labels of unlabeled data from an indirect learning perspective.
Our approach can be implemented in just a few lines of code using only off-the-shelf operations.
arXiv Detail & Related papers (2022-09-28T02:11:34Z)
- Communication-Efficient Robust Federated Learning with Noisy Labels [144.31995882209932]
Federated learning (FL) is a promising privacy-preserving machine learning paradigm over distributed data.
We propose a learning-based reweighting approach to mitigate the effect of noisy labels in FL.
Our approach has shown superior performance on several real-world datasets compared to various baselines.
arXiv Detail & Related papers (2022-06-11T16:21:17Z)
- Neural Active Learning with Performance Guarantees [37.16062387461106]
We investigate the problem of active learning in the streaming setting in non-parametric regimes, where the labels are generated from a class of functions on which we make no assumptions whatsoever.
We rely on recently proposed Neural Tangent Kernel (NTK) approximation tools to construct a suitable neural embedding that determines the feature space the algorithm operates on and the model learned on top of it.
arXiv Detail & Related papers (2021-06-06T20:44:23Z)
- Iterative label cleaning for transductive and semi-supervised few-shot learning [16.627512688664513]
Few-shot learning amounts to learning representations and acquiring knowledge such that novel tasks may be solved with limited supervision and limited data.
We introduce a new algorithm that leverages the manifold structure of the labeled and unlabeled data distribution to predict pseudo-labels.
Our solution surpasses or matches state-of-the-art results on four benchmark datasets.
arXiv Detail & Related papers (2020-12-14T21:54:11Z)
- Semi-Supervised Learning with Meta-Gradient [123.26748223837802]
We propose a simple yet effective meta-learning algorithm in semi-supervised learning.
We find that the proposed algorithm performs favorably against state-of-the-art methods.
arXiv Detail & Related papers (2020-07-08T08:48:56Z)
- Progressive Identification of True Labels for Partial-Label Learning [112.94467491335611]
Partial-label learning (PLL) is a typical weakly supervised learning problem, where each training instance is equipped with a set of candidate labels among which only one is the true label.
Most existing methods are elaborately designed as constrained optimizations that must be solved in specific manners, making their computational complexity a bottleneck for scaling up to big data.
This paper proposes a novel framework of classifier learning that is flexible in the choice of model and optimization algorithm; a generic sketch of the progressive-identification idea appears after this list.
arXiv Detail & Related papers (2020-02-19T08:35:15Z)
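Since the blurb above leaves the mechanism abstract, here is a minimal, self-contained sketch of the general progressive-identification idea as it is commonly realized (the synthetic data, the linear softmax classifier, and the confidence-based candidate reweighting are illustrative assumptions, not the paper's exact method): train with a cross-entropy weighted over each instance's candidate labels, and progressively shift those weights toward the labels the current model finds most plausible.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic partial-label data: 3 classes; each instance's candidate set
# holds its true label plus one random (possibly coinciding) distractor.
n, d, C = 300, 5, 3
means = 2.0 * rng.normal(size=(C, d))
true = rng.integers(0, C, size=n)
X = means[true] + rng.normal(size=(n, d))
masks = np.zeros((n, C))
masks[np.arange(n), true] = 1
masks[np.arange(n), rng.integers(0, C, size=n)] = 1

W = masks / masks.sum(axis=1, keepdims=True)  # uniform over candidates
theta = np.zeros((d, C))                      # linear softmax classifier

for epoch in range(200):
    logits = X @ theta
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    # Gradient of the weighted cross-entropy  -sum_i sum_y W[i,y] log p[i,y]
    # with respect to the logits is (p - W); chain through the linear model.
    theta -= 0.1 / n * (X.T @ (p - W))
    # Progressive identification: reweight candidates by model confidence.
    w = p * masks
    W = w / w.sum(axis=1, keepdims=True)

print("train accuracy:", ((X @ theta).argmax(axis=1) == true).mean())
```

No step requires knowing which candidate is correct; as the classifier improves, the weights concentrate on the labels it can consistently explain.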