Learning Symbolic Representations Through Joint GEnerative and
DIscriminative Training
- URL: http://arxiv.org/abs/2304.11357v1
- Date: Sat, 22 Apr 2023 09:35:51 GMT
- Title: Learning Symbolic Representations Through Joint GEnerative and
DIscriminative Training
- Authors: Emanuele Sansone, Robin Manhaeve
- Abstract summary: GEDI is a Bayesian framework that combines self-supervised learning objectives with likelihood-based generative models.
We demonstrate GEDI outperforms existing self-supervised learning strategies in terms of clustering performance by a significant margin.
- Score: 3.6804038214708563
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce GEDI, a Bayesian framework that combines existing
self-supervised learning objectives with likelihood-based generative models.
This framework leverages the benefits of both GEnerative and DIscriminative
approaches, resulting in improved symbolic representations over standalone
solutions. Additionally, GEDI can be easily integrated and trained jointly with
existing neuro-symbolic frameworks without the need for additional supervision
or costly pre-training steps. We demonstrate through experiments on real-world
data, including SVHN, CIFAR10, and CIFAR100, that GEDI outperforms existing
self-supervised learning strategies in terms of clustering performance by a
significant margin. The symbolic component further allows it to leverage
knowledge in the form of logical constraints to improve performance in the
small data regime.
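The joint training idea described in the abstract can be illustrated with a minimal sketch, a hedged illustration rather than the authors' implementation: the total objective sums a discriminative self-supervised term (here a toy InfoNCE-style contrastive loss) with a generative log-likelihood term (here a diagonal Gaussian), with a trade-off weight `lam` that is a hypothetical parameter, not from the paper.

```python
import numpy as np

def discriminative_loss(z, z_aug, temperature=0.5):
    """Toy InfoNCE-style contrastive loss between embeddings of two views."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    z_aug = z_aug / np.linalg.norm(z_aug, axis=1, keepdims=True)
    logits = z @ z_aug.T / temperature           # pairwise cosine similarities
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))          # positive pairs on the diagonal

def generative_loss(x, mu, sigma=1.0):
    """Negative log-likelihood of the data under a diagonal Gaussian model."""
    d = x.shape[1]
    return np.mean(0.5 * np.sum((x - mu) ** 2, axis=1) / sigma**2
                   + 0.5 * d * np.log(2 * np.pi * sigma**2))

def joint_objective(x, z, z_aug, mu, lam=1.0):
    """GEDI-style joint objective: discriminative term + weighted generative term."""
    return discriminative_loss(z, z_aug) + lam * generative_loss(x, mu)

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 4))                      # toy data batch
z, z_aug = rng.normal(size=(8, 2)), rng.normal(size=(8, 2))  # toy embeddings
loss = joint_objective(x, z, z_aug, mu=x.mean(axis=0))
print(float(loss))
```

In practice both terms would share an encoder and be minimized jointly by gradient descent; the sketch only shows how the two objectives combine into one scalar loss.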
Related papers
- A Unified and General Framework for Continual Learning [58.72671755989431]
Continual Learning (CL) focuses on learning from dynamic and changing data distributions while retaining previously acquired knowledge.
Various methods have been developed to address the challenge of catastrophic forgetting, including regularization-based, Bayesian-based, and memory-replay-based techniques.
This research aims to bridge this gap by introducing a comprehensive and overarching framework that encompasses and reconciles these existing methodologies.
arXiv Detail & Related papers (2024-03-20T02:21:44Z)
- A Probabilistic Model Behind Self-Supervised Learning [53.64989127914936]
In self-supervised learning (SSL), representations are learned via an auxiliary task without annotated labels.
We present a generative latent variable model for self-supervised learning.
We show that several families of discriminative SSL, including contrastive methods, induce a comparable distribution over representations.
arXiv Detail & Related papers (2024-02-02T13:31:17Z)
- Relaxed Contrastive Learning for Federated Learning [48.96253206661268]
We propose a novel contrastive learning framework to address the challenges of data heterogeneity in federated learning.
Our framework outperforms all existing federated learning approaches by huge margins on the standard benchmarks.
arXiv Detail & Related papers (2024-01-10T04:55:24Z)
- A Bayesian Unification of Self-Supervised Clustering and Energy-Based Models [11.007541337967027]
We perform a Bayesian analysis of state-of-the-art self-supervised learning objectives.
We show that our objective function allows us to outperform existing self-supervised learning strategies.
We also demonstrate that GEDI can be integrated into a neuro-symbolic framework.
arXiv Detail & Related papers (2023-12-30T04:46:16Z)
- Structural Adversarial Objectives for Self-Supervised Representation Learning [19.471586646254373]
We propose objectives for self-supervised representation learning that task the discriminator with additional structural modeling responsibilities.
In combination with an efficient smoothness regularizer imposed on the network, these objectives guide the discriminator to learn to extract informative representations.
Experiments demonstrate that equipping GANs with our self-supervised objectives suffices to produce discriminators which, evaluated in terms of representation learning, compete with networks trained by contrastive learning approaches.
arXiv Detail & Related papers (2023-09-30T12:27:53Z)
- A Novel Neural-symbolic System under Statistical Relational Learning [50.747658038910565]
We propose a general bi-level probabilistic graphical reasoning framework called GBPGR.
In GBPGR, the results of symbolic reasoning are utilized to refine and correct the predictions made by the deep learning models.
Our approach achieves high performance and exhibits effective generalization in both transductive and inductive tasks.
arXiv Detail & Related papers (2023-09-16T09:15:37Z)
- Semi-supervised learning made simple with self-supervised clustering [65.98152950607707]
Self-supervised learning models have been shown to learn rich visual representations without requiring human annotations.
We propose a conceptually simple yet empirically powerful approach to turn clustering-based self-supervised methods into semi-supervised learners.
arXiv Detail & Related papers (2023-06-13T01:09:18Z)
- GEDI: GEnerative and DIscriminative Training for Self-Supervised Learning [3.6804038214708563]
We study state-of-the-art self-supervised learning objectives and propose a unified formulation based on likelihood learning.
We refer to this combined framework as GEDI, which stands for GEnerative and DIscriminative training.
We show that GEDI outperforms existing self-supervised learning strategies in terms of clustering performance by a wide margin.
arXiv Detail & Related papers (2022-12-27T09:33:50Z)
- Mitigating Forgetting in Online Continual Learning via Contrasting Semantically Distinct Augmentations [22.289830907729705]
Online continual learning (OCL) aims to enable model learning from a non-stationary data stream to continuously acquire new knowledge as well as retain the learnt one.
The main challenge comes from the "catastrophic forgetting" issue -- the inability to retain previously learnt knowledge while learning new information.
arXiv Detail & Related papers (2022-11-10T05:29:43Z)
- Weakly Supervised Disentangled Representation for Goal-conditioned Reinforcement Learning [15.698612710580447]
We propose a skill learning framework DR-GRL that aims to improve the sample efficiency and policy generalization.
In a weakly supervised manner, we propose a Spatial Transform AutoEncoder (STAE) to learn an interpretable and controllable representation.
We empirically demonstrate that DR-GRL significantly outperforms the previous methods in sample efficiency and policy generalization.
arXiv Detail & Related papers (2022-02-28T09:05:14Z)
- Edge-assisted Democratized Learning Towards Federated Analytics [67.44078999945722]
We show the hierarchical learning structure of the proposed edge-assisted democratized learning mechanism, namely Edge-DemLearn.
We also validate Edge-DemLearn as a flexible model training mechanism to build a distributed control and aggregation methodology in regions.
arXiv Detail & Related papers (2020-12-01T11:46:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.