Combining Probabilistic Logic and Deep Learning for Self-Supervised
Learning
- URL: http://arxiv.org/abs/2107.12591v1
- Date: Tue, 27 Jul 2021 04:25:56 GMT
- Title: Combining Probabilistic Logic and Deep Learning for Self-Supervised
Learning
- Authors: Hoifung Poon, Hai Wang, Hunter Lang
- Abstract summary: Self-supervised learning has emerged as a promising direction to alleviate the supervision bottleneck.
We present deep probabilistic logic, which offers a unifying framework for task-specific self-supervision.
Next, we present self-supervised self-supervision (S4), which adds to DPL the capability to learn new self-supervision automatically.
- Score: 10.47937328610174
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep learning has proven effective for various application tasks, but its
applicability is limited by the reliance on annotated examples. Self-supervised
learning has emerged as a promising direction to alleviate the supervision
bottleneck, but existing work focuses on leveraging co-occurrences in unlabeled
data for task-agnostic representation learning, as exemplified by masked
language model pretraining. In this chapter, we explore task-specific
self-supervision, which leverages domain knowledge to automatically annotate
noisy training examples for end applications, either by introducing labeling
functions for annotating individual instances, or by imposing constraints over
interdependent label decisions. We first present deep probabilistic logic (DPL),
which offers a unifying framework for task-specific self-supervision by
composing probabilistic logic with deep learning. DPL represents unknown labels
as latent variables and incorporates diverse self-supervision using
probabilistic logic to train a deep neural network end-to-end using variational
EM. Next, we present self-supervised self-supervision (S4), which adds to DPL
the capability to learn new self-supervision automatically. Starting from an
initial seed self-supervision, S4 iteratively uses the deep neural network to
propose new self-supervision. These are either added directly (a form of
structured self-training) or verified by a human expert (as in feature-based
active learning). Experiments on real-world applications such as biomedical
machine reading and various text classification tasks show that task-specific
self-supervision can effectively leverage domain expertise and often match the
accuracy of supervised methods with a tiny fraction of human effort.
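To make the DPL training recipe more concrete, below is a minimal, illustrative sketch in Python/PyTorch: keyword labeling functions provide noisy seed supervision over unlabeled text, the latent labels are estimated by combining the labeling-function votes with the current network's predictions (E-step), and the network is then retrained on those estimates (M-step). The toy data, featurizer, aggregation rule, and all names here are assumptions for illustration only; the actual DPL system composes probabilistic logic (e.g., Markov logic) over the latent labels and trains with a richer variational EM procedure.

```python
# Illustrative sketch only; not the authors' implementation.
import torch
import torch.nn as nn

# --- Seed self-supervision: labeling functions over raw text (hypothetical) ---
def lf_positive(text):           # votes +1, or 0 to abstain
    return 1 if "excellent" in text else 0

def lf_negative(text):           # votes -1, or 0 to abstain
    return -1 if "terrible" in text else 0

LABELING_FUNCTIONS = [lf_positive, lf_negative]

def lf_posterior(text):
    """Aggregate LF votes into a soft P(y=1); 0.5 when all LFs abstain."""
    score = sum(lf(text) for lf in LABELING_FUNCTIONS)
    return 0.9 if score > 0 else 0.1 if score < 0 else 0.5

# --- Toy featurizer and classifier (stand-ins for a real text encoder) --------
def featurize(text):
    # Hypothetical 2-d keyword features, just to keep the sketch runnable.
    return torch.tensor([float("excellent" in text), float("terrible" in text)])

model = nn.Sequential(nn.Linear(2, 8), nn.ReLU(), nn.Linear(8, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
bce = nn.BCEWithLogitsLoss()

texts = ["excellent plot", "terrible acting", "an average film", "excellent cast"]
features = torch.stack([featurize(t) for t in texts])
lf_prob = torch.tensor([lf_posterior(t) for t in texts])

for em_iter in range(5):
    # E-step: estimate the latent labels by combining the LF posteriors
    # with the current network predictions (a simple product of experts).
    with torch.no_grad():
        net_prob = torch.sigmoid(model(features)).squeeze(1)
    q = (net_prob * lf_prob) / (net_prob * lf_prob + (1 - net_prob) * (1 - lf_prob))

    # M-step: retrain the network to match the estimated label posteriors.
    for _ in range(50):
        optimizer.zero_grad()
        logits = model(features).squeeze(1)
        loss = bce(logits, q)
        loss.backward()
        optimizer.step()

# S4-style step (sketch): examples the trained network labels with high
# confidence can be promoted to new self-supervision, either added directly
# (structured self-training) or shown to a human expert for verification
# (feature-based active learning) before the next DPL iteration.
with torch.no_grad():
    probs = torch.sigmoid(model(features)).squeeze(1)
confident = [(t, float(p)) for t, p in zip(texts, probs) if p > 0.95 or p < 0.05]
```

In the full S4 loop, such confident predictions (or candidate labeling functions derived from them) are fed back as additional self-supervision for subsequent iterations.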
Related papers
- Unsupervised 3D registration through optimization-guided cyclical self-training [71.75057371518093]
State-of-the-art deep learning-based registration methods employ three different learning strategies.
We propose a novel self-supervised learning paradigm for unsupervised registration, relying on self-training.
We evaluate the method for abdomen and lung registration, consistently surpassing metric-based supervision and outperforming diverse state-of-the-art competitors.
arXiv Detail & Related papers (2023-06-29T14:54:10Z)
- Semi-supervised learning made simple with self-supervised clustering [65.98152950607707]
Self-supervised learning models have been shown to learn rich visual representations without requiring human annotations.
We propose a conceptually simple yet empirically powerful approach to turn clustering-based self-supervised methods into semi-supervised learners.
arXiv Detail & Related papers (2023-06-13T01:09:18Z)
- Self-Supervised Multi-Object Tracking For Autonomous Driving From Consistency Across Timescales [53.55369862746357]
Self-supervised multi-object trackers have tremendous potential as they enable learning from raw domain-specific data.
However, their re-identification accuracy still falls short compared to their supervised counterparts.
We propose a training objective that enables self-supervised learning of re-identification features from multiple sequential frames.
arXiv Detail & Related papers (2023-04-25T20:47:29Z)
- Self-Supervised Graph Neural Network for Multi-Source Domain Adaptation [51.21190751266442]
Domain adaptation (DA) addresses scenarios in which the test data does not follow the same distribution as the training data.
By learning from large-scale unlabeled samples, self-supervised learning has now become a new trend in deep learning.
We propose a novel Self-Supervised Graph Neural Network (SSG) to enable more effective inter-task information exchange and knowledge sharing.
arXiv Detail & Related papers (2022-04-08T03:37:56Z)
- Self-supervised learning for joint SAR and multispectral land cover classification [38.8529535887097]
We present a framework and specific tasks for self-supervised training of multichannel models.
We show that the proposed self-supervised approach is highly effective at learning features that correlate with the labels for land cover classification.
arXiv Detail & Related papers (2021-08-20T09:02:07Z)
- Automated Self-Supervised Learning for Graphs [37.14382990139527]
This work aims to investigate how to automatically leverage multiple pretext tasks effectively.
We make use of a key principle of many real-world graphs, i.e., homophily, as guidance to effectively search over various self-supervised pretext tasks.
We propose the AutoSSL framework which can automatically search over combinations of various self-supervised tasks.
arXiv Detail & Related papers (2021-06-10T03:09:20Z)
- Self-supervised driven consistency training for annotation efficient histopathology image analysis [13.005873872821066]
Training a neural network with a large labeled dataset is still a dominant paradigm in computational histopathology.
We propose a self-supervised pretext task that harnesses the underlying multi-resolution contextual cues in histology whole-slide images to learn a powerful supervisory signal for unsupervised representation learning.
We also propose a new teacher-student semi-supervised consistency paradigm that learns to effectively transfer the pretrained representations to downstream tasks based on prediction consistency with the task-specific unlabeled data.
arXiv Detail & Related papers (2021-02-07T19:46:21Z)
- Self-supervised self-supervision by combining deep learning and probabilistic logic [10.515109852315168]
We propose Self-Supervised Self-Supervision (S4) to learn new self-supervision automatically.
S4 is able to automatically propose accurate self-supervision and can often nearly match the accuracy of supervised methods with a tiny fraction of the human effort.
arXiv Detail & Related papers (2020-12-23T04:06:41Z)
- Aggregative Self-Supervised Feature Learning from a Limited Sample [12.555160911451688]
We propose two strategies of aggregation in terms of complementarity of various forms to boost the robustness of self-supervised learned features.
Our experiments on 2D natural image and 3D medical image classification tasks under limited data scenarios confirm that the proposed aggregation strategies successfully boost the classification accuracy.
arXiv Detail & Related papers (2020-12-14T12:49:37Z)
- Can Semantic Labels Assist Self-Supervised Visual Representation Learning? [194.1681088693248]
We present a new algorithm named Supervised Contrastive Adjustment in Neighborhood (SCAN).
In a series of downstream tasks, SCAN achieves superior performance compared to previous fully-supervised and self-supervised methods.
Our study reveals that semantic labels are useful in assisting self-supervised methods, opening a new direction for the community.
arXiv Detail & Related papers (2020-11-17T13:25:00Z)
- Learning What Makes a Difference from Counterfactual Examples and Gradient Supervision [57.14468881854616]
We propose an auxiliary training objective that improves the generalization capabilities of neural networks.
We use pairs of minimally-different examples with different labels, a.k.a. counterfactual or contrasting examples, which provide a signal indicative of the underlying causal structure of the task.
Models trained with this technique demonstrate improved performance on out-of-distribution test sets.
arXiv Detail & Related papers (2020-04-20T02:47:49Z)