Self-supervised self-supervision by combining deep learning and probabilistic logic
- URL: http://arxiv.org/abs/2012.12474v1
- Date: Wed, 23 Dec 2020 04:06:41 GMT
- Title: Self-supervised self-supervision by combining deep learning and probabilistic logic
- Authors: Hunter Lang, Hoifung Poon
- Abstract summary: We propose Self-Supervised Self-Supervision (S4) to learn new self-supervision automatically.
S4 is able to automatically propose accurate self-supervision and can often nearly match the accuracy of supervised methods with a tiny fraction of the human effort.
- Score: 10.515109852315168
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Labeling training examples at scale is a perennial challenge in machine
learning. Self-supervision methods compensate for the lack of direct
supervision by leveraging prior knowledge to automatically generate noisy
labeled examples. Deep probabilistic logic (DPL) is a unifying framework for
self-supervised learning that represents unknown labels as latent variables and
incorporates diverse self-supervision via probabilistic logic, training a deep
neural network end-to-end with variational EM. While DPL is successful at
combining pre-specified self-supervision, manually crafting self-supervision to
attain high accuracy may still be tedious and challenging. In this paper, we
propose Self-Supervised Self-Supervision (S4), which adds to DPL the capability
to learn new self-supervision automatically. Starting from an initial "seed,"
S4 iteratively uses the deep neural network to propose new self-supervision.
These are either added directly (a form of structured self-training) or
verified by a human expert (as in feature-based active learning). Experiments
show that S4 is able to automatically propose accurate self-supervision and can
often nearly match the accuracy of supervised methods with a tiny fraction of
the human effort.
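To make the abstract's loop concrete, the following is a minimal sketch of the S4 iteration under stated assumptions: the helper callables (`train_dpl`, `propose`, `verify`) and all names are illustrative, not the authors' actual interface, and DPL's variational-EM training is treated as a black box.

```python
from typing import Any, Callable, Iterable, List

# Hypothetical sketch of the S4 outer loop described in the abstract.
# All names and signatures here are illustrative assumptions, not the
# authors' actual API.

Rule = Any    # a unit of self-supervision (e.g., a labeling rule)
Model = Any   # the deep neural network trained by DPL

def s4_loop(
    seed_rules: Iterable[Rule],
    unlabeled_data: List[Any],
    train_dpl: Callable[[List[Rule], List[Any]], Model],
    propose: Callable[[Model, List[Any]], List[Rule]],
    verify: Callable[[Rule], bool] = lambda r: True,
    num_iters: int = 10,
) -> Model:
    """Grow a self-supervision set from an initial seed.

    Each iteration trains DPL (latent labels plus probabilistic logic,
    optimized with variational EM; a black box here), asks the trained
    network to propose new self-supervision, and keeps proposals either
    directly (structured self-training) or only when `verify` approves
    them (feature-based active learning with a human expert).
    """
    rules: List[Rule] = list(seed_rules)
    for _ in range(num_iters):
        model = train_dpl(rules, unlabeled_data)
        for rule in propose(model, unlabeled_data):
            if verify(rule):
                rules.append(rule)
    return train_dpl(rules, unlabeled_data)
```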
Related papers
- Neuro-mimetic Task-free Unsupervised Online Learning with Continual Self-Organizing Maps [56.827895559823126]
The self-organizing map (SOM) is a neural model often used for clustering and dimensionality reduction.
We propose a generalization of the SOM, the continual SOM, which is capable of online unsupervised learning under a low memory budget.
Our results, on benchmarks including MNIST, Kuzushiji-MNIST, and Fashion-MNIST, show nearly a twofold increase in accuracy.
arXiv Detail & Related papers (2024-02-19T19:11:22Z)
- Semi-supervised learning made simple with self-supervised clustering [65.98152950607707]
Self-supervised learning models have been shown to learn rich visual representations without requiring human annotations.
We propose a conceptually simple yet empirically powerful approach to turn clustering-based self-supervised methods into semi-supervised learners.
arXiv Detail & Related papers (2023-06-13T01:09:18Z)
- Stabilizing Contrastive RL: Techniques for Robotic Goal Reaching from Offline Data [101.43350024175157]
Self-supervised learning has the potential to decrease the amount of human annotation and engineering effort required to learn control strategies.
Our work builds on prior work showing that reinforcement learning (RL) itself can be cast as a self-supervised problem.
We demonstrate that a self-supervised RL algorithm based on contrastive learning can solve real-world, image-based robotic manipulation tasks.
arXiv Detail & Related papers (2023-06-06T01:36:56Z)
- Self-Supervised Multi-Object Tracking For Autonomous Driving From Consistency Across Timescales [53.55369862746357]
Self-supervised multi-object trackers have tremendous potential as they enable learning from raw domain-specific data.
However, their re-identification accuracy still falls short compared to their supervised counterparts.
We propose a training objective that enables self-supervised learning of re-identification features from multiple sequential frames.
arXiv Detail & Related papers (2023-04-25T20:47:29Z)
- SSL-Lanes: Self-Supervised Learning for Motion Forecasting in Autonomous Driving [9.702784248870522]
Self-supervised learning (SSL) is an emerging technique to train convolutional neural networks (CNNs) and graph neural networks (GNNs).
In this study, we report the first systematic exploration of incorporating self-supervision into motion forecasting.
arXiv Detail & Related papers (2022-06-28T16:23:25Z)
- Better Self-training for Image Classification through Self-supervision [3.492636597449942]
Self-supervision is learning without manual supervision by solving an automatically-generated pretext task.
This paper investigates three ways of incorporating self-supervision into self-training to improve accuracy in image classification.
arXiv Detail & Related papers (2021-09-02T08:24:41Z)
- Combining Probabilistic Logic and Deep Learning for Self-Supervised Learning [10.47937328610174]
Self-supervised learning has emerged as a promising direction to alleviate the supervision bottleneck.
We present deep probabilistic logic, which offers a unifying framework for task-specific self-supervision.
Next, we present self-supervised self-supervision (S4), which adds to DPL the capability to learn new self-supervision automatically.
arXiv Detail & Related papers (2021-07-27T04:25:56Z)
- Online Adversarial Purification based on Self-Supervision [6.821598757786515]
We present Self-supervised Online Adversarial Purification (SOAP), a novel defense strategy that uses a self-supervised loss to purify adversarial examples at test-time.
SOAP yields competitive robust accuracy against state-of-the-art adversarial training and purification methods.
To the best of our knowledge, our paper is the first that generalizes the idea of using self-supervised signals to perform online test-time purification.
arXiv Detail & Related papers (2021-01-23T00:19:52Z)
- Can Semantic Labels Assist Self-Supervised Visual Representation Learning? [194.1681088693248]
We present a new algorithm named Supervised Contrastive Adjustment in Neighborhood (SCAN).
In a series of downstream tasks, SCAN achieves superior performance compared to previous fully-supervised and self-supervised methods.
Our study reveals that semantic labels are useful in assisting self-supervised methods, opening a new direction for the community.
arXiv Detail & Related papers (2020-11-17T13:25:00Z)
- Induction and Exploitation of Subgoal Automata for Reinforcement Learning [75.55324974788475]
We present ISA, an approach for learning and exploiting subgoals in episodic reinforcement learning (RL) tasks.
ISA interleaves reinforcement learning with the induction of a subgoal automaton, an automaton whose edges are labeled by the task's subgoals.
A subgoal automaton also contains two special states: one indicating the successful completion of the task, and one indicating that the task has finished without succeeding (see the sketch after this list).
arXiv Detail & Related papers (2020-09-08T16:42:55Z)
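As a reading aid for the ISA entry above, here is a minimal sketch of a subgoal automaton under stated assumptions: the class, method, and state names are invented for illustration and do not follow the paper's notation.

```python
from dataclasses import dataclass, field
from typing import Dict

# Hypothetical sketch of a subgoal automaton as summarized above:
# edges labeled by subgoals, plus two special states (task completed
# successfully / task finished without succeeding). All names are
# illustrative assumptions, not the paper's notation.

@dataclass
class SubgoalAutomaton:
    initial: str = "q0"
    accepting: str = "q_acc"   # task completed successfully
    rejecting: str = "q_rej"   # task finished without succeeding
    # transitions[state][subgoal] -> next state
    transitions: Dict[str, Dict[str, str]] = field(default_factory=dict)

    def add_edge(self, src: str, subgoal: str, dst: str) -> None:
        self.transitions.setdefault(src, {})[subgoal] = dst

    def step(self, state: str, subgoal: str) -> str:
        # Remain in the current state if no edge fires for this subgoal.
        return self.transitions.get(state, {}).get(subgoal, state)

# Example: reach the key, then the door; touching lava fails the task.
auto = SubgoalAutomaton()
auto.add_edge("q0", "got_key", "q1")
auto.add_edge("q1", "at_door", "q_acc")
auto.add_edge("q0", "in_lava", "q_rej")
auto.add_edge("q1", "in_lava", "q_rej")
```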
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.