Self-Supervised Features Improve Open-World Learning
- URL: http://arxiv.org/abs/2102.07848v1
- Date: Mon, 15 Feb 2021 21:03:05 GMT
- Title: Self-Supervised Features Improve Open-World Learning
- Authors: Akshay Raj Dhamija, Touqeer Ahmad, Jonathan Schwan, Mohsen Jafarzadeh,
Chunchun Li, Terrance E. Boult
- Abstract summary: We present a unifying open-world framework combining Incremental Learning, Out-of-Distribution detection and Open-World learning.
Under an unsupervised feature representation, we categorize the problem of detecting unknowns as either Out-of-Label-space or Out-of-Distribution detection.
The incremental learning component of our pipeline is a zero-exemplar online model that performs comparably to the state-of-the-art on the ImageNet-100 protocol.
- Score: 13.880789191591088
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This is a position paper that addresses the problem of Open-World learning
while proposing that the underlying feature representation be learnt using
self-supervision. We also present a unifying open-world framework combining
three research dimensions that have been explored independently, i.e.,
Incremental Learning, Out-of-Distribution detection and Open-World learning. We
observe that supervised feature representations are limited and degenerate in
the Open-World setting, whereas unsupervised feature representations are native
to each of these three problem domains. Under an unsupervised feature
representation, we categorize the problem of detecting unknowns as either
Out-of-Label-space or Out-of-Distribution detection, depending on the data used
during system training versus system testing. The incremental learning
component of our pipeline is a zero-exemplar online model that performs
comparably to the state-of-the-art on the ImageNet-100 protocol and requires no
back-propagation or retraining of the underlying deep network. It further
outperforms the current state-of-the-art when simply using the same number of
exemplars as its counterparts. To evaluate our approach to Open-World learning,
we propose a new comprehensive protocol and evaluate performance in both
Out-of-Label-space and Out-of-Distribution settings at each incremental stage.
We also demonstrate the adaptability of our approach by showing how it can work
as a plug-in with any of the recently proposed self-supervised feature
representation methods.
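The abstract does not spell out the incremental learner, so the following is only a minimal sketch of the general idea it describes: a zero-exemplar online classifier that sits on top of frozen self-supervised features and updates without back-propagation or retraining. The nearest-class-mean rule used here is an assumption for illustration, not the paper's actual method.

```python
import numpy as np

class NearestClassMean:
    """Hypothetical zero-exemplar online classifier over frozen features.

    Stores only one running mean per class: no exemplars are kept and no
    gradients flow into the underlying deep network.
    """

    def __init__(self, dim: int):
        self.dim = dim
        self.means = {}   # class id -> running mean feature vector
        self.counts = {}  # class id -> number of samples seen so far

    def update(self, feature: np.ndarray, label: int) -> None:
        # Incremental (online) mean update; new classes can be added any time.
        if label not in self.means:
            self.means[label] = np.zeros(self.dim)
            self.counts[label] = 0
        self.counts[label] += 1
        self.means[label] += (feature - self.means[label]) / self.counts[label]

    def predict(self, feature: np.ndarray) -> int:
        # Assign to the nearest class mean. Thresholding this distance would
        # give a simple unknown detector (Out-of-Label-space rejection).
        labels = list(self.means)
        dists = [np.linalg.norm(feature - self.means[c]) for c in labels]
        return labels[int(np.argmin(dists))]
```

Because the update is a closed-form running mean, adding a new class at any incremental stage costs one dictionary entry, which is what makes the zero-exemplar, retraining-free setting possible.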
Related papers
- Uniting contrastive and generative learning for event sequence models [51.547576949425604]
This study investigates the integration of two self-supervised learning techniques: instance-wise contrastive learning and a generative approach based on restoring masked events in latent space.
Experiments conducted on several public datasets, focusing on sequence classification and next-event-type prediction, show that the integrated method achieves superior performance compared to either individual approach.
arXiv Detail & Related papers (2024-08-19T13:47:17Z)
- LoDisc: Learning Global-Local Discriminative Features for Self-Supervised Fine-Grained Visual Recognition [18.442966979622717]
We propose incorporating subtle local fine-grained feature learning into global self-supervised contrastive learning.
A novel pretext task called Local Discrimination (LoDisc) is proposed to explicitly steer the self-supervised model's focus towards local pivotal regions.
We show that the Local Discrimination pretext task effectively enhances fine-grained cues in important local regions, and that the global-local framework further refines the fine-grained feature representations of images.
arXiv Detail & Related papers (2024-03-06T21:36:38Z)
- Towards Unsupervised Representation Learning: Learning, Evaluating and Transferring Visual Representations [1.8130068086063336]
We contribute to the field of unsupervised (visual) representation learning from three perspectives.
We design unsupervised, backpropagation-free Convolutional Self-Organizing Neural Networks (CSNNs).
We build upon the widely used (non-)linear evaluation protocol to define pretext- and target-objective-independent metrics.
We contribute CARLANE, the first 3-way sim-to-real domain adaptation benchmark for 2D lane detection, together with a method based on self-supervised learning.
arXiv Detail & Related papers (2023-11-30T15:57:55Z)
- A Study of Forward-Forward Algorithm for Self-Supervised Learning [65.268245109828]
We study the performance of the forward-forward algorithm versus backpropagation for self-supervised representation learning.
Our main finding is that while the forward-forward algorithm performs comparably to backpropagation during (self-supervised) training, its transfer performance lags significantly behind in all the studied settings.
arXiv Detail & Related papers (2023-09-21T10:14:53Z)
- Deepfake Detection via Joint Unsupervised Reconstruction and Supervised Classification [25.84902508816679]
We introduce a novel approach for deepfake detection that considers the reconstruction and classification tasks simultaneously.
This method shares the information learned by one task with the other, a perspective that existing works rarely consider.
Our method achieves state-of-the-art performance on three commonly used datasets.
arXiv Detail & Related papers (2022-11-24T05:44:26Z)
- Adaptive Local-Component-aware Graph Convolutional Network for One-shot Skeleton-based Action Recognition [54.23513799338309]
We present an Adaptive Local-Component-aware Graph Convolutional Network for skeleton-based action recognition.
Our method provides a stronger representation than the global embedding and helps our model reach state-of-the-art performance.
arXiv Detail & Related papers (2022-09-21T02:33:07Z)
- Towards Open-World Feature Extrapolation: An Inductive Graph Learning Approach [80.8446673089281]
We propose a new learning paradigm based on graph representation and learning.
Our framework contains two modules: 1) a backbone network (e.g., feedforward neural nets) as a lower model that takes features as input and outputs predicted labels; 2) a graph neural network as an upper model that learns to extrapolate embeddings for new features via message passing over a feature-data graph built from observed data.
arXiv Detail & Related papers (2021-10-09T09:02:45Z)
- Crowdsourcing Learning as Domain Adaptation: A Case Study on Named Entity Recognition [19.379850806513232]
We take a different view in this work, regarding all crowdsourced annotations as gold-standard with respect to the individual annotators.
We find that crowdsourcing can be highly similar to domain adaptation, so recent advances in cross-domain methods can be applied almost directly to crowdsourcing.
We investigate both unsupervised and supervised crowdsourcing learning, assuming that no or only small-scale expert annotations are available.
arXiv Detail & Related papers (2021-05-31T14:11:08Z)
- Distribution Alignment: A Unified Framework for Long-tail Visual Recognition [52.36728157779307]
We propose a unified distribution alignment strategy for long-tail visual recognition.
We then introduce a generalized re-weighting method in the two-stage learning to balance the class prior.
Our approach achieves state-of-the-art results across all four recognition tasks with a simple and unified framework.
arXiv Detail & Related papers (2021-03-30T14:09:53Z)
- Exploiting Shared Representations for Personalized Federated Learning [54.65133770989836]
We propose a novel federated learning framework and algorithm for learning a shared data representation across clients and unique local heads for each client.
Our algorithm harnesses the distributed computational power across clients to perform many local updates with respect to the low-dimensional local parameters for every update of the representation.
This result is of interest beyond federated learning, applying to a broad class of problems in which we aim to learn a shared low-dimensional representation across data distributions.
arXiv Detail & Related papers (2021-02-14T05:36:25Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.