Transfer learning extensions for the probabilistic classification vector machine
- URL: http://arxiv.org/abs/2007.07090v1
- Date: Sat, 11 Jul 2020 08:35:10 GMT
- Title: Transfer learning extensions for the probabilistic classification vector machine
- Authors: Christoph Raab and Frank-Michael Schleif
- Abstract summary: We propose two transfer learning extensions integrated into the sparse and interpretable probabilistic classification vector machine.
They are compared to standard benchmarks in the field and show their relevance either by sparsity or performance improvements.
- Score: 1.6244541005112747
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Transfer learning is focused on the reuse of supervised learning models in a new context. Prominent applications can be found in robotics, image processing or web mining. In these fields, the learning scenarios are naturally changing but often remain related to each other, motivating the reuse of existing supervised models. Current transfer learning models are neither sparse nor interpretable. Sparsity is very desirable if the methods have to be used in technically limited environments, and interpretability is becoming more critical due to privacy regulations. In this work, we propose two transfer learning extensions integrated into the sparse and interpretable probabilistic classification vector machine. They are compared to standard benchmarks in the field and show their relevance either by sparsity or performance improvements.
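The two extensions themselves are not detailed in the abstract. As a loose illustration of the kind of preprocessing a transfer learning extension can perform before fitting a sparse classifier such as the PCVM, here is a minimal subspace-alignment sketch in the spirit of Fernando et al.; the function name, dimensions and data are illustrative assumptions, not the authors' method:

```python
import numpy as np

def subspace_alignment(Xs, Xt, d=10):
    """Project source and target data into a shared d-dimensional subspace."""
    Xs_c = Xs - Xs.mean(axis=0)
    Xt_c = Xt - Xt.mean(axis=0)
    _, _, Vs = np.linalg.svd(Xs_c, full_matrices=False)
    _, _, Vt = np.linalg.svd(Xt_c, full_matrices=False)
    Ps, Pt = Vs[:d].T, Vt[:d].T   # top-d principal directions, shape (features, d)
    M = Ps @ (Ps.T @ Pt)          # source basis aligned onto the target basis
    return Xs_c @ M, Xt_c @ Pt    # features a downstream sparse classifier could use

Xs = np.random.randn(100, 20)        # labelled source domain (synthetic)
Xt = np.random.randn(80, 20) + 1.0   # shifted, unlabelled target domain (synthetic)
Zs, Zt = subspace_alignment(Xs, Xt)
```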
Related papers
- Premonition: Using Generative Models to Preempt Future Data Changes in Continual Learning [63.850451635362425]
Continual learning requires a model to adapt to ongoing changes in the data distribution.
We show that the combination of a large language model and an image generation model can similarly provide useful premonitions.
We find that the backbone of our pre-trained networks can learn representations useful for the downstream continual learning problem.
arXiv Detail & Related papers (2024-03-12T06:29:54Z)
- Visual Affordance Prediction for Guiding Robot Exploration [56.17795036091848]
We develop an approach for learning visual affordances for guiding robot exploration.
We use a Transformer-based model to learn a conditional distribution in the latent embedding space of a VQ-VAE.
We show how the trained affordance model can be used for guiding exploration by acting as a goal-sampling distribution, during visual goal-conditioned policy learning in robotic manipulation.
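A hedged sketch of the goal-sampling idea, with stand-ins for the VQ-VAE codebook and the Transformer's conditional logits (both assumed here, not taken from the paper):

```python
import torch

# Illustrative goal sampling, not the paper's implementation.
codebook = torch.randn(512, 64)   # stand-in VQ-VAE codebook: 512 codes of dim 64
logits = torch.randn(512)         # stand-in for the affordance Transformer's output
dist = torch.distributions.Categorical(logits=logits)
goal_latents = codebook[dist.sample((4,))]   # 4 sampled goal embeddings for the policy
```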
arXiv Detail & Related papers (2023-05-28T17:53:09Z)
- Maximizing Model Generalization for Machine Condition Monitoring with Self-Supervised Learning and Federated Learning [4.214064911004321]
Deep Learning can diagnose faults and assess machine health from raw condition monitoring data without manually designed statistical features.
Traditional supervised learning may struggle to learn compact, discriminative representations that generalize to unseen target domains.
This study proposes maximizing feature generality on the source domain and then applying transfer learning (TL) via weight transfer to copy the model to the target domain.
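A minimal sketch of weight transfer in PyTorch, assuming a hypothetical two-layer network; the study's actual architecture and training objective are not given in this summary:

```python
import torch.nn as nn

def make_model(n_features=64, n_classes=4):
    # Hypothetical architecture; the study's actual network is not given here.
    return nn.Sequential(nn.Linear(n_features, 32), nn.ReLU(), nn.Linear(32, n_classes))

source = make_model()
# ... train `source` on the source domain (e.g. with a self-supervised objective) ...

target = make_model()
target.load_state_dict(source.state_dict())   # weight transfer: copy every parameter

for p in target[0].parameters():              # optionally freeze the first layer
    p.requires_grad_(False)
```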
arXiv Detail & Related papers (2023-04-27T17:57:54Z)
- Stochastic Coherence Over Attention Trajectory For Continuous Learning In Video Streams [64.82800502603138]
This paper proposes a novel neural-network-based approach to progressively and autonomously develop pixel-wise representations in a video stream.
The proposed method is based on a human-like attention mechanism that allows the agent to learn by observing what is moving in the attended locations.
Our experiments leverage 3D virtual environments and they show that the proposed agents can learn to distinguish objects just by observing the video stream.
arXiv Detail & Related papers (2022-04-26T09:52:31Z)
- Learning to Generate Novel Classes for Deep Metric Learning [24.048915378172012]
We introduce a new data augmentation approach that synthesizes novel classes and their embedding vectors.
We implement this idea by learning and exploiting a conditional generative model, which, given a class label and a noise, produces a random embedding vector of the class.
Our proposed generator allows the loss to use richer class relations by augmenting realistic and diverse classes, resulting in better generalization to unseen samples.
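A minimal sketch of such a conditional generator, assuming hypothetical dimensions for labels, noise and embeddings; the paper's actual generator and training loss are not reproduced here:

```python
import torch
import torch.nn as nn

class ConditionalEmbeddingGenerator(nn.Module):
    """Hypothetical: maps (class label, noise) to a synthetic embedding vector."""
    def __init__(self, n_classes=10, noise_dim=16, embed_dim=128):
        super().__init__()
        self.label_embed = nn.Embedding(n_classes, noise_dim)
        self.net = nn.Sequential(
            nn.Linear(noise_dim * 2, 256), nn.ReLU(),
            nn.Linear(256, embed_dim),
        )

    def forward(self, labels):
        z = torch.randn(labels.shape[0], self.label_embed.embedding_dim)  # noise
        return self.net(torch.cat([self.label_embed(labels), z], dim=1))

gen = ConditionalEmbeddingGenerator()
fake_embeddings = gen(torch.randint(0, 10, (8,)))   # 8 synthetic class embeddings
```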
arXiv Detail & Related papers (2022-01-04T06:55:19Z)
- Detecting Bias in Transfer Learning Approaches for Text Classification [3.968023038444605]
In a supervised learning setting, labels are always needed for the classification task.
In this work, we evaluate existing transfer learning approaches for detecting bias arising from imbalanced classes.
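One simple diagnostic for this kind of bias (assumed here, not taken from the paper) is per-class recall, which drops on exactly the minority classes a transferred model neglects; a minimal sketch:

```python
import numpy as np

def per_class_recall(y_true, y_pred):
    # Per-class recall exposes bias: a model that ignores minority classes
    # shows low recall on those classes even with high overall accuracy.
    return {c: np.mean(y_pred[y_true == c] == c) for c in np.unique(y_true)}

y_true = np.array([0] * 90 + [1] * 10)   # imbalanced ground truth
y_pred = np.zeros(100, dtype=int)        # a classifier that always predicts class 0
print(per_class_recall(y_true, y_pred))  # class 0 recall 1.0, class 1 recall 0.0
```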
arXiv Detail & Related papers (2021-02-03T15:48:21Z)
- What is being transferred in transfer learning? [51.6991244438545]
We show that when training from pre-trained weights, the model stays in the same basin of the loss landscape, and that different instances of such a model are similar in feature space and close in parameter space.
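The "close in parameter space" claim can be checked with a simple diagnostic (an assumption of this note, not the paper's protocol): the L2 distance between the flattened parameter vectors of two fine-tuned instances; a minimal sketch:

```python
import torch
import torch.nn as nn

def parameter_distance(model_a, model_b):
    # L2 distance between flattened parameter vectors; instances fine-tuned
    # from the same pre-trained weights tend to stay close by this measure.
    va = torch.cat([p.detach().flatten() for p in model_a.parameters()])
    vb = torch.cat([p.detach().flatten() for p in model_b.parameters()])
    return torch.norm(va - vb).item()

a, b = nn.Linear(10, 2), nn.Linear(10, 2)   # stand-ins for two fine-tuned models
print(parameter_distance(a, b))
```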
arXiv Detail & Related papers (2020-08-26T17:23:40Z)
- Do Adversarially Robust ImageNet Models Transfer Better? [102.09335596483695]
Adversarially robust models often perform better than their standard-trained counterparts when used for transfer learning.
Our results are consistent with (and in fact, add to) recent hypotheses stating that robustness leads to improved feature representations.
arXiv Detail & Related papers (2020-07-16T17:42:40Z)
- Explicit Domain Adaptation with Loosely Coupled Samples [85.9511585604837]
We propose a transfer learning framework, the core of which is learning an explicit mapping between domains.
Due to its interpretability, this is beneficial for safety-critical applications, like autonomous driving.
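A minimal sketch of the general idea of an explicit, inspectable mapping between domains, here reduced to a linear least-squares fit on hypothetical paired samples (the paper's actual framework is not reproduced):

```python
import numpy as np

# Hypothetical paired (loosely coupled) samples from two domains.
Xs = np.random.randn(200, 10)                        # source samples
W_true = np.random.randn(10, 10)
Xt = Xs @ W_true + 0.05 * np.random.randn(200, 10)   # corresponding target samples

# Learn an explicit linear mapping source -> target by least squares.
W, *_ = np.linalg.lstsq(Xs, Xt, rcond=None)

# W itself can be inspected, unlike an implicit feature-level adaptation;
# here it should recover W_true up to noise.
print(np.linalg.norm(W - W_true))
```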
arXiv Detail & Related papers (2020-04-24T21:23:45Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.