Semi Supervised Meta Learning for Spatiotemporal Learning
- URL: http://arxiv.org/abs/2308.01916v1
- Date: Sun, 9 Jul 2023 04:09:58 GMT
- Title: Semi Supervised Meta Learning for Spatiotemporal Learning
- Authors: Faraz Waseem, Pratyush Muthukumar
- Abstract summary: We seek to understand the impact of applying meta-learning to existing representation learning architectures.
We utilize the Memory Augmented Neural Network (MANN) architecture to apply meta-learning to our framework.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We approached the goal of applying meta-learning to self-supervised masked autoencoders (MAE) for spatiotemporal learning in three steps. Broadly, we seek to understand the impact of applying meta-learning to existing state-of-the-art representation learning architectures. Thus, we test spatiotemporal learning through: a meta-learning architecture only, a representation learning architecture only, and an architecture applying representation learning alongside a meta-learning architecture. We utilize the Memory Augmented Neural Network (MANN) architecture to apply meta-learning to our framework. Specifically, we first experiment with applying a pre-trained MAE and fine-tuning on our small-scale spatiotemporal dataset for video reconstruction tasks. Next, we experiment with training an MAE encoder and applying a classification head for action classification tasks. Finally, we experiment with applying a pre-trained MAE and fine-tuning with a MANN backbone for action classification tasks.
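As a rough illustration of the second and third experiments above, the sketch below pairs a stand-in MAE-style video encoder with either a plain linear classification head or a simple memory-augmented readout for action classification. This is a minimal PyTorch sketch under assumed module names, dimensions, and a content-based memory read; it is not the authors' released implementation.

# Minimal PyTorch sketch (illustrative names and sizes, not the paper's code).
import torch
import torch.nn as nn

class VideoEncoderStub(nn.Module):
    # Stand-in for a pre-trained masked-autoencoder (MAE) video encoder.
    # In practice the weights would be loaded from a pre-trained checkpoint.
    def __init__(self, embed_dim=384, depth=4):
        super().__init__()
        # 3D "tube" patch embedding: (B, 3, T, H, W) -> token sequence (B, N, D)
        self.patch_embed = nn.Conv3d(3, embed_dim,
                                     kernel_size=(2, 16, 16), stride=(2, 16, 16))
        layer = nn.TransformerEncoderLayer(embed_dim, nhead=6, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, video):
        x = self.patch_embed(video)       # (B, D, T', H', W')
        x = x.flatten(2).transpose(1, 2)  # (B, N, D)
        return self.blocks(x)

class MemoryAugmentedHead(nn.Module):
    # MANN-flavoured readout (an assumption about how a memory-augmented head
    # could sit on top of the encoder): attend over a learned external memory
    # with a pooled clip query, then classify the read vector.
    def __init__(self, embed_dim=384, memory_slots=64, num_classes=101):
        super().__init__()
        self.memory = nn.Parameter(torch.randn(memory_slots, embed_dim) * 0.02)
        self.read = nn.MultiheadAttention(embed_dim, num_heads=6, batch_first=True)
        self.classifier = nn.Linear(embed_dim, num_classes)

    def forward(self, tokens):
        query = tokens.mean(dim=1, keepdim=True)                 # (B, 1, D) pooled query
        memory = self.memory.unsqueeze(0).expand(tokens.size(0), -1, -1)
        read, _ = self.read(query, memory, memory)               # content-based read
        return self.classifier(read.squeeze(1))

encoder = VideoEncoderStub()             # load pre-trained MAE weights here
head = MemoryAugmentedHead()             # or nn.Linear(384, num_classes) for the plain head
clip = torch.randn(2, 3, 16, 224, 224)   # (batch, channels, frames, height, width)
logits = head(encoder(clip))             # (2, 101) class logits

In the representation-learning-only setting, the memory-augmented head would be swapped for the plain linear head; in the joint setting, the MANN-style readout is fine-tuned on top of the frozen or partially unfrozen pre-trained encoder.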
Related papers
- Towards Few-Annotation Learning in Computer Vision: Application to Image Classification and Object Detection tasks [3.5353632767823506]
In this thesis, we develop theoretical, algorithmic and experimental contributions for Machine Learning with limited labels.
In our first contribution, we focus on bridging the gap between theory and practice for popular meta-learning algorithms used in few-shot classification.
To leverage unlabeled data when training object detectors based on the Transformer architecture, we propose both an unsupervised pretraining and a semi-supervised learning method.
arXiv Detail & Related papers (2023-11-08T18:50:04Z)
- Learning to Learn from APIs: Black-Box Data-Free Meta-Learning [95.41441357931397]
Data-free meta-learning (DFML) aims to enable efficient learning of new tasks by meta-learning from a collection of pre-trained models without access to the training data.
Existing DFML work can only meta-learn from (i) white-box and (ii) small-scale pre-trained models.
We propose a Bi-level Data-free Meta Knowledge Distillation (BiDf-MKD) framework to transfer more general meta knowledge from a collection of black-box APIs to one single model.
arXiv Detail & Related papers (2023-05-28T18:00:12Z)
- Learning from Temporal Spatial Cubism for Cross-Dataset Skeleton-based Action Recognition [88.34182299496074]
Action labels are available only on the source dataset and are unavailable on the target dataset during training.
We utilize a self-supervision scheme to reduce the domain shift between two skeleton-based action datasets.
By segmenting and permuting temporal segments or human body parts, we design two self-supervised learning classification tasks.
arXiv Detail & Related papers (2022-07-17T07:05:39Z)
- Incremental Learning with Differentiable Architecture and Forgetting Search [3.6868861317674524]
We show that leveraging NAS for incremental learning results in strong performance gains for classification tasks.
We evaluate our method on both RF signal and image classification tasks, and demonstrate that we can achieve up to a 10% performance increase over state-of-the-art methods.
arXiv Detail & Related papers (2022-05-19T21:47:26Z)
- Fast Few-Shot Classification by Few-Iteration Meta-Learning [173.32497326674775]
We introduce a fast optimization-based meta-learning method for few-shot classification.
Our strategy enables important aspects of the base learner objective to be learned during meta-training.
We perform a comprehensive experimental analysis, demonstrating the speed and effectiveness of our approach.
arXiv Detail & Related papers (2020-10-01T15:59:31Z)
- Concept Learners for Few-Shot Learning [76.08585517480807]
We propose COMET, a meta-learning method that improves generalization ability by learning to learn along human-interpretable concept dimensions.
We evaluate our model on few-shot tasks from diverse domains, including fine-grained image classification, document categorization and cell type annotation.
arXiv Detail & Related papers (2020-07-14T22:04:17Z)
- Rethinking Few-Shot Image Classification: a Good Embedding Is All You Need? [72.00712736992618]
We show that a simple baseline, learning a supervised or self-supervised representation on the meta-training set, outperforms state-of-the-art few-shot learning methods.
An additional boost can be achieved through the use of self-distillation.
We believe that our findings motivate a rethinking of few-shot image classification benchmarks and the associated role of meta-learning algorithms.
arXiv Detail & Related papers (2020-03-25T17:58:42Z)
- Following Instructions by Imagining and Reaching Visual Goals [8.19944635961041]
We present a novel framework for learning to perform temporally extended tasks using spatial reasoning.
Our framework operates on raw pixel images, assumes no prior linguistic or perceptual knowledge, and learns via intrinsic motivation.
We validate our method in two environments with a robot arm in a simulated interactive 3D environment.
arXiv Detail & Related papers (2020-01-25T23:26:56Z)
- Automated Relational Meta-learning [95.02216511235191]
We propose an automated relational meta-learning (ARML) framework that automatically extracts cross-task relations and constructs a meta-knowledge graph.
We conduct extensive experiments on 2D toy regression and few-shot image classification, and the results demonstrate the superiority of ARML over state-of-the-art baselines.
arXiv Detail & Related papers (2020-01-03T07:02:25Z)