Self-supervised remote sensing feature learning: Learning Paradigms,
Challenges, and Future Works
- URL: http://arxiv.org/abs/2211.08129v1
- Date: Tue, 15 Nov 2022 13:32:22 GMT
- Title: Self-supervised remote sensing feature learning: Learning Paradigms,
Challenges, and Future Works
- Authors: Chao Tao, Ji Qi, Mingning Guo, Qing Zhu, Haifeng Li
- Abstract summary: This paper analyzes and compares three feature learning paradigms: unsupervised feature learning (USFL), supervised feature learning (SFL), and self-supervised feature learning (SSFL).
Under this unified framework, we analyze the advantages of SSFL over the other two learning paradigms in RSI understanding tasks.
We analyze the effect of SSFL signals and pre-training data on the learned features to provide insights for improving RSI feature learning.
- Score: 9.36487195178422
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning has achieved great success in learning features from massive
remote sensing images (RSIs). To better understand the connection between
feature learning paradigms (e.g., unsupervised feature learning (USFL),
supervised feature learning (SFL), and self-supervised feature learning
(SSFL)), this paper analyzes and compares them from the perspective of feature
learning signals, and gives a unified feature learning framework. Under this
unified framework, we analyze the advantages of SSFL over the other two
learning paradigms in RSI understanding tasks and give a comprehensive review
of the existing SSFL work in RS, including the pre-training dataset,
self-supervised feature learning signals, and the evaluation methods. We
further analyze the effect of SSFL signals and pre-training data on the learned
features to provide insights for improving RSI feature learning. Finally,
we briefly discuss some open problems and possible research directions.
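To make the "feature learning signal" perspective concrete, the toy PyTorch sketch below contrasts the signals the three paradigms optimize: SFL draws its loss from human labels, SSFL from relations the data defines over itself (here, agreement between two augmented views via an InfoNCE-style loss), and USFL from reconstruction of the input. This is a minimal illustrative sketch under those assumptions, not code from the paper.

```python
import torch
import torch.nn.functional as F

def sfl_signal(features, labels, classifier):
    """Supervised signal: the loss is driven by human-annotated labels."""
    return F.cross_entropy(classifier(features), labels)

def ssfl_signal(feats_view1, feats_view2, temperature=0.1):
    """Self-supervised signal: agreement between features of two augmented
    views of the same images (InfoNCE-style); no labels are involved."""
    z1 = F.normalize(feats_view1, dim=1)
    z2 = F.normalize(feats_view2, dim=1)
    logits = z1 @ z2.t() / temperature    # similarity of every cross-view pair
    targets = torch.arange(z1.size(0))    # matching views lie on the diagonal
    return F.cross_entropy(logits, targets)

def usfl_signal(features, decoder, images):
    """Unsupervised signal: reconstruct the input, as in an autoencoder."""
    return F.mse_loss(decoder(features), images)
```

All three losses consume the same encoder features; only the source of the supervision differs, which is the sense in which a unified framework can compare the paradigms.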
Related papers
- Language-guided Skill Learning with Temporal Variational Inference [38.733622157088035]
We present an algorithm for skill discovery from expert demonstrations.
Our results demonstrate that agents equipped with our method are able to discover skills that help accelerate learning.
arXiv Detail & Related papers (2024-02-26T07:19:23Z) - C-ICL: Contrastive In-context Learning for Information Extraction [54.39470114243744]
c-ICL is a novel few-shot technique that leverages both correct and incorrect sample constructions to create in-context learning demonstrations.
Our experiments on various datasets indicate that c-ICL outperforms previous few-shot in-context learning methods.
arXiv Detail & Related papers (2024-02-17T11:28:08Z) - Empowering Private Tutoring by Chaining Large Language Models [87.76985829144834]
This work explores the development of a full-fledged intelligent tutoring system powered by state-of-the-art large language models (LLMs).
The system is divided into three interconnected core processes: interaction, reflection, and reaction.
Each process is implemented by chaining LLM-powered tools along with dynamically updated memory modules.
arXiv Detail & Related papers (2023-09-15T02:42:03Z) - Functional Knowledge Transfer with Self-supervised Representation
Learning [11.566644244783305]
This work investigates the previously unexplored use of self-supervised representation learning for functional knowledge transfer.
Functional knowledge transfer is achieved by jointly optimizing a self-supervised pseudo task and a supervised learning task; a minimal sketch of this joint objective appears after this list.
arXiv Detail & Related papers (2023-03-12T21:14:59Z) - A Domain-Agnostic Approach for Characterization of Lifelong Learning
Systems [128.63953314853327]
"Lifelong Learning" systems are capable of 1) Continuous Learning, 2) Transfer and Adaptation, and 3) Scalability.
We show that this suite of metrics can inform the development of varied and complex Lifelong Learning systems.
arXiv Detail & Related papers (2023-01-18T21:58:54Z) - Feature Forgetting in Continual Representation Learning [48.89340526235304]
Representations do not suffer from "catastrophic forgetting" even in plain continual learning, but little more is known about their characteristics.
We devise a protocol for evaluating representations in continual learning, and then use it to present an overview of the basic trends of continual representation learning.
To study the feature forgetting problem, we create a synthetic dataset to identify and visualize the prevalence of feature forgetting in neural networks.
arXiv Detail & Related papers (2022-05-26T13:38:56Z) - Contrastive Learning with Boosted Memorization [36.957895270908324]
Self-supervised learning has achieved great success in the representation learning of visual and textual data.
Recent attempts at self-supervised long-tailed learning rebalance from either the loss perspective or the model perspective.
We propose a novel Boosted Contrastive Learning (BCL) method to enhance long-tailed learning in the label-unaware context.
arXiv Detail & Related papers (2022-05-25T11:54:22Z) - DMCNet: Diversified Model Combination Network for Understanding
Engagement from Video Screengrabs [0.4397520291340695]
Engagement plays a major role in developing intelligent educational interfaces.
Non-deep learning models are based on combinations of popular algorithms such as Histogram of Oriented Gradients (HOG), Support Vector Machine (SVM), Scale Invariant Feature Transform (SIFT), and Speeded Up Robust Features (SURF).
The deep learning methods include Densely Connected Convolutional Networks (DenseNet-121), Residual Network (ResNet-18), and MobileNetV1.
arXiv Detail & Related papers (2022-04-13T15:24:38Z) - On Exploring Pose Estimation as an Auxiliary Learning Task for
Visible-Infrared Person Re-identification [66.58450185833479]
In this paper, we exploit Pose Estimation as an auxiliary learning task to assist the VI-ReID task in an end-to-end framework.
By jointly training these two tasks in a mutually beneficial manner, our model learns higher quality modality-shared and ID-related features.
Experimental results on two benchmark VI-ReID datasets show that the proposed method consistently improves state-of-the-art methods by significant margins.
arXiv Detail & Related papers (2022-01-11T09:44:00Z) - RoFL: Attestable Robustness for Secure Federated Learning [59.63865074749391]
Federated Learning allows a large number of clients to train a joint model without the need to share their private data.
To ensure the confidentiality of the client updates, Federated Learning systems employ secure aggregation; a toy sketch of one masking-based scheme appears after this list.
We present RoFL, a secure Federated Learning system that improves robustness against malicious clients.
arXiv Detail & Related papers (2021-07-07T15:42:49Z) - Toward Understanding the Feature Learning Process of Self-supervised
Contrastive Learning [43.504548777955854]
We study how contrastive learning learns the feature representations for neural networks by analyzing its feature learning process.
We prove that contrastive learning with ReLU networks learns the desired sparse features if proper augmentations are adopted.
arXiv Detail & Related papers (2021-05-31T16:42:09Z)
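Two of the entries above describe mechanisms compactly enough to sketch. First, the joint optimization used for functional knowledge transfer (and, similarly, for pose estimation as an auxiliary task): one shared encoder feeds a supervised head and a self-supervised pseudo-task head, and the two losses are summed. The sketch below is hypothetical; the rotation-prediction pseudo task, the network shapes, and the weight lam are illustrative assumptions, not the papers' actual code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class JointModel(nn.Module):
    def __init__(self, num_classes=10, feat_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Flatten(),
                                     nn.Linear(3 * 32 * 32, feat_dim),
                                     nn.ReLU())
        self.cls_head = nn.Linear(feat_dim, num_classes)  # supervised task
        self.rot_head = nn.Linear(feat_dim, 4)            # pseudo task: rotation

def joint_loss(model, images, labels, lam=0.5):
    # Supervised signal on the original images.
    sup = F.cross_entropy(model.cls_head(model.encoder(images)), labels)
    # Self-supervised signal: predict which multiple of 90 degrees was applied.
    k = torch.randint(0, 4, (images.size(0),))
    rotated = torch.stack([torch.rot90(img, int(r), dims=(1, 2))
                           for img, r in zip(images, k)])
    ssl = F.cross_entropy(model.rot_head(model.encoder(rotated)), k)
    return sup + lam * ssl  # both tasks update the shared encoder

# Toy usage with random data:
model = JointModel()
loss = joint_loss(model, torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,)))
loss.backward()
```

Second, the masking-based secure aggregation mentioned in the RoFL entry: each pair of clients shares a random mask that one adds and the other subtracts, so the masks cancel in the sum and the server learns only the aggregate update. This toy NumPy sketch (in the spirit of pairwise-masking schemes) omits key agreement, dropout handling, and RoFL's robustness checks.

```python
import numpy as np

rng = np.random.default_rng(0)
n_clients, dim = 3, 4
updates = {i: rng.normal(size=dim) for i in range(n_clients)}  # private updates

# Each pair (i, j) with i < j shares a mask: client i adds it, client j subtracts it.
masks = {(i, j): rng.normal(size=dim)
         for i in range(n_clients) for j in range(i + 1, n_clients)}

def masked_update(i):
    m = updates[i].copy()
    for (a, b), mask in masks.items():
        if a == i:
            m += mask
        elif b == i:
            m -= mask
    return m

# Pairwise masks cancel, so the server recovers only the sum of the updates.
aggregate = sum(masked_update(i) for i in range(n_clients))
assert np.allclose(aggregate, sum(updates.values()))
```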