Transfer Learning for Aided Target Recognition: Comparing Deep Learning
to other Machine Learning Approaches
- URL: http://arxiv.org/abs/2011.12762v1
- Date: Wed, 25 Nov 2020 14:25:49 GMT
- Title: Transfer Learning for Aided Target Recognition: Comparing Deep Learning
to other Machine Learning Approaches
- Authors: Samuel Rivera, Olga Mendoza-Schrock, Ashley Diehl
- Abstract summary: Aided target recognition (AiTR) is an important problem with applications across industry and defense.
Deep learning (DL) provides exceptional modeling flexibility and accuracy on recent real-world problems.
Our goal is to address this shortcoming by comparing transfer learning within a DL framework to other ML approaches across transfer tasks and datasets.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Aided target recognition (AiTR), the problem of classifying objects from
sensor data, is an important problem with applications across industry and
defense. While classification algorithms continue to improve, they often
require more training data than is available or they do not transfer well to
settings not represented in the training set. These problems are mitigated by
transfer learning (TL), where knowledge gained in a well-understood source
domain is transferred to a target domain of interest. In this context, the
target domain could represent a poorly-labeled dataset, a different sensor, or
an altogether new set of classes to identify.
While TL for classification has been an active area of machine learning (ML)
research for decades, transfer learning within a deep learning framework
remains a relatively new area of research. Although deep learning (DL) provides
exceptional modeling flexibility and accuracy on recent real-world problems,
open questions remain regarding how much transfer benefit is gained by using DL
versus other ML architectures. Our goal is to address this shortcoming by
comparing transfer learning within a DL framework to other ML approaches across
transfer tasks and datasets. Our main contributions are: 1) an empirical
analysis of DL and ML algorithms on several transfer tasks and domains
including gene expressions and satellite imagery, and 2) a discussion of the
limitations and assumptions of TL for aided target recognition -- both for DL
and ML in general. We close with a discussion of future directions for DL
transfer.
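The abstract's core setting lends itself to a short illustration. Below is a minimal sketch of transfer learning within a DL framework as described above: a backbone pretrained on a well-understood source domain is reused, and only a new classification head is trained on the target domain. The torchvision ResNet-18, its ImageNet weights, and the 10-class target task are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of DL transfer learning: reuse a source-domain
# backbone and fine-tune only a new head on the target domain.
# The ImageNet backbone and the 10-class target task are
# illustrative placeholders, not the paper's actual setup.
import torch
import torch.nn as nn
from torchvision import models

# Source-domain knowledge: a backbone pretrained on ImageNet.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the transferred weights so only target-specific
# parameters are learned from the (possibly small) target dataset.
for param in model.parameters():
    param.requires_grad = False

# Replace the source classifier with a head for the target classes.
num_target_classes = 10  # hypothetical target task
model.fc = nn.Linear(model.fc.in_features, num_target_classes)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One fine-tuning step on a target-domain mini-batch."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Freezing the backbone matches the low-data target regime the abstract describes; unfreezing later layers trades more target data for a tighter fit.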
Related papers
- Tabular Transfer Learning via Prompting LLMs [52.96022335067357]
We propose a novel framework, Prompt to Transfer (P2T), that utilizes unlabeled (or heterogeneous) source data with large language models (LLMs).
P2T identifies a column feature in a source dataset that is strongly correlated with a target task feature and uses it to create examples relevant to the target task, yielding pseudo-demonstrations for prompts.
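As a rough illustration of that idea, the sketch below picks the source column most correlated with a target-task feature and formats a few rows as pseudo-demonstrations for a prompt. The absolute Pearson correlation, the prompt template, and the helper name are assumptions for illustration, not the paper's exact procedure.

```python
# Hedged sketch of the P2T idea: use the source column most
# correlated with a target-task feature as a proxy label and turn
# source rows into pseudo-demonstrations for an LLM prompt.
# Assumes numeric columns; names and template are illustrative.
import pandas as pd

def build_pseudo_demonstrations(source: pd.DataFrame,
                                target_feature: pd.Series,
                                n_examples: int = 4) -> str:
    # Source column most strongly correlated with the target feature.
    proxy_col = source.corrwith(target_feature).abs().idxmax()

    # Format a few source rows as input -> label demonstrations.
    demos = []
    for _, row in source.head(n_examples).iterrows():
        features = ", ".join(f"{col}={row[col]}"
                             for col in source.columns if col != proxy_col)
        demos.append(f"Input: {features}\nLabel: {row[proxy_col]}")
    return "\n\n".join(demos)
```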
arXiv Detail & Related papers (2024-08-09T11:30:52Z)
- CDFSL-V: Cross-Domain Few-Shot Learning for Videos [58.37446811360741]
Few-shot video action recognition is an effective approach to recognizing new categories with only a few labeled examples.
Existing methods in video action recognition rely on large labeled datasets from the same domain.
We propose a novel cross-domain few-shot video action recognition method that leverages self-supervised learning and curriculum learning.
arXiv Detail & Related papers (2023-09-07T19:44:27Z)
- Many or Few Samples? Comparing Transfer, Contrastive and Meta-Learning in Encrypted Traffic Classification [68.19713459228369]
We compare transfer learning, meta-learning and contrastive learning against reference Machine Learning (ML) tree-based and monolithic DL models.
We show that (i) large datasets yield more general representations, and (ii) contrastive learning is the best methodology.
While tree-based ML models cannot handle large tasks but fit small tasks well, DL methods, by reusing learned representations, match tree-based performance even on small tasks.
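Since contrastive learning comes out ahead in this comparison, a minimal sketch of a standard contrastive objective (NT-Xent, of the SimCLR family) may be useful; the temperature and two-view batch layout are illustrative defaults, not the paper's configuration.

```python
# Sketch of the NT-Xent contrastive loss over two augmented views
# of the same batch; each sample's positive is its other view.
import torch
import torch.nn.functional as F

def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor,
                 temperature: float = 0.1) -> torch.Tensor:
    """z1, z2: (batch, dim) embeddings of two views of the same inputs."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)         # (2B, dim)
    sim = (z @ z.t()) / temperature        # pairwise cosine similarities
    sim.fill_diagonal_(float("-inf"))      # a sample is not its own positive
    b = z1.size(0)
    # Positive for row i is the other view of the same sample.
    targets = torch.cat([torch.arange(b) + b, torch.arange(b)])
    return F.cross_entropy(sim, targets)
```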
arXiv Detail & Related papers (2023-05-21T11:20:49Z)
- Deep Transfer Learning for Automatic Speech Recognition: Towards Better Generalization [3.6393183544320236]
Speech recognition has become an important challenge when using deep learning (DL).
It requires large-scale training datasets and high computational and storage resources.
Deep transfer learning (DTL) has been introduced to overcome these issues.
arXiv Detail & Related papers (2023-04-27T21:08:05Z)
- Maximizing Model Generalization for Machine Condition Monitoring with Self-Supervised Learning and Federated Learning [4.214064911004321]
Deep Learning can diagnose faults and assess machine health from raw condition monitoring data without manually designed statistical features.
Traditional supervised learning may struggle to learn compact, discriminative representations that generalize to unseen target domains.
This study proposes maximizing feature generality on the source domain and then applying TL via weight transfer to copy the model to the target domain.
arXiv Detail & Related papers (2023-04-27T17:57:54Z)
- Unsupervised Domain Adaptation on Person Re-Identification via Dual-level Asymmetric Mutual Learning [108.86940401125649]
This paper proposes a Dual-level Asymmetric Mutual Learning method (DAML) to learn discriminative representations from a broader knowledge scope with diverse embedding spaces.
The knowledge transfer between two networks is based on an asymmetric mutual learning manner.
Experiments on the Market-1501, CUHK-SYSU, and MSMT17 public datasets verify the superiority of DAML over state-of-the-art methods.
arXiv Detail & Related papers (2023-01-29T12:36:17Z)
- Deep Learning and Traffic Classification: Lessons learned from a commercial-grade dataset with hundreds of encrypted and zero-day applications [72.02908263225919]
We share our experience on a commercial-grade DL traffic classification engine.
We identify known applications from encrypted traffic, as well as unknown zero-day applications.
We propose a novel technique, tailored for DL models, that is significantly more accurate and light-weight than the state of the art.
arXiv Detail & Related papers (2021-04-07T15:21:22Z)
- Detecting Bias in Transfer Learning Approaches for Text Classification [3.968023038444605]
In a supervised learning setting, labels are always needed for the classification task.
In this work, we evaluate some existing transfer learning approaches on detecting the bias of imbalanced classes.
arXiv Detail & Related papers (2021-02-03T15:48:21Z)
- Towards Accurate Knowledge Transfer via Target-awareness Representation Disentanglement [56.40587594647692]
We propose a novel transfer learning algorithm, introducing the idea of Target-awareness REpresentation Disentanglement (TRED).
TRED disentangles the knowledge relevant to the target task from the original source model and uses it as a regularizer when fine-tuning the target model.
Experiments on various real-world datasets show that our method stably improves standard fine-tuning by more than 2% on average.
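As a rough sketch of fine-tuning with such a regularizer, the loss below augments the task loss with a penalty keeping the target model's features close to target-relevant source features. Here source_features stands in for the output of TRED's disentanglement step (not reproduced); the MSE penalty and the weight lam are illustrative choices.

```python
# Sketch of regularized fine-tuning in the spirit of TRED: task loss
# plus a penalty toward (assumed already-disentangled) source features.
import torch
import torch.nn.functional as F

def regularized_finetune_loss(logits: torch.Tensor,
                              labels: torch.Tensor,
                              target_features: torch.Tensor,
                              source_features: torch.Tensor,
                              lam: float = 0.1) -> torch.Tensor:
    task_loss = F.cross_entropy(logits, labels)
    # Stay close to the target-relevant knowledge from the source model.
    reg = F.mse_loss(target_features, source_features.detach())
    return task_loss + lam * reg
```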
arXiv Detail & Related papers (2020-10-16T17:45:08Z)
- Distant Transfer Learning via Deep Random Walk [7.957823585750222]
We study distant transfer learning by proposing a DeEp Random Walk basEd distaNt Transfer (DERWENT) method.
Based on sequences identified by the random walk technique on a data graph, the proposed DERWENT model enforces adjacent data points in a sequence to be similar.
Empirical studies on several benchmark datasets demonstrate that the proposed DERWENT algorithm yields state-of-the-art performance.
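A hedged sketch of the two ingredients: a random walk sampled on a data graph, and a loss pulling embeddings of adjacent walk points together. The unweighted adjacency matrix, the cosine-similarity objective, and the function names are simplified stand-ins for the paper's actual construction.

```python
# Sketch of a DERWENT-style objective: sample a walk on the data
# graph, then penalize dissimilarity between consecutive points.
import numpy as np
import torch
import torch.nn.functional as F

def random_walk(adj: np.ndarray, start: int, length: int) -> list:
    """Sample a walk on an unweighted adjacency matrix."""
    walk = [start]
    for _ in range(length - 1):
        neighbors = np.flatnonzero(adj[walk[-1]])
        if neighbors.size == 0:
            break
        walk.append(int(np.random.choice(neighbors)))
    return walk

def walk_similarity_loss(embeddings: torch.Tensor, walk: list) -> torch.Tensor:
    """Pull embeddings of consecutive walk points together."""
    if len(walk) < 2:
        return embeddings.new_zeros(())
    src = embeddings[walk[:-1]]
    dst = embeddings[walk[1:]]
    return (1.0 - F.cosine_similarity(src, dst)).mean()
```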
arXiv Detail & Related papers (2020-06-13T11:31:24Z)
This list is automatically generated from the titles and abstracts of the papers on this site.