Adversarial Training for EM Classification Networks
- URL: http://arxiv.org/abs/2011.10615v1
- Date: Fri, 20 Nov 2020 20:11:58 GMT
- Title: Adversarial Training for EM Classification Networks
- Authors: Tom Grimes, Eric Church, William Pitts, Lynn Wood, Eva Brayfindley,
Luke Erikson, Mark Greaves
- Abstract summary: We present a novel variant of Domain Adversarial Networks.
New loss functions are defined for both forks of the DANN network.
It is possible to extend the concept of 'domain' to include arbitrary user-defined labels.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a novel variant of Domain Adversarial Networks with impactful
improvements to the loss functions, training paradigm, and hyperparameter
optimization. New loss functions are defined for both forks of the DANN
network, the label predictor and domain classifier, in order to facilitate more
rapid gradient descent, provide more seamless integration into modern
neural-network frameworks, and allow previously unavailable inferences into
network behavior.
Using these loss functions, it is possible to extend the concept of
'domain' to include arbitrary user-defined labels applicable to subsets of the
training data, the test data, or both. As such, the network can be operated in
either 'On the Fly' mode, where features provided by the feature extractor
that are indicative of differences between 'domain' labels in the training data
are removed, or 'Test Collection Informed' mode, where features indicative of
differences between 'domain' labels in the combined training and test data are
removed (without needing to know or provide test activity labels to the
network).
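A hedged sketch of how the two modes might assemble inputs for the domain classifier (the function and variable names are assumptions; the paper does not publish code):

```python
# Illustrative only: 'dom_*' are user-defined 'domain' labels; no class or
# activity labels for the test data are required in either mode.
import numpy as np


def domain_inputs(x_train, dom_train, x_test=None, dom_test=None):
    if x_test is None:
        # 'On the Fly': the domain classifier sees only the training data.
        return x_train, dom_train
    # 'Test Collection Informed': unlabeled test data joins the pool with
    # its own 'domain' labels.
    return (np.concatenate([x_train, x_test]),
            np.concatenate([dom_train, dom_test]))
```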
This work also draws heavily from previous work on Robust Training, which
draws training examples from an L_inf ball around the training data in
order to remove fragile features induced by random fluctuations in the data.
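Such robust training is typically realized with projected gradient ascent inside the L_inf ball; a minimal sketch, assuming `model` returns class logits and with illustrative values for the radius `eps`, step size, and iteration count:

```python
# Generic L_inf projected-gradient search for robust-training examples;
# hyperparameter values here are placeholders.
import torch


def linf_ball_examples(model, loss_fn, x, y, eps=0.1, alpha=0.02, steps=10):
    x_adv = x.detach().clone()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()        # ascend the loss
            x_adv = x + (x_adv - x).clamp(-eps, eps)   # project into the ball
    return x_adv.detach()
```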
On these networks we explore the process of hyperparameter optimization for
both the domain-adversarial and robust hyperparameters.
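As one concrete (assumed) instance of such a search, a simple random search over the domain-adversarial weight and the robust radius could look like this; `train_and_eval` is a hypothetical user-supplied closure, and the ranges are placeholders:

```python
# Illustrative random search; the paper's actual optimization procedure and
# search ranges are not specified here.
import random


def random_search(train_and_eval, n_trials=20):
    best_params, best_score = None, float("-inf")
    for _ in range(n_trials):
        lambd = 10 ** random.uniform(-2, 1)   # domain-adversarial weight
        eps = 10 ** random.uniform(-3, -1)    # robust L_inf radius
        score = train_and_eval(lambd=lambd, eps=eps)
        if score > best_score:
            best_params, best_score = (lambd, eps), score
    return best_params, best_score
```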
Finally, this network is applied to the construction of a binary classifier
used to identify the presence of the EM signal emitted by a turbopump. For
this example, the effect of the robust and domain-adversarial training is to
remove features indicative of the difference in background between instances
of operation of the device, providing highly discriminative features on which
to construct the classifier.
Related papers
- DDG-Net: Discriminability-Driven Graph Network for Weakly-supervised
Temporal Action Localization [40.521076622370806]
We propose Discriminability-Driven Graph Network (DDG-Net), which explicitly models ambiguous snippets and discriminative snippets with well-designed connections.
Experiments on THUMOS14 and ActivityNet1.2 benchmarks demonstrate the effectiveness of DDG-Net.
arXiv Detail & Related papers (2023-07-31T05:48:39Z) - Learning from Data with Noisy Labels Using Temporal Self-Ensemble [11.245833546360386]
Deep neural networks (DNNs) have an enormous capacity to memorize noisy labels.
Current state-of-the-art methods present a co-training scheme that trains dual networks using samples associated with small losses.
We propose a simple yet effective robust training scheme that operates by training only a single network.
arXiv Detail & Related papers (2022-07-21T08:16:31Z) - Random Feature Amplification: Feature Learning and Generalization in
Neural Networks [44.431266188350655]
We provide a characterization of the feature-learning process in two-layer ReLU networks trained by gradient descent.
We show that, although linear classifiers are no better than random guessing for the distribution we consider, two-layer ReLU networks trained by gradient descent achieve generalization error close to the label noise rate.
arXiv Detail & Related papers (2022-02-15T18:18:22Z) - Shuffle Augmentation of Features from Unlabeled Data for Unsupervised
Domain Adaptation [21.497019000131917]
Unsupervised Domain Adaptation (UDA) is a branch of transfer learning where labels for target samples are unavailable.
In this paper, we propose Shuffle Augmentation of Features (SAF) as a novel UDA framework.
SAF learns from the target samples, adaptively distills class-aware target features, and implicitly guides the classifier to find comprehensive class borders.
arXiv Detail & Related papers (2022-01-28T07:11:05Z) - Attentive Prototypes for Source-free Unsupervised Domain Adaptive 3D
Object Detection [85.11649974840758]
3D object detection networks tend to be biased towards the data they are trained on.
We propose a single-frame approach for source-free, unsupervised domain adaptation of lidar-based 3D object detectors.
arXiv Detail & Related papers (2021-11-30T18:42:42Z) - Stagewise Unsupervised Domain Adaptation with Adversarial Self-Training
for Road Segmentation of Remote Sensing Images [93.50240389540252]
Road segmentation from remote sensing images is a challenging task with a wide range of potential applications.
We propose a novel stagewise domain adaptation model called RoadDA to address the domain shift (DS) issue in this field.
Experiment results on two benchmarks demonstrate that RoadDA can efficiently reduce the domain gap and outperforms state-of-the-art methods.
arXiv Detail & Related papers (2021-08-28T09:29:14Z) - Contrastive Learning and Self-Training for Unsupervised Domain
Adaptation in Semantic Segmentation [71.77083272602525]
UDA attempts to provide efficient knowledge transfer from a labeled source domain to an unlabeled target domain.
We propose a contrastive learning approach that adapts category-wise centroids across domains.
We extend our method with self-training, where we use a memory-efficient temporal ensemble to generate consistent and reliable pseudo-labels.
arXiv Detail & Related papers (2021-05-05T11:55:53Z) - Re-energizing Domain Discriminator with Sample Relabeling for
Adversarial Domain Adaptation [88.86865069583149]
Unsupervised domain adaptation (UDA) methods exploit domain adversarial training to align features and reduce the domain gap.
In this work, we propose an efficient optimization strategy named Re-enforceable Adversarial Domain Adaptation (RADA).
RADA aims to re-energize the domain discriminator during training by using dynamic domain labels.
arXiv Detail & Related papers (2021-03-22T08:32:55Z) - Dual-Refinement: Joint Label and Feature Refinement for Unsupervised
Domain Adaptive Person Re-Identification [51.98150752331922]
Unsupervised domain adaptive (UDA) person re-identification (re-ID) is a challenging task due to the absence of labels for the target-domain data.
We propose a novel approach, called Dual-Refinement, that jointly refines pseudo labels at the off-line clustering phase and features at the on-line training phase.
Our method outperforms the state-of-the-art methods by a large margin.
arXiv Detail & Related papers (2020-12-26T07:35:35Z) - Self-Challenging Improves Cross-Domain Generalization [81.99554996975372]
Convolutional Neural Networks (CNNs) conduct image classification by activating dominant features that correlate with labels.
We introduce a simple training heuristic, Representation Self-Challenging (RSC), that significantly improves the generalization of CNNs to out-of-domain data.
RSC iteratively challenges the dominant features activated on the training data and forces the network to activate the remaining features that correlate with labels (see the sketch below).
arXiv Detail & Related papers (2020-07-05T21:42:26Z)
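For the RSC entry above, a loose sketch of the self-challenging step (shapes and the drop percentile are assumptions; `features` must be an intermediate activation that requires grad; this is not the authors' implementation):

```python
# Loose sketch of RSC-style feature muting during a training step.
import torch


def self_challenge(features, logits, labels, drop_pct=0.33):
    # Sensitivity of the true-class logit to each feature element.
    score = torch.autograd.grad(
        logits.gather(1, labels[:, None]).sum(), features,
        retain_graph=True)[0]
    # Zero out the most dominant (highest-scoring) fraction of features,
    # forcing the classifier to rely on the remaining ones.
    thresh = torch.quantile(score, 1.0 - drop_pct, dim=1, keepdim=True)
    return features * (score < thresh).float()
```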
This list is automatically generated from the titles and abstracts of the papers in this site.