Domain adaptation techniques for improved cross-domain study of galaxy
mergers
- URL: http://arxiv.org/abs/2011.03591v3
- Date: Fri, 13 Nov 2020 23:36:52 GMT
- Title: Domain adaptation techniques for improved cross-domain study of galaxy
mergers
- Authors: A. Ćiprijanović and D. Kafkes and S. Jenkins and K. Downey and G. N. Perdue and S. Madireddy and T. Johnston and B. Nord
- Abstract summary: In astronomy, neural networks are often trained on simulated data with the prospect of being applied to real observations.
Here we demonstrate the use of two techniques - Maximum Mean Discrepancy (MMD) and adversarial training with Domain Adversarial Neural Networks (DANN) - for the classification of distant galaxy mergers from the Illustris-1 simulation.
We show how the addition of either MMD or adversarial training greatly improves the performance of the classifier on the target domain when compared to conventional machine learning algorithms.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In astronomy, neural networks are often trained on simulated data with the
prospect of being applied to real observations. Unfortunately, simply training
a deep neural network on images from one domain does not guarantee satisfactory
performance on new images from a different domain. The ability to share
cross-domain knowledge is the main advantage of modern deep domain adaptation
techniques. Here we demonstrate the use of two techniques - Maximum Mean
Discrepancy (MMD) and adversarial training with Domain Adversarial Neural
Networks (DANN) - for the classification of distant galaxy mergers from the
Illustris-1 simulation, where the two domains presented differ only due to
inclusion of observational noise. We show how the addition of either MMD or
adversarial training greatly improves the performance of the classifier on the
target domain when compared to conventional machine learning algorithms,
thereby demonstrating great promise for their use in astronomy.
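The MMD technique mentioned in the abstract penalizes the distance between the source- and target-domain feature distributions, typically as an extra loss term during training. A minimal NumPy sketch of the standard squared-MMD estimate with an RBF kernel (the function names and the bandwidth `sigma` are illustrative choices, not taken from the paper):

```python
import numpy as np

def rbf_kernel(x, y, sigma=1.0):
    # Pairwise RBF (Gaussian) kernel between the rows of x and y.
    sq_dists = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq_dists / (2.0 * sigma ** 2))

def mmd2(source, target, sigma=1.0):
    # Biased estimate of squared MMD:
    #   E[k(s, s')] + E[k(t, t')] - 2 E[k(s, t)]
    # It is ~0 when the two feature distributions match and grows
    # as they diverge, so minimizing it aligns the domains.
    k_ss = rbf_kernel(source, source, sigma)
    k_tt = rbf_kernel(target, target, sigma)
    k_st = rbf_kernel(source, target, sigma)
    return k_ss.mean() + k_tt.mean() - 2.0 * k_st.mean()
```

In a domain-adaptation setup, `source` and `target` would be batches of network features from the simulated and noisy domains, and `mmd2` would be added to the classification loss with some weighting. In practice a sum of kernels over several bandwidths is often used instead of a single `sigma`.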
Related papers
- Adversarially Masked Video Consistency for Unsupervised Domain Adaptation [11.947273267877208]
We study the problem of unsupervised domain adaptation for egocentric videos.
We propose a transformer-based model to learn class-discriminative and domain-invariant feature representations.
arXiv Detail & Related papers (2024-03-24T17:13:46Z)
- Unsupervised Domain Transfer with Conditional Invertible Neural Networks [83.90291882730925]
We propose a domain transfer approach based on conditional invertible neural networks (cINNs)
Our method inherently guarantees cycle consistency through its invertible architecture, and network training can efficiently be conducted with maximum likelihood.
Our method enables the generation of realistic spectral data and outperforms the state of the art on two downstream classification tasks.
arXiv Detail & Related papers (2023-03-17T18:00:27Z)
- Domain Invariant Masked Autoencoders for Self-supervised Learning from Multi-domains [73.54897096088149]
We propose a Domain-invariant Masked AutoEncoder (DiMAE) for self-supervised learning from multi-domains.
The core idea is to augment the input image with style noise from different domains and then reconstruct the image from the embedding of the augmented image.
Experiments on PACS and DomainNet illustrate that DiMAE achieves considerable gains compared with recent state-of-the-art methods.
arXiv Detail & Related papers (2022-05-10T09:49:40Z)
- Heterogeneous Domain Adaptation with Adversarial Neural Representation Learning: Experiments on E-Commerce and Cybersecurity [7.748670137746999]
Heterogeneous Adversarial Neural Domain Adaptation (HANDA) is designed to maximize the transferability in heterogeneous environments.
Three experiments were conducted to evaluate the performance against the state-of-the-art HDA methods on major image and text e-commerce benchmarks.
arXiv Detail & Related papers (2022-05-05T16:57:36Z)
- Domain-Invariant Proposals based on a Balanced Domain Classifier for Object Detection [8.583307102907295]
Object recognition from images aims to automatically find objects of interest and return their category and location information.
Benefiting from research on deep learning, such as convolutional neural networks (CNNs) and generative adversarial networks, performance in this field has improved significantly.
However, mismatched distributions, i.e., domain shifts, lead to a significant performance drop.
arXiv Detail & Related papers (2022-02-12T00:21:27Z)
- Contrastive Learning and Self-Training for Unsupervised Domain Adaptation in Semantic Segmentation [71.77083272602525]
UDA attempts to provide efficient knowledge transfer from a labeled source domain to an unlabeled target domain.
We propose a contrastive learning approach that adapts category-wise centroids across domains.
We extend our method with self-training, where we use a memory-efficient temporal ensemble to generate consistent and reliable pseudo-labels.
arXiv Detail & Related papers (2021-05-05T11:55:53Z)
- Boosting Binary Masks for Multi-Domain Learning through Affine Transformations [49.25451497933657]
The goal of multi-domain learning is to produce a single model performing a task in all the domains together.
Recent works showed how we can address this problem by masking the internal weights of a given original conv-net through learned binary variables.
We provide a general formulation of binary mask based models for multi-domain learning by affine transformations of the original network parameters.
arXiv Detail & Related papers (2021-03-25T14:54:37Z)
- DeepMerge II: Building Robust Deep Learning Algorithms for Merging Galaxy Identification Across Domains [0.0]
In astronomy, neural networks are often trained on simulation data with the prospect of being used on telescope observations.
We show that the addition of each domain adaptation technique improves the performance of a classifier when compared to conventional deep learning algorithms.
We demonstrate this on two examples: between two Illustris-1 simulated datasets of distant merging galaxies, and between Illustris-1 simulated data of nearby merging galaxies and observed data from the Sloan Digital Sky Survey.
arXiv Detail & Related papers (2021-03-02T00:24:10Z)
- Unsupervised Cross-domain Image Classification by Distance Metric Guided Feature Alignment [11.74643883335152]
Unsupervised domain adaptation is a promising avenue which transfers knowledge from a source domain to a target domain.
We propose distance metric guided feature alignment (MetFA) to extract discriminative as well as domain-invariant features on both source and target domains.
Our model integrates class distribution alignment to transfer semantic knowledge from a source domain to a target domain.
arXiv Detail & Related papers (2020-08-19T13:36:57Z)
- Adversarial Bipartite Graph Learning for Video Domain Adaptation [50.68420708387015]
Domain adaptation techniques, which focus on adapting models between distributionally different domains, are rarely explored in the video recognition area.
Recent works on visual domain adaptation that leverage adversarial learning to unify source and target video representations are not highly effective on videos.
This paper proposes an Adversarial Bipartite Graph (ABG) learning framework which directly models the source-target interactions.
arXiv Detail & Related papers (2020-07-31T03:48:41Z)
- Domain2Vec: Domain Embedding for Unsupervised Domain Adaptation [56.94873619509414]
Conventional unsupervised domain adaptation studies the knowledge transfer between a limited number of domains.
We propose a novel Domain2Vec model to provide vectorial representations of visual domains based on joint learning of feature disentanglement and Gram matrix.
We demonstrate that our embedding is capable of predicting domain similarities that match our intuition about visual relations between different domains.
arXiv Detail & Related papers (2020-07-17T22:05:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.