Multi-Domain Long-Tailed Learning by Augmenting Disentangled
Representations
- URL: http://arxiv.org/abs/2210.14358v3
- Date: Fri, 6 Oct 2023 17:34:25 GMT
- Title: Multi-Domain Long-Tailed Learning by Augmenting Disentangled
Representations
- Authors: Xinyu Yang, Huaxiu Yao, Allan Zhou, Chelsea Finn
- Abstract summary: There is an inescapable long-tailed class-imbalance issue in many real-world classification problems.
We study this multi-domain long-tailed learning problem and aim to produce a model that generalizes well across all classes and domains.
- The proposed method, TALLY, builds on a selective balanced sampling strategy and achieves this by mixing the semantic representation of one example with the domain-associated nuisances of another.
- Score: 80.76164484820818
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: There is an inescapable long-tailed class-imbalance issue in many real-world
classification problems. Current methods for addressing this problem only
consider scenarios where all examples come from the same distribution. However,
in many cases, there are multiple domains with distinct class imbalance. We
study this multi-domain long-tailed learning problem and aim to produce a model
that generalizes well across all classes and domains. Towards that goal, we
introduce TALLY, a method that addresses this multi-domain long-tailed learning
problem. Built upon a proposed selective balanced sampling strategy, TALLY
achieves this by mixing the semantic representation of one example with the
domain-associated nuisances of another, producing a new representation for use
as data augmentation. To improve the disentanglement of semantic
representations, TALLY further utilizes a domain-invariant class prototype that
averages out domain-specific effects. We evaluate TALLY on several benchmarks
and real-world datasets and find that it consistently outperforms other
state-of-the-art methods under both subpopulation shift and domain shift. Our code and
data have been released at https://github.com/huaxiuyao/TALLY.
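Below is a minimal sketch, in PyTorch, of the two ideas the abstract describes: domain-invariant class prototypes formed by averaging features across domains, and augmented representations that pair one example's semantics with another's domain-associated nuisance. All function names and the residual-based nuisance estimate are assumptions for illustration, not the actual TALLY API; the real implementation is in the repository linked above.

```python
# Illustrative sketch only; see https://github.com/huaxiuyao/TALLY for the
# actual implementation.
import torch


def domain_invariant_prototypes(feats: torch.Tensor,
                                labels: torch.Tensor,
                                num_classes: int) -> torch.Tensor:
    """Per-class mean features. Pooling over examples from all domains is
    what averages out domain-specific effects."""
    protos = torch.zeros(num_classes, feats.size(1))
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            protos[c] = feats[mask].mean(dim=0)
    return protos


def mix_semantic_and_nuisance(feat_a, label_a, feat_b, label_b, protos):
    """Build an augmented representation that keeps example a's semantics
    but carries example b's domain-associated nuisance.

    Assumption for illustration: the nuisance of b is its residual from
    the domain-invariant prototype of its own class."""
    nuisance_b = feat_b - protos[label_b]
    semantic_a = protos[label_a]             # disentangled semantic part
    return semantic_a + nuisance_b, label_a  # label follows the semantics


# Usage with random features. In practice, the pairs (a, b) would be drawn
# by the selective balanced sampling strategy, which balances over both
# classes and domains.
feats = torch.randn(32, 128)
labels = torch.randint(0, 10, (32,))
protos = domain_invariant_prototypes(feats, labels, num_classes=10)
mixed, y = mix_semantic_and_nuisance(feats[0], labels[0].item(),
                                     feats[1], labels[1].item(), protos)
```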
Related papers
- Composite Active Learning: Towards Multi-Domain Active Learning with Theoretical Guarantees [12.316113075760743]
Active learning (AL) aims to improve model performance within a fixed labeling budget by choosing the most informative data points to label.
We propose the first general method, dubbed composite active learning (CAL), for multi-domain AL.
Our theoretical analysis shows that our method achieves a better error bound than current AL methods.
arXiv Detail & Related papers (2024-02-03T10:22:18Z)
- SALUDA: Surface-based Automotive Lidar Unsupervised Domain Adaptation [62.889835139583965]
We introduce an unsupervised auxiliary task of learning an implicit underlying surface representation simultaneously on source and target data.
As both domains share the same latent representation, the model is forced to accommodate discrepancies between the two sources of data.
Our experiments demonstrate that our method achieves better performance than the current state of the art, both in real-to-real and synthetic-to-real scenarios.
arXiv Detail & Related papers (2023-04-06T17:36:23Z)
- Domain-General Crowd Counting in Unseen Scenarios [25.171343652312974]
Domain shift across crowd data severely hinders crowd counting models from generalizing to unseen scenarios.
We introduce a dynamic sub-domain division scheme which divides the source domain into multiple sub-domains.
In order to disentangle domain-invariant information from domain-specific information in image features, we design the domain-invariant and -specific crowd memory modules.
arXiv Detail & Related papers (2022-12-05T19:52:28Z)
- Domain Generalization via Selective Consistency Regularization for Time Series Classification [16.338176636365752]
Domain generalization methods aim to learn models robust to domain shift with data from a limited number of source domains.
We propose a novel representation learning methodology that selectively enforces prediction consistency between source domains.
arXiv Detail & Related papers (2022-06-16T01:57:35Z)
- On Multi-Domain Long-Tailed Recognition, Generalization and Beyond [29.629072761463863]
Multi-Domain Long-Tailed Recognition learns from multi-domain imbalanced data.
We propose BoDA, a theoretically grounded learning strategy that tracks the upper bound of transferability statistics.
As a byproduct, BoDA establishes new state-of-the-art on Domain Generalization benchmarks, improving generalization to unseen domains.
arXiv Detail & Related papers (2022-03-17T17:59:21Z)
- Dynamic Instance Domain Adaptation [109.53575039217094]
Most studies on unsupervised domain adaptation assume that each domain's training samples come with domain labels.
We develop a dynamic neural network with adaptive convolutional kernels to generate instance-adaptive residuals to adapt domain-agnostic deep features to each individual instance.
Our model, dubbed DIDA-Net, achieves state-of-the-art performance on several commonly used single-source and multi-source UDA datasets.
arXiv Detail & Related papers (2022-03-09T20:05:54Z)
- Domain Generalization via Gradient Surgery [5.38147998080533]
In real-life applications, machine learning models often face scenarios where there is a change in data distribution between training and test domains.
In this work, we characterize the conflicting gradients emerging in domain shift scenarios and devise novel gradient agreement strategies; a generic gradient-projection sketch appears after this list.
arXiv Detail & Related papers (2021-08-03T16:49:25Z)
- Learning to Combine: Knowledge Aggregation for Multi-Source Domain Adaptation [56.694330303488435]
We propose a Learning to Combine for Multi-Source Domain Adaptation (LtC-MSDA) framework.
In a nutshell, a knowledge graph is constructed on the prototypes of various domains to enable information propagation among semantically adjacent representations.
Our approach outperforms existing methods by a remarkable margin.
arXiv Detail & Related papers (2020-07-17T07:52:44Z)
- Not all domains are equally complex: Adaptive Multi-Domain Learning [98.25886129591974]
We propose an adaptive parameterization approach to deep neural networks for multi-domain learning.
The proposed approach performs on par with the original approach while substantially reducing the number of parameters.
arXiv Detail & Related papers (2020-03-25T17:16:00Z)
- Differential Treatment for Stuff and Things: A Simple Unsupervised Domain Adaptation Method for Semantic Segmentation [105.96860932833759]
State-of-the-art approaches prove that performing semantic-level alignment is helpful in tackling the domain shift issue.
We propose to improve the semantic-level alignment with different strategies for stuff regions and for things.
We further show that our method helps ease the domain shift issue by minimizing the distance between the most similar stuff and instance features across the source and target domains.
arXiv Detail & Related papers (2020-03-18T04:43:25Z)
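The gradient agreement idea in the Domain Generalization via Gradient Surgery entry above can be illustrated with the generic PCGrad-style projection for conflicting gradients. The paper devises its own agreement strategies, so this sketch shows only a common baseline rule, not the authors' method.

```python
# Generic PCGrad-style projection; an illustrative baseline, not the
# strategy proposed in the Gradient Surgery paper.
import torch


def project_out_conflict(g_i: torch.Tensor, g_j: torch.Tensor) -> torch.Tensor:
    """If per-domain gradients g_i and g_j conflict (negative inner
    product), remove from g_i its component along g_j; otherwise leave
    g_i unchanged."""
    dot = torch.dot(g_i, g_j)
    if dot < 0:
        g_i = g_i - (dot / g_j.norm().pow(2)) * g_j
    return g_i


# Toy example: two conflicting domain gradients on a flattened parameter
# vector. After projection, the update no longer opposes domain j.
g_dom_i = torch.tensor([1.0, -1.0])
g_dom_j = torch.tensor([-1.0, 0.0])
g_update = project_out_conflict(g_dom_i, g_dom_j)
print(torch.dot(g_update, g_dom_j))  # ~0: the conflict is removed
```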
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.