Domain Generalization with Small Data
- URL: http://arxiv.org/abs/2402.06150v1
- Date: Fri, 9 Feb 2024 02:59:08 GMT
- Title: Domain Generalization with Small Data
- Authors: Kecheng Chen, Elena Gal, Hong Yan, and Haoliang Li
- Abstract summary: We learn a domain-invariant representation based on the probabilistic framework by mapping each data point into probabilistic embeddings.
Our proposed method marries the measurement of the distribution over distributions (i.e., the global perspective alignment) with the distribution-based contrastive semantic alignment.
- Score: 27.040070085669086
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this work, we propose to tackle the problem of domain generalization in
the context of \textit{insufficient samples}. Instead of extracting latent
feature embeddings based on deterministic models, we propose to learn a
domain-invariant representation based on the probabilistic framework by mapping
each data point into probabilistic embeddings. Specifically, we first extend
empirical maximum mean discrepancy (MMD) to a novel probabilistic MMD that can
measure the discrepancy between mixture distributions (i.e., source domains)
consisting of a series of latent distributions rather than latent points.
Moreover, instead of imposing the contrastive semantic alignment (CSA) loss
based on pairs of latent points, a novel probabilistic CSA loss encourages
positive probabilistic embedding pairs to be closer while pulling other
negative ones apart. Benefiting from the learned representation captured by
probabilistic models, our proposed method marries the measurement of the
\textit{distribution over distributions} (i.e., the global perspective
alignment) with the distribution-based contrastive semantic alignment (i.e., the
local perspective alignment). Extensive experimental results on three
challenging medical datasets show the effectiveness of our proposed method in
the context of insufficient data compared with state-of-the-art methods.
Related papers
- Reducing Semantic Ambiguity In Domain Adaptive Semantic Segmentation Via Probabilistic Prototypical Pixel Contrast [7.092718945468069]
Domain adaptation aims to reduce the model degradation on the target domain caused by the domain shift between the source and target domains.
Probabilistic prototypical pixel contrast (PPPC) is a universal adaptation framework that models each pixel embedding as a probability distribution.
PPPC not only helps to address ambiguity at the pixel level, yielding discriminative representations, but also brings significant improvements in both synthetic-to-real and day-to-night adaptation tasks.
arXiv Detail & Related papers (2024-09-27T08:25:03Z)
- Collaborative Heterogeneous Causal Inference Beyond Meta-analysis [68.4474531911361]
We propose a collaborative inverse propensity score estimator for causal inference with heterogeneous data.
Our method shows significant improvements over the methods based on meta-analysis when heterogeneity increases.
arXiv Detail & Related papers (2024-04-24T09:04:36Z)
- Uncertainty Quantification via Stable Distribution Propagation [60.065272548502]
We propose a new approach for propagating stable probability distributions through neural networks.
Our method is based on local linearization, which we show to be an optimal approximation in terms of total variation distance for the ReLU non-linearity.
arXiv Detail & Related papers (2024-02-13T09:40:19Z)
- Distributed Markov Chain Monte Carlo Sampling based on the Alternating Direction Method of Multipliers [143.6249073384419]
In this paper, we propose a distributed sampling scheme based on the alternating direction method of multipliers.
We provide both theoretical guarantees of our algorithm's convergence and experimental evidence of its superiority to the state-of-the-art.
In simulation, we deploy our algorithm on linear and logistic regression tasks and illustrate its fast convergence compared to existing gradient-based methods.
arXiv Detail & Related papers (2024-01-29T02:08:40Z)
- Anomaly Detection Under Uncertainty Using Distributionally Robust Optimization Approach [0.9217021281095907]
Anomaly detection is defined as the problem of finding data points that do not follow the patterns of the majority.
The one-class Support Vector Machines (SVM) method aims to find a decision boundary to distinguish between normal data points and anomalies.
A distributionally robust chance-constrained model is proposed in which the probability of misclassification is low.
arXiv Detail & Related papers (2023-12-03T06:13:22Z)
- Non-Linear Spectral Dimensionality Reduction Under Uncertainty [107.01839211235583]
We propose a new dimensionality reduction framework, called NGEU, which leverages uncertainty information and directly extends several traditional approaches.
We show that the proposed NGEU formulation exhibits a global closed-form solution, and we analyze, based on the Rademacher complexity, how the underlying uncertainties theoretically affect the generalization ability of the framework.
arXiv Detail & Related papers (2022-02-09T19:01:33Z)
- Marginalization in Bayesian Networks: Integrating Exact and Approximate Inference [0.0]
Missing data and hidden variables require calculating the marginal probability distribution of a subset of the variables.
We develop a divide-and-conquer approach using the graphical properties of Bayesian networks.
We present an efficient and scalable algorithm for estimating the marginal probability distribution for categorical variables.
arXiv Detail & Related papers (2021-12-16T21:49:52Z)
- Personalized Trajectory Prediction via Distribution Discrimination [78.69458579657189]
Trajectory prediction faces the dilemma of capturing the multi-modal nature of future dynamics.
We present a distribution discrimination (DisDis) method to predict personalized motion patterns.
Our method can be integrated with existing multi-modal predictive models as a plug-and-play module.
arXiv Detail & Related papers (2021-07-29T17:42:12Z)
- Learning Invariant Representations and Risks for Semi-supervised Domain Adaptation [109.73983088432364]
We propose the first method that aims to simultaneously learn invariant representations and risks under the setting of semi-supervised domain adaptation (Semi-DA).
We introduce the LIRR algorithm for jointly Learning Invariant Representations and Risks.
arXiv Detail & Related papers (2020-10-09T15:42:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.