Generalized Invariant Matching Property via LASSO
- URL: http://arxiv.org/abs/2301.05975v1
- Date: Sat, 14 Jan 2023 21:09:30 GMT
- Title: Generalized Invariant Matching Property via LASSO
- Authors: Kang Du and Yu Xiang
- Abstract summary: In this work, we generalize the invariant matching property by formulating a high-dimensional problem with intrinsic sparsity.
We propose a more robust and computationally efficient algorithm by leveraging a variant of the Lasso.
- Score: 19.786769414376323
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning under distribution shifts is a challenging task. One principled
approach is to exploit the invariance principle via structural causal
models. However, the invariance principle is violated when the response is
intervened on, which makes this a difficult setting. In a recent work, the
invariant matching property was developed to shed light on this scenario and
has shown promising performance. In this work, we generalize the invariant
matching property by formulating a high-dimensional problem with intrinsic
sparsity. We propose a more robust and computationally efficient algorithm by
leveraging a variant of the Lasso, improving upon the existing algorithms.
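As a rough illustration of the ingredient the abstract names, the sketch below fits an ordinary Lasso to data pooled from two environments that share a sparse mechanism. This is only a minimal sketch under assumed conditions, not the authors' algorithm or their Lasso variant; the data-generating process, dimensions, and regularization strength are all hypothetical.

```python
# Minimal sketch, assuming a generic Lasso-based sparse regression; the
# abstract does not specify the authors' algorithm, so everything below
# (data-generating process, variable names, alpha) is hypothetical.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

def sample_env(n, shift):
    """One hypothetical environment sharing a sparse mechanism for y."""
    X = rng.normal(loc=shift, size=(n, 50))   # 50 candidate predictors
    beta = np.zeros(50)
    beta[:3] = [1.5, -2.0, 1.0]               # intrinsic sparsity: 3 active
    y = X @ beta + rng.normal(size=n)
    return X, y

# Pool two environments that differ in the distribution of X.
X1, y1 = sample_env(200, shift=0.0)
X2, y2 = sample_env(200, shift=1.0)
X, y = np.vstack([X1, X2]), np.concatenate([y1, y2])

model = Lasso(alpha=0.1).fit(X, y)
print("estimated support:", np.flatnonzero(model.coef_))  # ideally {0, 1, 2}
```

The L1 penalty drives most coefficients exactly to zero, so the estimated support can be read directly off `model.coef_`; in this toy setup it should recover the three active predictors.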
Related papers
- Unnatural Algorithms in Machine Learning [0.0]
We show that optimization algorithms with this property can be viewed as discrete approximations of natural gradient descent.
We introduce a simple method of introducing this naturality more generally and examine a number of popular machine learning training algorithms.
arXiv Detail & Related papers (2023-12-07T22:43:37Z)
- Domain Generalization In Robust Invariant Representation [10.132611239890345]
In this paper, we investigate the generalization of invariant representations on out-of-distribution data.
We show that the invariant model learns unstructured latent representations that are robust to distribution shifts.
arXiv Detail & Related papers (2023-04-07T00:58:30Z)
- Deep Neural Networks with Efficient Guaranteed Invariances [77.99182201815763]
We address the problem of improving the performance and in particular the sample complexity of deep neural networks.
Group-equivariant convolutions are a popular approach to obtain equivariant representations.
We propose a multi-stream architecture, where each stream is invariant to a different transformation.
arXiv Detail & Related papers (2023-03-02T20:44:45Z)
- Object Representations as Fixed Points: Training Iterative Refinement Algorithms with Implicit Differentiation [88.14365009076907]
Iterative refinement is a useful paradigm for representation learning.
We develop an implicit differentiation approach that improves the stability and tractability of training.
arXiv Detail & Related papers (2022-07-02T10:00:35Z)
- Invariant Causal Mechanisms through Distribution Matching [86.07327840293894]
In this work we provide a causal perspective and a new algorithm for learning invariant representations.
Empirically we show that this algorithm works well on a diverse set of tasks and in particular we observe state-of-the-art performance on domain generalization.
arXiv Detail & Related papers (2022-06-23T12:06:54Z)
- Improving the Sample-Complexity of Deep Classification Networks with Invariant Integration [77.99182201815763]
Leveraging prior knowledge on intraclass variance due to transformations is a powerful method to improve the sample complexity of deep neural networks.
We propose a novel monomial selection algorithm based on pruning methods to allow an application to more complex problems.
We demonstrate the improved sample complexity on the Rotated-MNIST, SVHN and CIFAR-10 datasets.
arXiv Detail & Related papers (2022-02-08T16:16:11Z)
- Conditional entropy minimization principle for learning domain invariant representation features [30.459247038765568]
In this paper, we propose a framework based on the conditional entropy minimization principle to filter out spurious invariant features (a toy sketch of this principle appears after this list).
We show that the proposed approach is closely related to the well-known Information Bottleneck framework.
arXiv Detail & Related papers (2022-01-25T17:02:12Z)
- A Variational Inference Approach to Inverse Problems with Gamma Hyperpriors [60.489902135153415]
This paper introduces a variational iterative alternating scheme for hierarchical inverse problems with gamma hyperpriors.
The proposed variational inference approach yields accurate reconstruction, provides meaningful uncertainty quantification, and is easy to implement.
arXiv Detail & Related papers (2021-11-26T06:33:29Z)
- Provably Strict Generalisation Benefit for Equivariant Models [1.332560004325655]
It is widely believed that engineering a model to be invariant/equivariant improves generalisation.
This paper provides the first provably non-zero improvement in generalisation for invariant/equivariant models.
arXiv Detail & Related papers (2021-02-20T12:47:32Z)
- Invariant Integration in Deep Convolutional Feature Space [77.99182201815763]
We show how to incorporate prior knowledge to a deep neural network architecture in a principled manner.
We report state-of-the-art performance on the Rotated-MNIST dataset.
arXiv Detail & Related papers (2020-04-20T09:45:43Z)
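The conditional entropy minimization entry above can be illustrated with a toy sketch. This is not the authors' framework: it only shows the underlying principle of scoring discrete features by H(Y | feature) and keeping those that make the response most predictable. All names and the data-generating process are hypothetical.

```python
# Toy illustration of conditional-entropy-based feature filtering, NOT the
# framework from the cited paper; the data below are synthetic.
import numpy as np

def conditional_entropy(y, x):
    """H(Y | X) in nats for integer-coded arrays y and x."""
    h = 0.0
    for v in np.unique(x):
        mask = x == v
        p_v = mask.mean()                      # P(X = v)
        counts = np.bincount(y[mask])
        p = counts[counts > 0] / mask.sum()    # P(Y = . | X = v)
        h -= p_v * (p * np.log(p)).sum()
    return h

rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=1000)              # binary response
informative = y ^ (rng.random(1000) < 0.1)     # noisy copy of y
spurious = rng.integers(0, 2, size=1000)       # independent of y

for name, feat in [("informative", informative), ("spurious", spurious)]:
    print(f"H(Y | {name}) = {conditional_entropy(y, feat):.3f}")
# The informative feature yields a much lower conditional entropy, so a
# conditional-entropy-minimization filter would keep it and drop the other.
```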
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.