Generative multitask learning mitigates target-causing confounding
- URL: http://arxiv.org/abs/2202.04136v1
- Date: Tue, 8 Feb 2022 20:42:14 GMT
- Title: Generative multitask learning mitigates target-causing confounding
- Authors: Taro Makino, Krzysztof Geras, Kyunghyun Cho
- Abstract summary: We propose a simple and scalable approach to causal representation learning for multitask learning.
The improvement comes from mitigating unobserved confounders that cause the targets, but not the input.
Our results on the Attributes of People and Taskonomy datasets reflect the conceptual improvement in robustness to prior probability shift.
- Score: 61.21582323566118
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a simple and scalable approach to causal representation learning
for multitask learning. Our approach requires minimal modification to existing
ML systems, and improves robustness to prior probability shift. The improvement
comes from mitigating unobserved confounders that cause the targets, but not
the input. We refer to them as target-causing confounders. These confounders
induce spurious dependencies between the input and targets. This poses a
problem for the conventional approach to multitask learning, due to its
assumption that the targets are conditionally independent given the input. Our
proposed approach takes into account the dependency between the targets in
order to alleviate target-causing confounding. All that is required in addition
to usual practice is to estimate the joint distribution of the targets to
switch from discriminative to generative classification, and to predict all
targets jointly. Our results on the Attributes of People and Taskonomy datasets
reflect the conceptual improvement in robustness to prior probability shift.
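In code, the recipe in the abstract amounts to reweighting ordinary discriminative outputs by an estimated joint prior over the targets and predicting all targets jointly. The sketch below is a minimal illustration under stated assumptions, not the authors' released implementation: it assumes per-task discriminative heads that output p(y_k | x), per-task marginals p(y_k) counted from training labels, and an empirical joint prior p(y_1, ..., y_K) over label combinations.

```python
import itertools
import numpy as np

def joint_generative_predict(cond_probs, marginals, joint_prior):
    """Score every target combination y = (y_1, ..., y_K) with
        log p(y | x) = log p(y) + sum_k [log p(y_k | x) - log p(y_k)] + const,
    i.e. convert each discriminative head's output into a likelihood ratio
    via Bayes' rule and recombine under the joint target prior, instead of
    assuming the targets are conditionally independent given x.

    cond_probs  -- list of K arrays; cond_probs[k][c] = p(y_k = c | x)
    marginals   -- list of K arrays; marginals[k][c] = p(y_k = c)
    joint_prior -- dict: tuple (c_1, ..., c_K) -> empirical p(y_1 = c_1, ..., y_K = c_K)
    """
    best_combo, best_score = None, -np.inf
    for combo in itertools.product(*(range(len(p)) for p in cond_probs)):
        prior = joint_prior.get(combo, 0.0)
        if prior == 0.0:
            continue  # combinations unseen in training are ruled out
        score = np.log(prior) + sum(
            np.log(cond_probs[k][c]) - np.log(marginals[k][c])
            for k, c in enumerate(combo)
        )
        if score > best_score:
            best_combo, best_score = combo, score
    return best_combo
```

Enumerating label combinations is exponential in the number of tasks, so this form is only practical for small task sets; the point is the scoring rule, which replaces the conditional-independence assumption with the joint target prior.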
Related papers
- Multi-Target Multiplicity: Flexibility and Fairness in Target Specification under Resource Constraints [76.84999501420938]
We introduce a conceptual and computational framework for assessing how the choice of target affects individuals' outcomes.
We show that the level of multiplicity that stems from target variable choice can be greater than that stemming from nearly-optimal models of a single target.
arXiv Detail & Related papers (2023-06-23T18:57:14Z)
- Generating Adversarial Examples with Task Oriented Multi-Objective Optimization [21.220906842166425]
Adversarial training is one of the most efficient methods to improve the model's robustness.
We propose Task Oriented MOO to address this issue.
Our principle is to only maintain the goal-achieved tasks, while spending more effort on improving the goal-unachieved tasks.
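Read literally, this principle admits a short sketch: objectives whose goals are met contribute no gradient (they are merely maintained), while unmet objectives keep their full gradient. The thresholds and the detach-based "maintain" rule below are illustrative assumptions, not the paper's exact formulation.

```python
import torch

def task_oriented_total(task_losses, goal_thresholds):
    """Combine per-task losses so optimization effort concentrates on
    goal-unachieved tasks (an illustrative reading of Task Oriented MOO).

    task_losses     -- list of scalar tensors, one differentiable loss per task
    goal_thresholds -- list of floats; task k counts as goal-achieved once
                       task_losses[k] <= goal_thresholds[k]
    """
    total = 0.0
    for loss, thresh in zip(task_losses, goal_thresholds):
        if loss.item() <= thresh:
            # Goal achieved: detach so this task sends no gradient this step;
            # it re-enters the objective automatically if it regresses.
            total = total + loss.detach()
        else:
            total = total + loss  # goal unachieved: full gradient
    return total
```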
arXiv Detail & Related papers (2023-04-26T01:30:02Z)
- TIDo: Source-free Task Incremental Learning in Non-stationary Environments [0.0]
Updating a model-based agent to learn new target tasks requires us to store past training data.
Few-shot task incremental learning methods overcome the limitation of labeled target datasets.
We propose a one-shot task incremental learning approach that can adapt to non-stationary source and target tasks.
arXiv Detail & Related papers (2023-01-28T02:19:45Z)
- Discrete Factorial Representations as an Abstraction for Goal Conditioned Reinforcement Learning [99.38163119531745]
We show that applying a discretizing bottleneck can improve performance in goal-conditioned RL setups.
We experimentally show that this improves the expected return on out-of-distribution goals, while still allowing goals with expressive structure to be specified.
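One standard way to realize a discretizing bottleneck is vector quantization. The module below is a generic sketch of that idea, not necessarily the paper's architecture: a continuous goal embedding is snapped to its nearest codebook entry, with a straight-through estimator so gradients still reach the encoder.

```python
import torch
import torch.nn as nn

class DiscretizingBottleneck(nn.Module):
    """Generic vector-quantization bottleneck for goal embeddings."""

    def __init__(self, num_codes: int, dim: int):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, dim)

    def forward(self, z):  # z: (batch, dim) continuous goal embedding
        dists = torch.cdist(z, self.codebook.weight)   # (batch, num_codes)
        idx = dists.argmin(dim=1)                      # nearest code index
        z_q = self.codebook(idx)                       # quantized embedding
        # Straight-through estimator: forward pass uses z_q, backward pass
        # treats quantization as the identity map.
        z_st = z + (z_q - z).detach()
        # VQ-VAE-style losses pull the codebook and encoder outputs together.
        vq_loss = ((z_q - z.detach()) ** 2).mean() \
                  + 0.25 * ((z - z_q.detach()) ** 2).mean()
        return z_st, idx, vq_loss
```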
arXiv Detail & Related papers (2022-11-01T03:31:43Z)
- Optimal Representations for Covariate Shift [18.136705088756138]
We introduce a simple variational objective whose optima are exactly the set of all representations on which risk minimizers are guaranteed to be robust.
Our objectives achieve state-of-the-art results on DomainBed, and give insights into the robustness of recent methods, such as CLIP.
arXiv Detail & Related papers (2021-12-31T21:02:24Z)
- Goal-Aware Cross-Entropy for Multi-Target Reinforcement Learning [15.33496710690063]
We propose a goal-aware cross-entropy (GACE) loss that can be utilized in a self-supervised way.
We then devise goal-discriminative attention networks (GDAN) which utilize the goal-relevant information to focus on the given instruction.
arXiv Detail & Related papers (2021-10-25T14:24:39Z)
- Adversarial Intrinsic Motivation for Reinforcement Learning [60.322878138199364]
We investigate whether the Wasserstein-1 distance between a policy's state visitation distribution and a target distribution can be utilized effectively for reinforcement learning tasks.
Our approach, termed Adversarial Intrinsic Motivation (AIM), estimates this Wasserstein-1 distance through its dual objective and uses it to compute a supplemental reward function.
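For reference, Wasserstein-1 has the Kantorovich dual form W1(p, q) = sup over 1-Lipschitz f of E_{s~q}[f(s)] - E_{s~p}[f(s)]. The sketch below estimates such a dual potential with a gradient penalty (a WGAN-GP-style choice made here for illustration; it is not claimed to be AIM's exact objective) and reads the potential back out as a supplemental reward.

```python
import torch
import torch.nn as nn

STATE_DIM = 4  # illustrative state dimensionality
potential = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(potential.parameters(), lr=1e-3)

def dual_ascent_step(policy_states, target_states, lip_coef=10.0):
    """One step on the penalized Kantorovich dual of W1 between the policy's
    state-visitation samples and target-distribution samples.
    Assumes the two batches have equal size."""
    opt.zero_grad()
    gap = potential(target_states).mean() - potential(policy_states).mean()
    # Gradient penalty keeps the potential approximately 1-Lipschitz.
    eps = torch.rand(policy_states.size(0), 1)
    mix = (eps * policy_states + (1 - eps) * target_states).requires_grad_(True)
    grad = torch.autograd.grad(potential(mix).sum(), mix, create_graph=True)[0]
    penalty = ((grad.norm(dim=1) - 1.0) ** 2).mean()
    (-(gap - lip_coef * penalty)).backward()  # ascend the penalized dual
    opt.step()
    return gap.item()

def supplemental_reward(states):
    """Higher potential marks states the dual 'prefers'; this can be added
    to the task reward as an intrinsic signal."""
    with torch.no_grad():
        return potential(states).squeeze(-1)
```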
arXiv Detail & Related papers (2021-05-27T17:51:34Z)
- Automatic Curriculum Learning through Value Disagreement [95.19299356298876]
Continually solving new, unsolved tasks is the key to learning diverse behaviors.
In the multi-task domain, where an agent needs to reach multiple goals, the choice of training goals can largely affect sample efficiency.
We propose setting up an automatic curriculum for goals that the agent needs to solve.
We evaluate our method across 13 multi-goal robotic tasks and 5 navigation tasks, and demonstrate performance gains over current state-of-the-art methods.
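Taking the title at face value, such a curriculum can be sketched as sampling goals where an ensemble of value functions disagrees most. The temperature and softmax weighting below are illustrative assumptions rather than the paper's exact procedure.

```python
import numpy as np

def sample_goals(candidate_goals, value_ensemble, num_goals, temp=1.0):
    """Sample training goals where an ensemble of value estimators disagrees
    most (a proxy for the frontier of the agent's competence).

    candidate_goals -- array of shape (N, goal_dim)
    value_ensemble  -- list of callables, each mapping goals to (N,) values
    """
    values = np.stack([v(candidate_goals) for v in value_ensemble])  # (E, N)
    disagreement = values.std(axis=0)                                # (N,)
    logits = disagreement / temp
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    idx = np.random.choice(len(candidate_goals), size=num_goals,
                           p=probs, replace=False)
    return candidate_goals[idx]
```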
arXiv Detail & Related papers (2020-06-17T03:58:25Z)
- Learning Diverse Representations for Fast Adaptation to Distribution Shift [78.83747601814669]
We present a method for learning multiple models, incorporating an objective that pressures each to learn a distinct way to solve the task.
We demonstrate our framework's ability to facilitate rapid adaptation to distribution shift.
arXiv Detail & Related papers (2020-06-12T12:23:50Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.