Non-Stationary Representation Learning in Sequential Linear Bandits
- URL: http://arxiv.org/abs/2201.04805v1
- Date: Thu, 13 Jan 2022 06:13:03 GMT
- Title: Non-Stationary Representation Learning in Sequential Linear Bandits
- Authors: Yuzhen Qin, Tommaso Menara, Samet Oymak, ShiNung Ching, and Fabio
Pasqualetti
- Abstract summary: We study representation learning for multi-task decision-making in non-stationary environments.
We propose an online algorithm that facilitates efficient decision-making by learning and transferring non-stationary representations in an adaptive fashion.
- Score: 22.16801879707937
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we study representation learning for multi-task
decision-making in non-stationary environments. We consider the framework of
sequential linear bandits, where the agent performs a series of tasks drawn
from distinct sets associated with different environments. The embeddings of
tasks in each set share a low-dimensional feature extractor called
representation, and representations are different across sets. We propose an
online algorithm that facilitates efficient decision-making by learning and
transferring non-stationary representations in an adaptive fashion. We prove
that our algorithm significantly outperforms the existing ones that treat tasks
independently. We also conduct experiments using both synthetic and real data
to validate our theoretical insights and demonstrate the efficacy of our
algorithm.
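The shared low-dimensional structure described in the abstract can be pictured numerically: every task parameter in a set is the product of a common feature extractor and a small task-specific weight vector, so the joint parameter matrix is low-rank. The sketch below is an illustration under assumed dimensions and names, not the paper's notation or algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, n_tasks = 20, 3, 5  # ambient dim, representation dim, tasks per set (illustrative)

# Shared representation: a d x k matrix with orthonormal columns.
B, _ = np.linalg.qr(rng.standard_normal((d, k)))

# Each task's parameter lies in the column span of B: theta_i = B @ w_i.
W = rng.standard_normal((k, n_tasks))
Theta = B @ W  # d x n_tasks joint parameter matrix, rank at most k

def reward(x, i, noise_sd=0.1):
    """Noisy linear-bandit reward for action x in task i."""
    return x @ Theta[:, i] + noise_sd * rng.standard_normal()

# Rank is at most k << d: the low-rank structure a representation-learning
# algorithm can exploit instead of estimating each theta_i independently.
print(np.linalg.matrix_rank(Theta))
```

Because the tasks jointly need only about d*k + k*n_tasks parameters rather than d*n_tasks, estimating the shared representation once and transferring it across tasks is what yields the regret improvement over treating tasks independently.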
Related papers
- Leveraging sparse and shared feature activations for disentangled
representation learning [112.22699167017471]
We propose to leverage knowledge extracted from a diversified set of supervised tasks to learn a common disentangled representation.
We validate our approach on six real world distribution shift benchmarks, and different data modalities.
arXiv Detail & Related papers (2023-04-17T01:33:24Z)
- Multi-Task Self-Supervised Time-Series Representation Learning [3.31490164885582]

Time-series representation learning can extract representations from data with temporal dynamics and sparse labels.
We propose a new time-series representation learning method by combining the advantages of self-supervised tasks.
We evaluate the proposed framework on three downstream tasks: time-series classification, forecasting, and anomaly detection.
arXiv Detail & Related papers (2023-03-02T07:44:06Z)
- Multi-task Representation Learning for Pure Exploration in Linear Bandits [34.67303292713379]
We study multi-task representation learning for best arm identification in linear bandits (RepBAI-LB) and best policy identification in contextual linear bandits (RepBPI-CLB).
In these two problems, all tasks share a common low-dimensional linear representation, and our goal is to leverage this feature to accelerate the best arm (policy) identification process for all tasks.
We show that by learning the common representation among tasks, our sample complexity is significantly better than that of the naive approach which solves tasks independently.
arXiv Detail & Related papers (2023-02-09T05:14:48Z)
- Multi-task Bias-Variance Trade-off Through Functional Constraints [102.64082402388192]
Multi-task learning aims to acquire a set of functions that perform well for diverse tasks.
In this paper we draw intuition from the two extreme learning scenarios -- a single function for all tasks, and a task-specific function that ignores the other tasks.
We introduce a constrained learning formulation that enforces domain specific solutions to a central function.
arXiv Detail & Related papers (2022-10-27T16:06:47Z)
- Active Multi-Task Representation Learning [50.13453053304159]
We give the first formal study on resource task sampling by leveraging the techniques from active learning.
We propose an algorithm that iteratively estimates the relevance of each source task to the target task and samples from each source task based on the estimated relevance.
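The estimate-relevance-then-sample loop this summary describes can be sketched with a stand-in relevance score. Here clipped cosine similarity between current parameter estimates plays that role; the similarity proxy, dimensions, and names are assumptions for illustration, not the paper's estimator.

```python
import numpy as np

rng = np.random.default_rng(1)
n_src, d = 4, 6  # number of source tasks and feature dimension (illustrative)

# Hypothetical current parameter estimates for the sources and the target;
# the target is constructed to resemble source task 2.
source_params = rng.standard_normal((n_src, d))
target_param = source_params[2] + 0.05 * rng.standard_normal(d)

def relevance_distribution(src, tgt):
    """Turn a crude relevance score (clipped cosine similarity) into sampling probabilities."""
    sims = src @ tgt / (np.linalg.norm(src, axis=1) * np.linalg.norm(tgt))
    sims = np.clip(sims, 0.0, None)  # ignore anti-correlated sources
    return sims / sims.sum()

probs = relevance_distribution(source_params, target_param)
draws = rng.choice(n_src, size=1000, p=probs)  # draw source tasks by estimated relevance
```

In an actual active-learning loop, the relevance estimates and the sampling distribution would be re-computed after each batch of newly collected source-task data.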
arXiv Detail & Related papers (2022-02-02T08:23:24Z)
- On the relationship between disentanglement and multi-task learning [62.997667081978825]
We take a closer look at the relationship between disentanglement and multi-task learning based on hard parameter sharing.
We show that disentanglement appears naturally during the process of multi-task neural network training.
arXiv Detail & Related papers (2021-10-07T14:35:34Z)
- Pretext Tasks selection for multitask self-supervised speech representation learning [23.39079406674442]
This paper introduces a method to select a group of pretext tasks among a set of candidates.
Experiments conducted on speaker recognition and automatic speech recognition validate our approach.
arXiv Detail & Related papers (2021-07-01T16:36:29Z)
- How Fine-Tuning Allows for Effective Meta-Learning [50.17896588738377]
We present a theoretical framework for analyzing representations derived from a MAML-like algorithm.
We provide risk bounds on the best predictor found by fine-tuning via gradient descent, demonstrating that the algorithm can provably leverage the shared structure.
This separation result underscores the benefit of fine-tuning-based methods, such as MAML, over methods with "frozen representation" objectives in few-shot learning.
arXiv Detail & Related papers (2021-05-05T17:56:00Z)
- Conditional Meta-Learning of Linear Representations [57.90025697492041]
Standard meta-learning for representation learning aims to find a common representation to be shared across multiple tasks.
In this work we overcome this issue by inferring a conditioning function, mapping the tasks' side information into a representation tailored to the task at hand.
We propose a meta-algorithm capable of leveraging this advantage in practice.
arXiv Detail & Related papers (2021-03-30T12:02:14Z)
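One way to picture the conditioning idea in the entry above: side information selects a blend from a small dictionary of candidate representations, so each task gets a representation tailored to it rather than a single shared one. This is a toy sketch under assumed shapes, not the paper's actual meta-algorithm.

```python
import numpy as np

rng = np.random.default_rng(2)
d, k, n_env = 8, 2, 3  # ambient dim, representation dim, candidate count (illustrative)

# Hypothetical dictionary of candidate representations, one per environment.
dictionary = [np.linalg.qr(rng.standard_normal((d, k)))[0] for _ in range(n_env)]

def conditioned_representation(side_info):
    """Map a task's side information to a task-tailored representation via softmax blending."""
    logits = np.asarray(side_info, dtype=float)
    w = np.exp(logits - logits.max())
    w /= w.sum()
    # Convex combination of candidate representations, weighted by side information.
    return sum(wi * Bi for wi, Bi in zip(w, dictionary))

B_task = conditioned_representation([2.0, 0.1, -1.0])  # dominated by the first candidate
```

The payoff of conditioning is that when tasks cluster into heterogeneous groups, a per-task representation can outperform any single representation shared across all tasks.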
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.