Data-Driven Knowledge Transfer in Batch $Q^*$ Learning
- URL: http://arxiv.org/abs/2404.15209v1
- Date: Mon, 1 Apr 2024 02:20:09 GMT
- Title: Data-Driven Knowledge Transfer in Batch $Q^*$ Learning
- Authors: Elynn Chen, Xi Chen, Wenbo Jing
- Abstract summary: We explore knowledge transfer in dynamic decision-making by concentrating on batch stationary environments.
We propose a framework of Transferred Fitted $Q$-Iteration algorithm with general function approximation.
We show that the final learning error of the $Q^*$ function is significantly improved from the single task rate.
- Score: 5.6665432569907646
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In data-driven decision-making in marketing, healthcare, and education, it is desirable to utilize a large amount of data from existing ventures to navigate high-dimensional feature spaces and address data scarcity in new ventures. We explore knowledge transfer in dynamic decision-making by concentrating on batch stationary environments and formally defining task discrepancies through the lens of Markov decision processes (MDPs). We propose a framework of Transferred Fitted $Q$-Iteration algorithm with general function approximation, enabling the direct estimation of the optimal action-state function $Q^*$ using both target and source data. We establish the relationship between statistical performance and MDP task discrepancy under sieve approximation, shedding light on the impact of source and target sample sizes and task discrepancy on the effectiveness of knowledge transfer. We show that the final learning error of the $Q^*$ function is significantly improved from the single task rate both theoretically and empirically.
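A minimal sketch of how such a transferred fitted $Q$-iteration might look when source transitions are simply pooled with target transitions through a sample weight is given below; the pooling weight, the gradient-boosting regressor, and the batch data format are illustrative assumptions, not the authors' specification.
```python
# Hypothetical sketch of transferred fitted Q-iteration (not the authors' code).
# Batch transitions are tuples (s, a, r, s_next); source data is pooled with
# target data via a sample weight -- the weighting scheme is an assumption here.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def transferred_fqi(target, source, n_actions, n_iters=50,
                    gamma=0.95, source_weight=0.5):
    """target/source: lists of (s, a, r, s_next) with s as 1-D feature arrays."""
    data = target + source
    weights = np.array([1.0] * len(target) + [source_weight] * len(source))
    S = np.array([np.append(s, a) for s, a, _, _ in data])      # (state, action) features
    R = np.array([r for _, _, r, _ in data])
    S_next = np.array([s_next for _, _, _, s_next in data])

    q_model = None
    for _ in range(n_iters):
        if q_model is None:
            targets = R                                          # first pass: Q ~ immediate reward
        else:
            # Bellman backup: r + gamma * max_a' Q(s', a')
            q_next = np.column_stack([
                q_model.predict(np.column_stack([S_next, np.full(len(S_next), a)]))
                for a in range(n_actions)
            ])
            targets = R + gamma * q_next.max(axis=1)
        q_model = GradientBoostingRegressor()
        q_model.fit(S, targets, sample_weight=weights)           # regression on the pooled batch
    return q_model
```
The sketch only illustrates the backup-and-regress structure of fitted $Q$-iteration on a pooled batch; how the source data should actually be weighted or used depends on the task discrepancy analysis developed in the paper.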
Related papers
- Switchable Decision: Dynamic Neural Generation Networks [98.61113699324429]
We propose a switchable decision to accelerate inference by dynamically assigning resources to each data instance.
Our method incurs less cost during inference while keeping the same accuracy.
arXiv Detail & Related papers (2024-05-07T17:44:54Z) - Knowledge Transfer across Multiple Principal Component Analysis Studies [8.602833477729899]
We propose a two-step transfer learning algorithm to extract useful information from multiple source principal component analysis (PCA) studies.
In the first step, we integrate the shared subspace information across multiple studies by a proposed method named Grassmannian barycenter.
The resulting estimator for the shared subspace from the first step is further utilized to estimate the target private subspace (see the sketch after this list).
arXiv Detail & Related papers (2024-03-12T09:15:12Z) - Curriculum Modeling the Dependence among Targets with Multi-task Learning for Financial Marketing [26.80709680959278]
We propose a prior information merged model (PIMM) for multiple sequential dependence task learning.
PIMM randomly selects the true label information or the prior task prediction with a soft sampling strategy to transfer to the downstream task during training (see the sketch after this list).
The offline experimental results on both public and product datasets verify that PIMM outperforms state-of-the-art baselines.
arXiv Detail & Related papers (2023-04-25T07:55:16Z) - Model-based Constrained MDP for Budget Allocation in Sequential Incentive Marketing [28.395877073390434]
Sequential incentive marketing is an important approach for online businesses to acquire customers, increase loyalty and boost sales.
How to effectively allocate the incentives so as to maximize the return under the budget constraint is less studied in the literature.
We propose an efficient learning algorithm which combines bisection search and model-based planning.
arXiv Detail & Related papers (2023-03-02T08:10:45Z) - An Experimental Design Perspective on Model-Based Reinforcement Learning [73.37942845983417]
In practical applications of RL, it is expensive to observe state transitions from the environment.
We propose an acquisition function that quantifies how much information a state-action pair would provide about the optimal solution to a Markov decision process.
arXiv Detail & Related papers (2021-12-09T23:13:57Z) - Practical Transferability Estimation for Image Classification Tasks [20.07223947190349]
A major challenge is how to make transferability estimation robust under cross-domain, cross-task settings.
The recently proposed OTCE score solves this problem by considering both domain and task differences.
We propose a practical transferability metric called JC-NCE score that dramatically improves the robustness of the task difference estimation.
arXiv Detail & Related papers (2021-06-19T11:59:11Z) - Towards Accurate Knowledge Transfer via Target-awareness Representation Disentanglement [56.40587594647692]
We propose a novel transfer learning algorithm, introducing the idea of Target-awareness REpresentation Disentanglement (TRED).
TRED disentangles the knowledge relevant to the target task from the original source model and uses it as a regularizer during fine-tuning of the target model.
Experiments on various real-world datasets show that our method stably improves standard fine-tuning by more than 2% on average.
arXiv Detail & Related papers (2020-10-16T17:45:08Z) - Dif-MAML: Decentralized Multi-Agent Meta-Learning [54.39661018886268]
We propose a cooperative multi-agent meta-learning algorithm, referred to as Diffusion Multi-Agent MAML or Dif-MAML.
We show that the proposed strategy allows a collection of agents to attain agreement at a linear rate and to converge to a stationary point of the aggregate MAML objective.
Simulation results illustrate the theoretical findings and the superior performance relative to the traditional non-cooperative setting.
arXiv Detail & Related papers (2020-10-06T16:51:09Z) - Exploiting Submodular Value Functions For Scaling Up Active Perception [60.81276437097671]
In active perception tasks, an agent aims to select sensory actions that reduce uncertainty about one or more hidden variables.
Partially observable Markov decision processes (POMDPs) provide a natural model for such problems.
As the number of sensors available to the agent grows, the computational cost of POMDP planning grows exponentially.
arXiv Detail & Related papers (2020-09-21T09:11:36Z) - Exploring and Predicting Transferability across NLP Tasks [115.6278033699853]
We study the transferability between 33 NLP tasks across three broad classes of problems.
Our results show that transfer learning is more beneficial than previously thought.
We also develop task embeddings that can be used to predict the most transferable source tasks for a given target task.
arXiv Detail & Related papers (2020-05-02T09:39:36Z)
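For the PCA knowledge-transfer entry above (Grassmannian barycenter), a minimal illustrative sketch of the two-step shared/private subspace idea is given below; the chordal-style averaging of projection matrices and the residual-covariance step are assumptions made for illustration, not the paper's exact estimator.
```python
# Illustrative two-step shared/private subspace estimate (assumptions, not the
# paper's exact method): step 1 averages source projection matrices and keeps the
# leading eigenvectors; step 2 removes the shared part from the target covariance
# and extracts a private subspace from the residual.
import numpy as np

def shared_subspace(source_bases, r_shared):
    """source_bases: list of (p, r_k) orthonormal basis matrices from source PCA studies."""
    avg_proj = sum(V @ V.T for V in source_bases) / len(source_bases)
    _, eigvecs = np.linalg.eigh(avg_proj)         # eigenvalues in ascending order
    return eigvecs[:, -r_shared:]                 # top eigenvectors of the averaged projection

def private_subspace(target_cov, V_shared, r_private):
    """Leading directions of the target covariance outside the shared span."""
    p = target_cov.shape[0]
    residual = np.eye(p) - V_shared @ V_shared.T  # project out the shared directions
    cov_resid = residual @ target_cov @ residual
    _, eigvecs = np.linalg.eigh(cov_resid)
    return eigvecs[:, -r_private:]
```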
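For the multi-task financial-marketing entry above (PIMM), the soft sampling between the true upstream label and the prior task prediction might be sketched as follows; the mixing probability `p_true` and the element-wise form are illustrative assumptions, not the paper's implementation.
```python
# Illustrative soft-sampling step: with probability p_true the downstream task
# receives the true upstream label, otherwise the upstream model's prediction.
import numpy as np

def soft_sample_prior(true_labels, prior_predictions, p_true=0.7, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    use_true = rng.random(len(true_labels)) < p_true
    return np.where(use_true, true_labels, prior_predictions)
```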
This list is automatically generated from the titles and abstracts of the papers on this site.