Spectral Decomposition Representation for Reinforcement Learning
- URL: http://arxiv.org/abs/2208.09515v1
- Date: Fri, 19 Aug 2022 19:01:30 GMT
- Title: Spectral Decomposition Representation for Reinforcement Learning
- Authors: Tongzheng Ren, Tianjun Zhang, Lisa Lee, Joseph E. Gonzalez, Dale Schuurmans, Bo Dai
- Abstract summary: We propose an alternative spectral method, Spectral Decomposition Representation (SPEDER), that extracts a state-action abstraction from the dynamics without inducing spurious dependence on the data collection policy.
A theoretical analysis establishes the sample efficiency of the proposed algorithm in both the online and offline settings.
An experimental investigation demonstrates superior performance over current state-of-the-art algorithms across several benchmarks.
- Score: 100.0424588013549
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Representation learning often plays a critical role in reinforcement learning
by managing the curse of dimensionality. A representative class of algorithms
exploits a spectral decomposition of the stochastic transition dynamics to
construct representations that enjoy strong theoretical properties in an
idealized setting. However, current spectral methods suffer from limited
applicability because they are constructed for state-only aggregation and
derived from a policy-dependent transition kernel, without considering the
issue of exploration. To address these issues, we propose an alternative
spectral method, Spectral Decomposition Representation (SPEDER), that extracts
a state-action abstraction from the dynamics without inducing spurious
dependence on the data collection policy, while also balancing the
exploration-versus-exploitation trade-off during learning. A theoretical
analysis establishes the sample efficiency of the proposed algorithm in both
the online and offline settings. In addition, an experimental investigation
demonstrates superior performance over current state-of-the-art algorithms
across several benchmarks.
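The core spectral idea behind this line of work can be illustrated in a toy tabular setting (this is only a didactic sketch, not the SPEDER algorithm: the MDP below is randomly generated, and a truncated SVD stands in for the learned factorization). A low-rank decomposition of the transition matrix yields state-action features phi(s, a) and next-state features mu(s') whose inner product approximates the dynamics:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy tabular MDP: 6 states, 3 actions, target feature dimension d = 2.
n_states, n_actions, d = 6, 3, 2

# Random stochastic transition tensor P[s, a, s'].
P = rng.random((n_states, n_actions, n_states))
P /= P.sum(axis=2, keepdims=True)

# Flatten state-action pairs into rows of a (|S||A| x |S|) matrix.
T = P.reshape(n_states * n_actions, n_states)

# Truncated SVD: T ~= U_d diag(S_d) V_d^T.
U, S, Vt = np.linalg.svd(T, full_matrices=False)
phi = U[:, :d] * S[:d]   # state-action features phi(s, a)
mu = Vt[:d]              # next-state features mu(s')

# The rank-d factorization approximates the dynamics P(s' | s, a).
T_hat = phi @ mu
err = np.abs(T - T_hat).max()
```

Because the factorization is taken over all state-action pairs rather than a policy-induced state chain, the resulting features carry no dependence on any particular data collection policy, which is the property the abstract emphasizes.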
Related papers
- Nonstationary Sparse Spectral Permanental Process [24.10531062895964]
We propose a novel approach utilizing the sparse spectral representation of nonstationary kernels.
This technique relaxes the constraints on kernel types and stationarity, allowing for more flexible modeling.
Experimental results on both synthetic and real-world datasets demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2024-10-04T16:40:56Z)
- Distributed Learning with Discretely Observed Functional Data [1.4583059436979549]
This paper combines distributed spectral algorithms with Sobolev kernels to tackle the functional linear regression problem.
The hypothesis function spaces of the algorithms are the Sobolev spaces generated by the Sobolev kernels.
We derive matching upper and lower bounds for the convergence of the distributed spectral algorithms in the Sobolev norm.
arXiv Detail & Related papers (2024-10-03T10:49:34Z)
- Learned Regularization for Inverse Problems: Insights from a Spectral Model [1.4963011898406866]
This chapter provides a theoretically founded investigation of state-of-the-art learning approaches for inverse problems.
We give an extended definition of regularization methods and their convergence in terms of the underlying data distributions.
arXiv Detail & Related papers (2023-12-15T14:50:14Z)
- Learning Neural Eigenfunctions for Unsupervised Semantic Segmentation [12.91586050451152]
Spectral clustering is a theoretically grounded approach to unsupervised semantic segmentation, in which spectral embeddings for pixels are computed to form distinct clusters.
Current approaches still suffer from inefficiencies in the spectral decomposition and from inflexibility in applying the learned embeddings to test data.
This work addresses these issues by casting spectral clustering as a parametric approach that employs neural network-based eigenfunctions to produce spectral embeddings.
In practice, the neural eigenfunctions are lightweight and take the features from pre-trained models as inputs, improving training efficiency and unleashing the potential of pre-trained models for dense prediction.
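The classical pipeline that this paper parametrizes can be sketched in a few lines (a minimal non-parametric version with two synthetic point clouds standing in for pixel features, not the neural eigenfunction method itself): build an affinity graph, form the normalized Laplacian, and read cluster structure off its low-order eigenvectors.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two well-separated 2-D point clouds standing in for pixel features.
pts = np.vstack([rng.normal(0, 0.1, (10, 2)),
                 rng.normal(3, 0.1, (10, 2))])

# Gaussian affinity matrix and symmetric normalized Laplacian.
d2 = ((pts[:, None] - pts[None]) ** 2).sum(-1)
W = np.exp(-d2)
Dm = np.diag(W.sum(1) ** -0.5)
L = np.eye(len(pts)) - Dm @ W @ Dm

# Eigenvectors of the Laplacian are the spectral embeddings; the
# second-smallest (Fiedler) vector separates the two clusters by sign.
vals, vecs = np.linalg.eigh(L)
fiedler = vecs[:, 1]
labels = (fiedler > 0).astype(int)
```

The inefficiency the abstract mentions is visible here: `eigh` on the full affinity matrix scales poorly with the number of pixels, and the embeddings are tied to the training graph, which motivates learning eigenfunctions as a neural network instead.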
arXiv Detail & Related papers (2023-04-06T03:14:15Z)
- Latent Variable Representation for Reinforcement Learning [131.03944557979725]
It remains unclear theoretically and empirically how latent variable models may facilitate learning, planning, and exploration to improve the sample efficiency of model-based reinforcement learning.
We provide a representation view of the latent variable models for state-action value functions, which admits both a tractable variational learning algorithm and an effective implementation of the optimism/pessimism principle.
In particular, we propose a computationally efficient planning algorithm with UCB exploration by incorporating kernel embeddings of latent variable models.
arXiv Detail & Related papers (2022-12-17T00:26:31Z)
- Reinforcement Learning from Partial Observation: Linear Function Approximation with Provable Sample Efficiency [111.83670279016599]
We study reinforcement learning for partially observed decision processes (POMDPs) with infinite observation and state spaces.
We make the first attempt to address partial observability with linear function approximation for a class of POMDPs with a linear structure.
arXiv Detail & Related papers (2022-04-20T21:15:38Z)
- Deep Equilibrium Assisted Block Sparse Coding of Inter-dependent Signals: Application to Hyperspectral Imaging [71.57324258813675]
A dataset of inter-dependent signals is defined as a matrix whose columns demonstrate strong dependencies.
A neural network is employed to act as structure prior and reveal the underlying signal interdependencies.
Deep unrolling and Deep equilibrium based algorithms are developed, forming highly interpretable and concise deep-learning-based architectures.
arXiv Detail & Related papers (2022-03-29T21:00:39Z)
- A Free Lunch from the Noise: Provable and Practical Exploration for Representation Learning [55.048010996144036]
We show that under some noise assumption, we can obtain the linear spectral feature of its corresponding Markov transition operator in closed-form for free.
We propose Spectral Dynamics Embedding (SPEDE), which breaks the trade-off and completes optimistic exploration for representation learning by exploiting the structure of the noise.
arXiv Detail & Related papers (2021-11-22T19:24:57Z)
- Discrete Action On-Policy Learning with Action-Value Critic [72.20609919995086]
Reinforcement learning (RL) in discrete action space is ubiquitous in real-world applications, but its complexity grows exponentially with the action-space dimension.
We construct a critic to estimate action-value functions, apply it on correlated actions, and combine these critic estimated action values to control the variance of gradient estimation.
These efforts result in a new discrete action on-policy RL algorithm that empirically outperforms related on-policy algorithms relying on variance control techniques.
arXiv Detail & Related papers (2020-02-10T04:23:09Z)
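The critic-as-baseline idea summarized in the last entry can be sketched generically (a toy single-step sketch, not the paper's specific estimator: the action-value estimates `q_hat` are made-up numbers, and the policy is a plain softmax). Subtracting the critic's expected value from the sampled action value acts as a control variate that shrinks the variance of the score-function gradient:

```python
import numpy as np

rng = np.random.default_rng(2)
n_actions = 4
logits = np.zeros(n_actions)             # policy parameters (softmax)
q_hat = np.array([1.0, 0.2, -0.5, 0.4])  # critic's action-value estimates

probs = np.exp(logits) / np.exp(logits).sum()

# A vanilla score-function gradient uses a single sampled action's noisy
# return; the critic baseline E_pi[Q] is subtracted as a control variate,
# shrinking the variance of the gradient estimate without adding bias.
baseline = probs @ q_hat
a = rng.choice(n_actions, p=probs)
grad_log_pi = -probs.copy()
grad_log_pi[a] += 1.0                    # d/d logits of log pi(a)
grad = (q_hat[a] - baseline) * grad_log_pi
```

Note that `grad_log_pi` sums to zero by construction for a softmax policy, so the baseline term contributes no bias to the expected gradient.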
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.