See Less, Drive Better: Generalizable End-to-End Autonomous Driving via Foundation Models Stochastic Patch Selection
- URL: http://arxiv.org/abs/2601.10707v1
- Date: Thu, 15 Jan 2026 18:58:33 GMT
- Title: See Less, Drive Better: Generalizable End-to-End Autonomous Driving via Foundation Models Stochastic Patch Selection
- Authors: Amir Mallak, Erfan Aasi, Shiva Sreeram, Tsun-Hsuan Wang, Daniela Rus, Alaa Maalouf
- Abstract summary: Recent advances in end-to-end autonomous driving show that policies trained on patch-aligned features generalize better to Out-of-Distribution (OOD) scenarios. We present Stochastic-Patch-Selection (SPS), a simple yet effective approach for learning policies that are more robust, generalizable, and efficient.
- Score: 51.59559387222532
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent advances in end-to-end autonomous driving show that policies trained on patch-aligned features extracted from foundation models generalize better to Out-of-Distribution (OOD) scenarios. We hypothesize that, due to the self-attention mechanism, each patch feature implicitly embeds information from all other patches, represented in a different way and intensity, making these descriptors highly redundant. We quantify redundancy in such (BLIP2) features via PCA and cross-patch similarity: $90$% of the variance is captured by $17/64$ principal components, and strong inter-token correlations are pervasive. Training on such overlapping information leads the policy to overfit spurious correlations, hurting OOD robustness. We present Stochastic-Patch-Selection (SPS), a simple yet effective approach for learning policies that are more robust, generalizable, and efficient. For every frame, SPS randomly masks a fraction of the patch descriptors, withholding them from the policy model while preserving the spatial layout of the remaining patches. Thus, the policy is provided with different stochastic but complete views of the (same) scene: every random subset of patches acts like a different, yet still sensible, coherent projection of the world. The policy thus bases its decisions on features that are invariant to which specific tokens survive. Extensive experiments confirm that across all OOD scenarios, our method outperforms the state of the art (SOTA), achieving a $6.2$% average improvement and up to $20.4$% in closed-loop simulations, while being $2.4\times$ faster. We conduct ablations over masking rates and patch-feature reorganization, training and evaluating 9 systems, with 8 of them surpassing the prior SOTA. Finally, we show that the same learned policy transfers to a physical, real-world car without any tuning.
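A minimal sketch of the two ideas above, the redundancy check and per-frame stochastic patch masking, is given below. This is not the authors' implementation: the 8x8 grid of 64 patch descriptors follows the abstract, but the feature dimension (256), the keep rate, and the (feature, position) interface to the policy are illustrative assumptions.

```python
# Minimal sketch of the redundancy check and Stochastic-Patch-Selection-style masking.
# NOT the authors' code: the 64-patch grid follows the abstract, while the feature
# dimension, keep rate, and the (feature, position) interface are assumptions.
import numpy as np


def components_for_variance(patches: np.ndarray, target: float = 0.90) -> int:
    """Number of principal components needed to capture `target` of the variance
    across one frame's patch descriptors (shape: [num_patches, feature_dim])."""
    centered = patches - patches.mean(axis=0, keepdims=True)
    s = np.linalg.svd(centered, compute_uv=False)   # PCA spectrum via SVD
    cumulative = np.cumsum(s ** 2) / np.sum(s ** 2)
    return int(np.searchsorted(cumulative, target) + 1)


def stochastic_patch_selection(patches: np.ndarray, keep_rate: float = 0.5,
                               rng: np.random.Generator | None = None):
    """Randomly keep a fraction of patch descriptors for this frame, returning the
    surviving features together with their original grid indices so the spatial
    layout of the remaining patches is preserved for the downstream policy."""
    rng = rng or np.random.default_rng()
    num_patches = patches.shape[0]
    num_keep = max(1, int(round(keep_rate * num_patches)))
    keep_idx = np.sort(rng.choice(num_patches, size=num_keep, replace=False))
    return patches[keep_idx], keep_idx


# Toy usage: one frame of 64 patch descriptors (8x8 grid), 256-dim each.
frame = np.random.randn(64, 256).astype(np.float32)
print("PCs for 90% variance:", components_for_variance(frame))
kept, positions = stochastic_patch_selection(frame, keep_rate=0.5)
print(kept.shape, positions[:8])  # e.g. (32, 256) and the first surviving grid indices
```

Because the kept indices change every frame, consecutive training samples expose the policy to different but spatially consistent subsets of the same scene, which is the invariance effect the abstract argues for.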
Related papers
- BinaryPPO: Efficient Policy Optimization for Binary Classification [10.249166265785686]
Supervised fine-tuning (SFT) is the standard approach for binary classification tasks. We introduce BinaryPPO, a framework that reformulates binary classification as a reward learning problem. BinaryPPO improves accuracy by 40-60 percentage points, reaching up to 99%, substantially outperforming supervised baselines.
arXiv Detail & Related papers (2026-02-02T19:22:45Z) - Coverage Improvement and Fast Convergence of On-policy Preference Learning [67.36750525893514]
Online on-policy preference learning algorithms for language model alignment can significantly outperform their offline counterparts. We analyze how the sampling policy's coverage evolves throughout on-policy training. We develop principled on-policy schemes for reward distillation in the general function class setting.
arXiv Detail & Related papers (2026-01-13T10:46:06Z) - Moments Matter: Stabilizing Policy Optimization using Return Distributions [9.430246534202857]
In continuous control tasks, even small parameter shifts can produce unstable gaits. We propose an alternative that takes advantage of environment stochasticity to reduce update-induced variability.
arXiv Detail & Related papers (2026-01-05T05:27:11Z) - Model Predictive Control is almost Optimal for Heterogeneous Restless Multi-armed Bandits [6.402634424631123]
We show that a natural finite-horizon LP-update policy with randomized rounding achieves an $O(\log N \sqrt{1/N})$ optimality gap in infinite-time average-reward problems. Our results draw on techniques from the model predictive control literature by invoking the concept of dissipativity.
arXiv Detail & Related papers (2025-11-11T10:53:49Z) - AutoPrune: Each Complexity Deserves a Pruning Policy [58.448785378705566]
Complexity Pruning (AutoPrune) is a training-free, plug-and-play framework that tailors pruning policies to varying sample and task complexities. We evaluate AutoPrune on standard vision tasks and on Vision-Language-Action models for autonomous driving.
arXiv Detail & Related papers (2025-09-28T15:09:00Z) - Patch Pruning Strategy Based on Robust Statistical Measures of Attention Weight Diversity in Vision Transformers [0.7673339435080445]
We propose a patch pruning strategy that evaluates the importance of each patch based on the variance of attention weights across multiple attention heads. This approach is inspired by the design of multi-head self-attention, which aims to capture diverse attention patterns across different subspaces of feature representations.
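A rough sketch of this scoring idea follows. It assumes attention weights of shape [heads, tokens, tokens] with a CLS token at index 0, and uses plain variance in place of the robust statistical measures the title refers to; none of these choices are taken from the paper itself.

```python
# Illustrative variance-based patch pruning (assumed interface, not the paper's code).
import numpy as np


def prune_by_attention_variance(attn: np.ndarray, keep_ratio: float = 0.5) -> np.ndarray:
    """Score each patch by the variance, across heads, of the attention it receives
    from the CLS token, then keep the highest-scoring patches.

    attn: attention weights of shape [num_heads, num_tokens, num_tokens],
          where token 0 is assumed to be the CLS token.
    Returns the kept token indices (offset past the CLS token), sorted.
    """
    cls_to_patches = attn[:, 0, 1:]            # [num_heads, num_patches]
    scores = cls_to_patches.var(axis=0)        # head-to-head diversity per patch
    num_keep = max(1, int(round(keep_ratio * scores.shape[0])))
    keep = np.sort(np.argsort(scores)[::-1][:num_keep])
    return keep + 1


# Toy usage: 12 heads, 1 CLS token + 64 patch tokens, rows normalized like softmax output.
attn = np.random.rand(12, 65, 65)
attn /= attn.sum(axis=-1, keepdims=True)
print(prune_by_attention_variance(attn, keep_ratio=0.25))
```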
arXiv Detail & Related papers (2025-07-25T11:31:17Z) - Not All Rollouts are Useful: Down-Sampling Rollouts in LLM Reinforcement Learning [55.15106182268834]
Reinforcement learning with verifiable rewards (RLVR) has emerged as the leading approach for enhancing reasoning capabilities in large language models. It faces a fundamental compute and memory asymmetry: rollout generation is embarrassingly parallel and memory-light, whereas policy updates are communication-heavy and memory-intensive. We introduce PODS (Policy Optimization with Down-Sampling), which decouples rollout generation from policy updates by training only on a strategically selected subset of rollouts.
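The summary does not spell out PODS's selection rule, so the snippet below is only an assumed illustration of reward-based down-sampling: it keeps the rollouts with the most extreme rewards so the retained subset preserves a wide spread of learning signal.

```python
# Illustrative rollout down-sampling (an assumed reward-spread criterion,
# not necessarily the rule used by PODS).
import numpy as np


def downsample_rollouts(rewards: np.ndarray, num_keep: int) -> np.ndarray:
    """Keep indices of the rollouts with the lowest and highest rewards."""
    order = np.argsort(rewards)
    k_low = num_keep // 2
    k_high = num_keep - k_low
    return np.sort(np.concatenate([order[:k_low], order[-k_high:]]))


# Toy usage: 64 generated rollouts, update the policy on only 16 of them.
rewards = np.random.randn(64)
print(downsample_rollouts(rewards, num_keep=16))
```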
arXiv Detail & Related papers (2025-04-18T17:49:55Z) - Autoregressive Bandits [58.46584210388307]
We propose a novel online learning setting, Autoregressive Bandits, in which the observed reward is governed by an autoregressive process of order $k$.
We show that, under mild assumptions on the reward process, the optimal policy can be conveniently computed.
We then devise a new optimistic regret minimization algorithm, namely, AutoRegressive Upper Confidence Bound (AR-UCB), that suffers sublinear regret of order $\widetilde{\mathcal{O}}\left(\frac{(k+1)^{3/2}\sqrt{nT}}{(1-\Gamma)^{2}}\right)$.
arXiv Detail & Related papers (2022-12-12T21:37:36Z) - Anytime-valid off-policy inference for contextual bandits [34.721189269616175]
Contextual bandit algorithms map observed contexts $X_t$ to actions $A_t$ over time.
It is often of interest to estimate the properties of a hypothetical policy that is different from the logging policy that was used to collect the data.
We present a comprehensive framework for off-policy evaluation (OPE) inference that relaxes unnecessary conditions made in some past works.
arXiv Detail & Related papers (2022-10-19T17:57:53Z) - Efficient Policy Iteration for Robust Markov Decision Processes via Regularization [49.05403412954533]
Robust Markov decision processes (MDPs) provide a framework to model decision problems where the system dynamics are changing or only partially known.
Recent work established the equivalence between $s$-rectangular $L_p$ robust MDPs and regularized MDPs, and derived a regularized policy iteration scheme that enjoys the same level of efficiency as standard MDPs.
In this work, we focus on the policy improvement step and derive concrete forms for the greedy policy and the optimal robust Bellman operators.
arXiv Detail & Related papers (2022-05-28T04:05:20Z)
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.