Refine and Purify: Orthogonal Basis Optimization with Null-Space Denoising for Conditional Representation Learning
- URL: http://arxiv.org/abs/2602.05464v1
- Date: Thu, 05 Feb 2026 09:14:44 GMT
- Title: Refine and Purify: Orthogonal Basis Optimization with Null-Space Denoising for Conditional Representation Learning
- Authors: Jiaquan Wang, Yan Lyu, Chen Li, Yuheng Jia
- Abstract summary: Conditional representation learning aims to extract criterion-specific features for customized tasks. We propose OD-CRL, a novel framework integrating Adaptive Orthogonal Basis Optimization (AOBO) and Null-Space Denoising Projection (NSDP). Experiments conducted across customized clustering, customized classification, and customized retrieval tasks demonstrate that OD-CRL achieves new state-of-the-art performance with superior generalization.
- Score: 34.87088239089728
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Conditional representation learning aims to extract criterion-specific features for customized tasks. Recent studies project universal features onto the conditional feature subspace spanned by an LLM-generated text basis to obtain conditional representations. However, such methods face two key limitations: sensitivity to the subspace basis and vulnerability to inter-subspace interference. To address these challenges, we propose OD-CRL, a novel framework integrating Adaptive Orthogonal Basis Optimization (AOBO) and Null-Space Denoising Projection (NSDP). Specifically, AOBO constructs orthogonal semantic bases via singular value decomposition with a curvature-based truncation. NSDP suppresses non-target semantic interference by projecting embeddings onto the null space of irrelevant subspaces. Extensive experiments conducted across customized clustering, customized classification, and customized retrieval tasks demonstrate that OD-CRL achieves new state-of-the-art performance with superior generalization.
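As a rough illustration of the two components described in the abstract, here is a minimal NumPy sketch. The function names (`aobo_basis`, `nsdp_project`), the elbow-style curvature rule, and the projection order are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def aobo_basis(text_embeddings, ):
    """Orthogonal semantic basis via SVD with a curvature-style truncation.

    text_embeddings: (m, d) embeddings of LLM-generated basis phrases.
    Returns an orthonormal basis (k, d) spanning the conditional subspace.
    """
    # SVD of the mean-centered embedding matrix.
    X = text_embeddings - text_embeddings.mean(axis=0, keepdims=True)
    _, s, Vt = np.linalg.svd(X, full_matrices=False)
    # Curvature-based truncation (illustrative): cut where the spectrum's
    # discrete second difference (its "elbow") is largest.
    if len(s) > 2:
        curvature = s[:-2] - 2 * s[1:-1] + s[2:]
        k = int(np.argmax(curvature)) + 1
    else:
        k = len(s)
    return Vt[:k]                      # rows are orthonormal directions

def nsdp_project(z, irrelevant_bases):
    """Project an embedding onto the null space of irrelevant subspaces."""
    d = z.shape[-1]
    P = np.eye(d)
    for B in irrelevant_bases:         # B: (k_i, d), orthonormal rows
        P = P @ (np.eye(d) - B.T @ B)  # null out each irrelevant subspace
    return z @ P.T

# Toy usage: 8 basis phrases, 32-dim embeddings.
rng = np.random.default_rng(0)
basis = aobo_basis(rng.normal(size=(8, 32)))
z_clean = nsdp_project(rng.normal(size=32), [basis])
print(basis.shape, np.abs(z_clean @ basis.T).max())  # ~0: basis removed
```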
Related papers
- Monotone Optimisation with Learned Projections [0.0]
Monotone optimisation problems admit specialised global solvers such as the Polyblock Outer Approximation (POA) algorithm. We introduce an algorithm-aware learning approach that integrates learned models into POA by directly predicting its projection primitive via the radial inverse.
arXiv Detail & Related papers (2026-01-28T19:32:04Z)
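For context, POA's projection primitive finds where the ray from the origin through a polyblock vertex exits the feasible (normal, i.e. downward-closed) set; the entry proposes predicting this with a learned model instead of querying the feasibility oracle. A minimal sketch, assuming a bisection-based exact primitive; the learned surrogate and `model` are hypothetical.

```python
import numpy as np

def radial_projection(z, feasible, tol=1e-6):
    """Exact projection primitive used in Polyblock Outer Approximation:
    find lambda* = max {l in [0, 1] : l * z is feasible} by bisection,
    assuming `feasible` defines a normal (downward-closed) set."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if feasible(mid * z) else (lo, mid)
    return lo * z

# A learned surrogate would regress lambda* from z, replacing the many
# feasibility-oracle calls above with one model evaluation, e.g.:
#   lam_hat = model.predict(z[None])[0]; proj = lam_hat * z
# (`model` is a hypothetical regressor trained on (z, lambda*) pairs.)

# Toy feasible set: {x >= 0 : sum(x^2) <= 1}.
z = np.array([2.0, 1.0])
p = radial_projection(z, lambda x: np.sum(x**2) <= 1.0)
print(p, np.sum(p**2))  # lands on the boundary of the unit ball
```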
- AGZO: Activation-Guided Zeroth-Order Optimization for LLM Fine-Tuning [8.698253005940503]
We propose Activation-Guided Zeroth-Order optimization (AGZO). Unlike prior methods, AGZO extracts a compact, activation-informed subspace on the fly during the forward pass and restricts perturbations to this low-rank subspace. AGZO consistently outperforms state-of-the-art ZO baselines and significantly narrows the performance gap with first-order fine-tuning.
arXiv Detail & Related papers (2026-01-24T02:28:15Z)
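A minimal sketch of the idea as summarized above: derive a low-rank subspace from activations via SVD and estimate gradients with two-point finite differences restricted to that subspace. The function name, the SVD choice, and the toy setup are assumptions, not AGZO's exact algorithm.

```python
import numpy as np

def agzo_step(loss, w, activations, rank=4, mu=1e-3, lr=1e-2, rng=None):
    """One zeroth-order step with perturbations restricted to a low-rank,
    activation-informed subspace (illustrative reading of AGZO)."""
    if rng is None:
        rng = np.random.default_rng()
    # Activation-informed subspace: top-`rank` right-singular vectors.
    _, _, Vt = np.linalg.svd(activations, full_matrices=False)
    basis = Vt[:rank]                       # (rank, d), orthonormal rows
    # Sample a random direction inside the subspace only.
    u = basis.T @ rng.standard_normal(rank)
    u /= np.linalg.norm(u)
    # Two-point finite-difference estimate of the directional derivative.
    g = (loss(w + mu * u) - loss(w - mu * u)) / (2 * mu)
    return w - lr * g * u

# Toy usage: quadratic loss, fake (batch x dim) activations.
rng = np.random.default_rng(1)
w = rng.normal(size=16)
acts = rng.normal(size=(32, 16))
for _ in range(200):
    w = agzo_step(lambda v: np.sum(v**2), w, acts, rng=rng)
print(round(float(np.sum(w**2)), 4))  # loss shrinks along the subspace
```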
- Unifying Search and Recommendation in LLMs via Gradient Multi-Subspace Tuning [33.69176756907003]
Gradient Multi-Subspace Tuning (GEMS) is a novel framework that unifies search and recommendation tasks. We show that GEMS consistently outperforms the state-of-the-art baselines across both search and recommendation tasks.
arXiv Detail & Related papers (2026-01-14T14:03:07Z)
- Generalized Decoupled Learning for Enhancing Open-Vocabulary Dense Perception [71.26728044621458]
DeCLIP is a novel framework that enhances CLIP by decoupling the self-attention module to obtain "content" and "context" features respectively. It consistently achieves state-of-the-art performance across a broad spectrum of tasks, including 2D detection and segmentation, 3D instance segmentation, video instance segmentation, and 6D object pose estimation.
arXiv Detail & Related papers (2025-08-15T06:43:51Z)
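The summary does not spell out the decoupling; one common realization of content/context streams is replacing query-key attention with query-query and key-key self-correlations. The sketch below follows that assumption and may differ from DeCLIP's actual wiring.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def decoupled_attention(x, Wq, Wk, Wv):
    """Decouple one self-attention layer into 'content' and 'context'
    streams (an assumed q-q / k-k realization)."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scale = 1.0 / np.sqrt(q.shape[-1])
    content = softmax(q @ q.T * scale) @ v   # q-q: token-level semantics
    context = softmax(k @ k.T * scale) @ v   # k-k: spatial correlations
    return content, context

# Toy usage: 5 tokens, 8-dim features.
rng = np.random.default_rng(2)
x = rng.normal(size=(5, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
content, context = decoupled_attention(x, Wq, Wk, Wv)
print(content.shape, context.shape)  # (5, 8) (5, 8)
```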
- Regularizing Subspace Redundancy of Low-Rank Adaptation [54.473090597164834]
We propose ReSoRA, a method that explicitly models redundancy between mapping subspaces and adaptively Regularizes the Subspace redundancy of Low-Rank Adaptation. Our proposed method consistently facilitates existing state-of-the-art PETL methods across various backbones and datasets in vision-language retrieval and standard visual classification benchmarks. As a training supervision, ReSoRA can be seamlessly integrated into existing approaches in a plug-and-play manner, with no additional inference costs.
arXiv Detail & Related papers (2025-07-28T11:52:56Z)
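A minimal sketch in the spirit of this entry: penalize overlap between LoRA subspaces via pairwise Frobenius inner products of their orthonormalized bases. The exact formulation in ReSoRA may differ; this is the generic regularizer, added to the task loss as a training-only term (hence no inference cost).

```python
import numpy as np

def subspace_redundancy_penalty(lora_As):
    """Penalize overlap between LoRA subspaces (illustrative regularizer,
    not ReSoRA's exact formulation).

    lora_As: list of (d, r) down-projection matrices from different
    adapters, each mapping into a rank-r subspace of the same d-dim space.
    """
    # Orthonormalize each adapter's subspace basis.
    bases = [np.linalg.qr(A)[0] for A in lora_As]   # each (d, r)
    penalty = 0.0
    for i in range(len(bases)):
        for j in range(i + 1, len(bases)):
            # ||Q_i^T Q_j||_F^2 = 0 iff the two subspaces are orthogonal.
            penalty += np.sum((bases[i].T @ bases[j]) ** 2)
    return penalty

# Toy usage: three rank-4 adapters in a 64-dim layer. During training the
# term would be used as `loss + lam * penalty`.
rng = np.random.default_rng(3)
print(subspace_redundancy_penalty([rng.normal(size=(64, 4)) for _ in range(3)]))
```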
- Towards Generalized Range-View LiDAR Segmentation in Adverse Weather [65.22588361803942]
We identify and analyze the unique challenges that affect the generalization of range-view LiDAR segmentation in severe weather. We propose a modular and lightweight framework that enhances robustness without altering the core architecture of existing models. Our approach significantly improves generalization to adverse weather with minimal inference overhead.
arXiv Detail & Related papers (2025-06-10T16:48:27Z)
- Q-function Decomposition with Intervention Semantics with Factored Action Spaces [51.01244229483353]
We consider Q-functions defined over a lower-dimensional projected subspace of the original action space, and study the conditions for unbiasedness of the decomposed Q-functions. This leads to a general scheme, which we call action-decomposed reinforcement learning, that uses the projected Q-functions to approximate the Q-function in standard model-free reinforcement learning algorithms.
arXiv Detail & Related papers (2025-04-30T05:26:51Z)
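A minimal tabular sketch of one reading of this idea: with a factored action a = (a1, a2), keep one Q-function per factor (each defined over a projected action subspace) and combine them additively, sharing a single TD error. This is an illustration, not the paper's algorithm; the 50/50 error split is an assumption.

```python
import numpy as np

# Q(s, (a1, a2)) ~ Q1(s, a1) + Q2(s, a2), one sub-Q per action factor.
n_states, n_a1, n_a2 = 4, 3, 3
Q1 = np.zeros((n_states, n_a1))
Q2 = np.zeros((n_states, n_a2))

def q(s, a1, a2):
    return Q1[s, a1] + Q2[s, a2]

def td_update(s, a1, a2, r, s_next, alpha=0.1, gamma=0.9):
    """Share one TD error across the decomposed Q-functions."""
    best_next = max(q(s_next, b1, b2)
                    for b1 in range(n_a1) for b2 in range(n_a2))
    delta = r + gamma * best_next - q(s, a1, a2)
    Q1[s, a1] += alpha * delta / 2   # split the correction across factors
    Q2[s, a2] += alpha * delta / 2

# Toy usage: additive reward r = [a1 == 1] + [a2 == 2], which the
# decomposition can represent without bias.
rng = np.random.default_rng(4)
for _ in range(5000):
    s, a1, a2 = rng.integers(n_states), rng.integers(n_a1), rng.integers(n_a2)
    td_update(s, a1, a2, float(a1 == 1) + float(a2 == 2), rng.integers(n_states))
print(int(Q1[0].argmax()), int(Q2[0].argmax()))  # greedy factors: 1 2
```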
- Label-independent hyperparameter-free self-supervised single-view deep subspace clustering [0.0]
Deep subspace clustering (DSC) algorithms face several challenges that hinder their widespread adoption across domains. We introduce a novel single-view DSC approach that minimizes a layer-wise self-expression loss using a joint representation matrix. We evaluate the proposed method on six datasets representing faces, digits, and objects.
arXiv Detail & Related papers (2025-04-25T08:54:34Z)
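The self-expression loss is the standard DSC objective: reconstruct each sample's representation from the others, min ||Z - CZ||_F^2 with a zero diagonal on C. A minimal sketch of a layer-wise version with a joint coefficient matrix, as the summary describes; the details beyond that are assumptions.

```python
import numpy as np

def self_expression_loss(Zs, C):
    """Layer-wise self-expression loss with a joint coefficient matrix C.

    Zs: list of (n, d_l) representation matrices (one per layer).
    C:  (n, n) self-expression matrix; the diagonal is zeroed so that no
        sample trivially reconstructs itself.
    """
    C = C - np.diag(np.diag(C))              # enforce zero diagonal
    return sum(np.sum((Z - C @ Z) ** 2) for Z in Zs)

# Toy usage: 10 samples drawn from two 2-dim subspaces of R^8.
rng = np.random.default_rng(5)
U1, U2 = rng.normal(size=(8, 2)), rng.normal(size=(8, 2))
Z = np.vstack([rng.normal(size=(5, 2)) @ U1.T,
               rng.normal(size=(5, 2)) @ U2.T])   # (10, 8)
C = rng.normal(size=(10, 10)) * 0.01
print(round(self_expression_loss([Z], C), 3))
# In a full pipeline, C is optimized (with a regularizer such as ||C||^2)
# and spectral clustering is run on the affinity |C| + |C|^T.
```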
- Exploring a Principled Framework for Deep Subspace Clustering [9.347670574036563]
We present a Principled fRamewOrk for Deep Subspace Clustering (PRO-DSC). PRO-DSC is designed to learn structured representations and self-expressive coefficients in a unified manner. We prove that the learned optimal representations, under certain conditions, lie on a union of subspaces.
arXiv Detail & Related papers (2025-03-21T16:38:37Z)
- Towards Continual Learning Desiderata via HSIC-Bottleneck Orthogonalization and Equiangular Embedding [55.107555305760954]
We propose a conceptually simple yet effective method that attributes forgetting to layer-wise parameter overwriting and the resulting decision boundary distortion.
Our method achieves competitive accuracy while requiring no exemplar buffer and only 1.02x the size of the base model.
arXiv Detail & Related papers (2024-01-17T09:01:29Z)
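The title above references HSIC-bottleneck orthogonalization; the dependence measure behind such objectives is the Hilbert-Schmidt Independence Criterion. A sketch of the standard biased empirical estimator, HSIC(X, Y) = tr(KHLH)/(n-1)^2 with Gaussian kernels; how the paper plugs it into its orthogonalization is not shown here.

```python
import numpy as np

def hsic(X, Y, sigma=1.0):
    """Biased empirical HSIC estimator: tr(K H L H) / (n - 1)^2."""
    def gaussian_gram(A):
        sq = np.sum(A**2, axis=1)
        d2 = sq[:, None] + sq[None, :] - 2 * A @ A.T
        return np.exp(-d2 / (2 * sigma**2))
    n = X.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    K, L = gaussian_gram(X), gaussian_gram(Y)
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

# Dependent pairs score higher than independent ones.
rng = np.random.default_rng(6)
x = rng.normal(size=(200, 1))
print(round(hsic(x, x**2), 4), round(hsic(x, rng.normal(size=(200, 1))), 4))
```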
- PARL: A Unified Framework for Policy Alignment in Reinforcement Learning from Human Feedback [106.63518036538163]
We present PARL, a novel unified bilevel optimization-based framework formulated to address the recently highlighted critical issue of policy alignment in reinforcement learning.
Our framework addresses these concerns by explicitly parameterizing the distribution of the upper alignment objective (reward design) by the lower optimal variable.
Our empirical results substantiate that the proposed PARL can address alignment concerns in RL, showing significant improvements.
arXiv Detail & Related papers (2023-08-03T18:03:44Z)
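To make the bilevel structure described above concrete, here is a toy sketch: the lower level solves the policy-optimization problem for the current reward parameters, and the upper level updates the reward parameters through that lower solution. Both levels are simple quadratics chosen for illustration; this is not PARL's algorithm.

```python
# Toy bilevel sketch of a PARL-style structure:
#   upper: min_r  f_align(pi*(r))          (reward design / alignment)
#   lower: pi*(r) = argmin_pi f_rl(pi, r)  (policy optimization)
# With f_rl(pi, r) = (pi - r)^2 the lower solution is pi* = r, and the
# upper gradient flows through the lower optimal variable.

target = 3.0                        # human-preferred behavior (toy)

def lower_solve(r, steps=100, lr=0.1):
    """Inner loop: gradient descent on f_rl(pi, r) = (pi - r)^2."""
    pi = 0.0
    for _ in range(steps):
        pi -= lr * 2 * (pi - r)
    return pi

r = 0.0
for _ in range(50):                 # outer loop over reward parameters
    pi_star = lower_solve(r)
    # Upper objective f_align = (pi* - target)^2; since d(pi*)/dr = 1
    # here, the hypergradient is simply 2 * (pi* - target).
    r -= 0.2 * 2 * (pi_star - target)
print(round(r, 3), round(lower_solve(r), 3))  # both approach 3.0
```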