Hyperbolic Dual Feature Augmentation for Open-Environment
- URL: http://arxiv.org/abs/2506.08906v1
- Date: Tue, 10 Jun 2025 15:34:09 GMT
- Title: Hyperbolic Dual Feature Augmentation for Open-Environment
- Authors: Peilin Yu, Yuwei Wu, Zhi Gao, Xiaomeng Fan, Shuo Yang, Yunde Jia,
- Abstract summary: We propose a hyperbolic dual feature augmentation method for open environments, which augments features for both seen and unseen classes in the hyperbolic space. Our method effectively enhances the performance of hyperbolic algorithms in open environments.
- Score: 41.23999800250096
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Feature augmentation generates novel samples in the feature space, providing an effective way to enhance the generalization ability of learning algorithms with hyperbolic geometry. Most hyperbolic feature augmentation methods are confined to closed environments, assuming the number of classes is fixed (i.e., seen classes) and generating features only for these classes. In this paper, we propose a hyperbolic dual feature augmentation method for open environments, which augments features for both seen and unseen classes in the hyperbolic space. To obtain a more precise approximation of the real data distribution for efficient training, (1) we adopt a neural ordinary differential equation module, enhanced by meta-learning, to estimate the feature distributions of both seen and unseen classes; (2) we then introduce a regularizer to preserve the latent hierarchical structures of data in the hyperbolic space; (3) we also derive an upper bound for the hyperbolic dual augmentation loss, allowing us to train a hyperbolic model with infinite augmentations for seen and unseen classes. Extensive experiments on five open-environment tasks (class-incremental learning, few-shot open-set recognition, few-shot learning, zero-shot learning, and general image classification) demonstrate that our method effectively enhances the performance of hyperbolic algorithms in open environments.
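The core operation can be illustrated with a minimal sketch (not the authors' method, which uses a meta-learned neural ODE to estimate distributions): perturb a class prototype in the tangent space and push the noise onto the Poincaré ball with the exponential map, so augmented features stay in hyperbolic space. The curvature c, prototype, and noise scale below are assumptions for illustration.

```python
import numpy as np

def mobius_add(x, y, c=1.0):
    """Mobius addition on the Poincare ball with curvature -c."""
    xy = np.dot(x, y)
    x2, y2 = np.dot(x, x), np.dot(y, y)
    num = (1 + 2 * c * xy + c * y2) * x + (1 - c * x2) * y
    den = 1 + 2 * c * xy + c ** 2 * x2 * y2
    return num / den

def expmap(x, v, c=1.0):
    """Exponential map at x: sends a tangent vector v onto the ball."""
    v_norm = np.linalg.norm(v) + 1e-15
    lam = 2.0 / (1.0 - c * np.dot(x, x))  # conformal factor at x
    step = np.tanh(np.sqrt(c) * lam * v_norm / 2) * v / (np.sqrt(c) * v_norm)
    return mobius_add(x, step, c)

def augment(prototype, n, sigma=0.05, c=1.0, seed=0):
    """Sample n augmented features around a class prototype."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, sigma, size=(n, prototype.shape[0]))
    return np.stack([expmap(prototype, v, c) for v in noise])

proto = np.array([0.3, -0.1, 0.2])  # toy class prototype inside the unit ball
print(augment(proto, n=4))          # all rows remain inside the Poincare ball
```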
Related papers
- Enhance Hyperbolic Representation Learning via Second-order Pooling [8.798965454017988]
We introduce second-order pooling into hyperbolic representation learning.
It naturally increases the distance between samples without compromising the generalization ability of the input features.
We propose a kernel approximation regularization, which enables the low-dimensional bilinear features to approximate the kernel function well in low-dimensional space.
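For reference, a generic second-order (bilinear) pooling step looks like the sketch below; the paper's hyperbolic variant and its kernel approximation regularizer are not reproduced, and the normalization choices are assumptions.

```python
import numpy as np

def second_order_pool(X):
    """Bilinear pooling: X is (n_locations, d) local features.
    Returns the flattened, signed-sqrt, L2-normalized second-order statistic."""
    B = X.T @ X / X.shape[0]                 # (d, d) covariance-like matrix
    z = (np.sign(B) * np.sqrt(np.abs(B))).flatten()  # signed square-root
    return z / (np.linalg.norm(z) + 1e-12)   # L2 normalization

X = np.random.default_rng(0).normal(size=(49, 8))  # e.g. 7x7 grid of 8-d features
print(second_order_pool(X).shape)                  # (64,)
```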
arXiv Detail & Related papers (2024-10-29T13:17:43Z) - Robust Hyperbolic Learning with Curvature-Aware Optimization [7.89323764547292]
Current hyperbolic learning approaches are prone to overfitting, computationally expensive, and unstable.
We introduce a novel fine-tunable hyperbolic scaling approach to constrain hyperbolic embeddings and reduce approximation errors.
Our approach demonstrates consistent improvements across computer vision, EEG classification, and hierarchical metric learning tasks.
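One standard way to constrain embeddings is rescaling any vector whose norm approaches the ball boundary; the paper's scaling is fine-tunable (learned), so this hard clip with an assumed max_radius is only an illustration of the idea.

```python
import numpy as np

def clip_to_ball(x, c=1.0, max_radius=0.95):
    """Rescale x so its norm stays below a fraction of the Poincare ball
    radius 1/sqrt(c); this keeps distances numerically stable near the boundary."""
    limit = max_radius / np.sqrt(c)
    norm = np.linalg.norm(x, axis=-1, keepdims=True)
    return x * np.minimum(1.0, limit / (norm + 1e-15))

emb = np.random.default_rng(1).normal(size=(4, 16)) * 2.0  # norms exceed 1
print(np.linalg.norm(clip_to_ball(emb), axis=-1))          # all <= 0.95
```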
arXiv Detail & Related papers (2024-05-22T20:30:14Z) - Hyperbolic vs Euclidean Embeddings in Few-Shot Learning: Two Sides of
the Same Coin [49.12496652756007]
We show that the best few-shot results are attained for hyperbolic embeddings at a common hyperbolic radius.
In contrast to prior benchmark results, we demonstrate that better performance can be achieved by a fixed-radius encoder equipped with the Euclidean metric.
arXiv Detail & Related papers (2023-09-18T14:51:46Z) - Hyperbolic Representation Learning: Revisiting and Advancing [43.1661098138936]
We introduce a position-tracking mechanism to scrutinize existing prevalent hyperbolic representation learning models, revealing that the learned representations are sub-optimal and unsatisfactory.
We propose a simple yet effective method, hyperbolic informed embedding (HIE), by incorporating cost-free hierarchical information deduced from the hyperbolic distance of the node to the origin.
Our method achieves a remarkable improvement of up to 21.4% compared to the competing baselines.
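The hierarchy signal this exploits can be sketched as the closed-form Poincaré distance to the origin; the curvature and the toy node coordinates below are assumptions.

```python
import numpy as np

def dist_to_origin(x, c=1.0):
    """Poincare-ball distance from the origin; larger values indicate points
    nearer the boundary, which typically encode deeper hierarchy levels."""
    norm = np.clip(np.linalg.norm(x, axis=-1), 0.0, (1.0 - 1e-7) / np.sqrt(c))
    return (2.0 / np.sqrt(c)) * np.arctanh(np.sqrt(c) * norm)

nodes = np.array([[0.05, 0.0], [0.4, 0.3], [0.7, 0.6]])  # root-ish to leaf-ish
print(dist_to_origin(nodes))  # increases monotonically with the norm
```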
arXiv Detail & Related papers (2023-06-15T13:25:39Z) - HMSN: Hyperbolic Self-Supervised Learning by Clustering with Ideal
Prototypes [7.665392786787577]
We use a hyperbolic representation space for self-supervised representation learning with prototype-based clustering approaches.
We extend the Masked Siamese Networks to operate on the Poincaré ball model of hyperbolic space.
Unlike previous methods we project to the hyperbolic space at the output of the encoder network and utilise a hyperbolic projection head to ensure that the representations used for downstream tasks remain hyperbolic.
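A minimal sketch of that projection step, assuming the standard exponential map at the origin; the hyperbolic projection head itself is omitted here.

```python
import numpy as np

def expmap0(v, c=1.0):
    """Exponential map at the origin: projects Euclidean encoder outputs
    onto the Poincare ball so downstream heads can stay hyperbolic."""
    norm = np.linalg.norm(v, axis=-1, keepdims=True) + 1e-15
    return np.tanh(np.sqrt(c) * norm) * v / (np.sqrt(c) * norm)

enc_out = np.random.default_rng(2).normal(size=(3, 32))  # stand-in encoder features
z = expmap0(enc_out)
print(np.linalg.norm(z, axis=-1))  # all strictly below 1 (ball radius for c=1)
```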
arXiv Detail & Related papers (2023-05-18T12:38:40Z) - Augmentations in Hypergraph Contrastive Learning: Fabricated and
Generative [126.0985540285981]
We apply the contrastive learning approach from images/graphs (we refer to it as HyperGCL) to improve the generalizability of hypergraph neural networks.
We fabricate two schemes to augment hyperedges with higher-order relations encoded, and adopt three augmentation strategies from graph-structured data.
We propose a hypergraph generative model to generate augmented views, and then an end-to-end differentiable pipeline to jointly learn hypergraph augmentations and model parameters.
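As a flavor of a fabricated augmentation (not the paper's higher-order schemes or its generative augmenter), here is a simple hyperedge-dropping view generator; the dict-of-sets hypergraph encoding and drop probability are assumptions.

```python
import random

def drop_hyperedges(hyperedges, p=0.2, seed=0):
    """Fabricated augmentation: keep each hyperedge with probability 1 - p.
    hyperedges: dict mapping edge id -> set of member nodes."""
    rng = random.Random(seed)
    return {e: nodes for e, nodes in hyperedges.items() if rng.random() > p}

H = {"e1": {0, 1, 2}, "e2": {1, 3}, "e3": {2, 3, 4}, "e4": {0, 4}}
view1 = drop_hyperedges(H, seed=1)
view2 = drop_hyperedges(H, seed=2)  # two stochastic views for a contrastive loss
print(sorted(view1), sorted(view2))
```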
arXiv Detail & Related papers (2022-10-07T20:12:20Z) - HRCF: Enhancing Collaborative Filtering via Hyperbolic Geometric
Regularization [52.369435664689995]
We introduce Hyperbolic Regularization powered Collaborative Filtering (HRCF) and design a geometry-aware hyperbolic regularizer.
Specifically, the proposal boosts the optimization procedure via root alignment and an origin-aware penalty.
Our proposal is able to tackle the over-smoothing problem caused by hyperbolic aggregation and also gives the models better discriminative ability.
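A toy version of those two ingredients, under loudly assumed forms (the exact HRCF regularizer differs): pull a virtual root toward the origin, and penalize node embeddings that collapse onto it.

```python
import numpy as np

def geometric_regularizer(root, nodes, alpha=1.0, beta=0.1):
    """Illustrative geometry-aware regularizer (assumed form, not HRCF's):
    - root alignment: pull a virtual root embedding toward the origin;
    - origin-aware penalty: discourage node embeddings from collapsing there."""
    root_term = alpha * np.sum(root ** 2)                 # root -> origin
    norms = np.linalg.norm(nodes, axis=-1)
    spread_term = -beta * np.mean(np.log(norms + 1e-12))  # keep nodes spread out
    return root_term + spread_term

root = np.array([0.02, -0.01])
nodes = np.random.default_rng(3).uniform(-0.5, 0.5, size=(10, 2))
print(geometric_regularizer(root, nodes))
```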
arXiv Detail & Related papers (2022-04-18T06:11:44Z) - Enhancing Hyperbolic Graph Embeddings via Contrastive Learning [7.901082408569372]
We propose a novel Hyperbolic Graph Contrastive Learning (HGCL) framework which learns node representations through multiple hyperbolic spaces.
Experimental results on multiple real-world datasets demonstrate the superiority of the proposed HGCL.
arXiv Detail & Related papers (2022-01-21T06:10:05Z) - Robust Large-Margin Learning in Hyperbolic Space [64.42251583239347]
We present the first theoretical guarantees for learning a classifier in hyperbolic rather than Euclidean space.
We provide an algorithm to efficiently learn a large-margin hyperplane, relying on the careful injection of adversarial examples.
We prove that for hierarchical data that embeds well into hyperbolic space, the low embedding dimension ensures superior guarantees.
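A linear separator in hyperbolic space can be sketched on the hyperboloid model with the Lorentzian inner product; the large-margin objective and adversarial-example injection from the paper are omitted, and the separator w below is arbitrary.

```python
import numpy as np

def minkowski_inner(u, v):
    """Lorentzian inner product <u, v> = -u0*v0 + sum_i ui*vi."""
    return -u[..., 0] * v[..., 0] + np.sum(u[..., 1:] * v[..., 1:], axis=-1)

def to_hyperboloid(s):
    """Embed a Euclidean vector s as the hyperboloid point (sqrt(1+|s|^2), s)."""
    x0 = np.sqrt(1.0 + np.sum(s ** 2, axis=-1, keepdims=True))
    return np.concatenate([x0, s], axis=-1)

def decide(w, x):
    """Linear separator on the hyperboloid: sign of the Lorentzian product."""
    return np.sign(minkowski_inner(w, x))

pts = to_hyperboloid(np.array([[1.0, 0.5], [-0.8, 0.2]]))
w = np.array([0.0, 1.0, 0.0])  # a (not necessarily optimal) spacelike separator
print(decide(w, pts))          # [ 1. -1.]
```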
arXiv Detail & Related papers (2020-04-11T19:11:30Z) - Differentiating through the Fréchet Mean [51.32291896926807]
The Fréchet mean is a generalization of the Euclidean mean.
We show how to differentiate through the Fréchet mean for arbitrary Riemannian manifolds.
This fully integrates the Fréchet mean into the hyperbolic neural network pipeline.
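The Fréchet mean minimizes the sum of squared geodesic distances; a brute-force sketch on the Poincaré ball using a finite-difference gradient (not the paper's differentiable solver) is shown below.

```python
import numpy as np

def poincare_dist(x, y):
    """Geodesic distance on the Poincare ball (curvature -1)."""
    diff = np.sum((x - y) ** 2, axis=-1)
    denom = (1 - np.sum(x ** 2, axis=-1)) * (1 - np.sum(y ** 2, axis=-1))
    return np.arccosh(1 + 2 * diff / denom)

def frechet_mean(points, lr=0.05, steps=300, eps=1e-5):
    """Minimize sum_i d(m, x_i)^2 by gradient descent with a
    finite-difference gradient -- a sketch, not an exact solver."""
    m = points.mean(axis=0)  # Euclidean mean as initialization
    for _ in range(steps):
        grad = np.zeros_like(m)
        for j in range(m.shape[0]):
            e = np.zeros_like(m)
            e[j] = eps
            f_plus = np.sum(poincare_dist(m + e, points) ** 2)
            f_minus = np.sum(poincare_dist(m - e, points) ** 2)
            grad[j] = (f_plus - f_minus) / (2 * eps)
        m = m - lr * grad
        m = m * min(1.0, 0.999 / np.linalg.norm(m))  # stay inside the ball
    return m

pts = np.array([[0.1, 0.2], [0.3, -0.1], [-0.2, 0.4]])
print(frechet_mean(pts))  # hyperbolic "center" of the three points
```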
arXiv Detail & Related papers (2020-02-29T19:49:38Z)