Angular Gradient Sign Method: Uncovering Vulnerabilities in Hyperbolic Networks
- URL: http://arxiv.org/abs/2511.12985v1
- Date: Mon, 17 Nov 2025 05:16:07 GMT
- Title: Angular Gradient Sign Method: Uncovering Vulnerabilities in Hyperbolic Networks
- Authors: Minsoo Jo, Dongyoon Yang, Taesup Kim
- Abstract summary: Adversarial examples in neural networks have been extensively studied in Euclidean geometry. Recent advances in \textit{hyperbolic networks} call for a reevaluation of attack strategies in non-Euclidean geometries. We propose a novel adversarial attack that explicitly leverages the geometric properties of hyperbolic space.
- Score: 11.409989603679612
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Adversarial examples in neural networks have been extensively studied in Euclidean geometry, but recent advances in \textit{hyperbolic networks} call for a reevaluation of attack strategies in non-Euclidean geometries. Existing methods such as FGSM and PGD apply perturbations without regard to the underlying hyperbolic structure, potentially leading to inefficient or geometrically inconsistent attacks. In this work, we propose a novel adversarial attack that explicitly leverages the geometric properties of hyperbolic space. Specifically, we compute the gradient of the loss function in the tangent space of hyperbolic space, decompose it into a radial (depth) component and an angular (semantic) component, and apply perturbation derived solely from the angular direction. Our method generates adversarial examples by focusing perturbations in semantically sensitive directions encoded in angular movement within the hyperbolic geometry. Empirical results on image classification, cross-modal retrieval tasks and network architectures demonstrate that our attack achieves higher fooling rates than conventional adversarial attacks, while producing high-impact perturbations with deeper insights into vulnerabilities of hyperbolic embeddings. This work highlights the importance of geometry-aware adversarial strategies in curved representation spaces and provides a principled framework for attacking hierarchical embeddings.
Related papers
- Hyperbolic Graph Neural Networks Under the Microscope: The Role of Geometry-Task Alignment [5.116264249622881]
Hyperbolic Graph Neural Networks (HGNNs) have been widely adopted as a principled choice for representation learning on tree-like graphs. We propose an additional condition of geometry-task alignment, i.e., whether the metric structure of the target follows that of the input graph. We show that HGNNs consistently outperform Euclidean models under such alignment, while their advantage vanishes otherwise.
arXiv Detail & Related papers (2026-02-02T09:01:58Z) - Riemannian Flow Matching for Disentangled Graph Domain Adaptation [51.98961391065951]
Graph Domain Adaptation (GDA) typically uses adversarial learning to align graph embeddings in Euclidean space. DisRFM is a geometry-aware GDA framework that unifies embedding and flow-based transport.
arXiv Detail & Related papers (2026-01-31T11:05:35Z) - Geometry-Aware Backdoor Attacks: Leveraging Curvature in Hyperbolic Embeddings [3.8806403512213787]
Non-Euclidean foundation models place representations in curved spaces such as hyperbolic geometry. Small input changes appear subtle to standard input-space detectors but produce disproportionately large shifts in the model's representation space. We propose a geometry-adaptive trigger and evaluate it across tasks and architectures.
arXiv Detail & Related papers (2025-10-07T19:24:43Z) - Adversarial Attacks on Hyperbolic Networks [14.993556473864228]
This paper proposes hyperbolic alternatives to the commonly used FGM and PGD adversarial attacks. Through interpretable synthetic benchmarks and experiments on existing datasets, we show how the existing and newly proposed attacks differ. We find that these networks suffer from different types of vulnerabilities and that the newly proposed hyperbolic attacks cannot address these differences.
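A hyperbolic counterpart to FGM, of the kind such papers describe, could be sketched as follows. This is an illustrative sketch on the Poincaré ball with curvature -1, not the authors' implementation; the function names and the step rule are assumptions. The key differences from Euclidean FGM are rescaling the Euclidean gradient to the Riemannian gradient and taking the step with the exponential map instead of straight-line addition.

```python
import numpy as np

def mobius_add(a, b):
    """Möbius addition on the Poincaré ball (curvature -1)."""
    ab = np.dot(a, b)
    na2, nb2 = np.dot(a, a), np.dot(b, b)
    num = (1 + 2 * ab + nb2) * a + (1 - na2) * b
    den = 1 + 2 * ab + na2 * nb2
    return num / den

def hyperbolic_fgm_step(p, euc_grad, eps=0.05):
    """One FGM-like attack step at point p on the Poincaré ball (illustrative)."""
    lam = 2.0 / (1.0 - np.dot(p, p))     # conformal factor at p
    rgrad = euc_grad / lam**2            # Euclidean -> Riemannian gradient
    norm = np.linalg.norm(rgrad)
    if norm < 1e-12:
        return p
    v = eps * rgrad / norm               # step of length eps in the tangent space
    # exponential map: exp_p(v) = p (+) tanh(lam * ||v|| / 2) * v / ||v||
    nv = np.linalg.norm(v)
    step = np.tanh(lam * nv / 2.0) * v / nv
    return mobius_add(p, step)
```

Because the exponential map keeps iterates inside the unit ball, a PGD-style variant would repeat this step without needing an explicit clipping projection back onto the manifold.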
arXiv Detail & Related papers (2024-12-02T13:48:41Z) - Understanding and Mitigating Hyperbolic Dimensional Collapse in Graph Contrastive Learning [70.0681902472251]
We propose a novel contrastive learning framework to learn high-quality graph embeddings in hyperbolic space. Specifically, we design an alignment metric that effectively captures the hierarchical data-invariant information. We show that in hyperbolic space one has to address leaf- and height-level uniformity, which relate to properties of trees.
arXiv Detail & Related papers (2023-10-27T15:31:42Z) - Hyperbolic vs Euclidean Embeddings in Few-Shot Learning: Two Sides of the Same Coin [49.12496652756007]
We show that the best few-shot results are attained for hyperbolic embeddings at a common hyperbolic radius.
In contrast to prior benchmark results, we demonstrate that better performance can be achieved by a fixed-radius encoder equipped with the Euclidean metric.
arXiv Detail & Related papers (2023-09-18T14:51:46Z) - A Unification Framework for Euclidean and Hyperbolic Graph Neural Networks [8.080621697426997]
Hyperbolic neural networks can effectively capture the inherent hierarchy of graph datasets.
They entangle multiple incongruent (gyro-)vector spaces within a layer, which makes them limited in terms of generalization and scalability.
We propose the Poincaré disk model as our search space, and apply all approximations on the disk.
We demonstrate that our model not only leverages the power of Euclidean networks such as interpretability and efficient execution of various model components, but also outperforms both Euclidean and hyperbolic counterparts on various benchmarks.
arXiv Detail & Related papers (2022-06-09T05:33:02Z) - HRCF: Enhancing Collaborative Filtering via Hyperbolic Geometric Regularization [52.369435664689995]
We introduce \textit{Hyperbolic Regularization powered Collaborative Filtering} (HRCF) and design a geometry-aware hyperbolic regularizer.
Specifically, the proposal boosts the optimization procedure via root alignment and an origin-aware penalty.
Our proposal is able to tackle the over-smoothing problem caused by hyperbolic aggregation and also brings the models a better discriminative ability.
arXiv Detail & Related papers (2022-04-18T06:11:44Z) - Canonical foliations of neural networks: application to robustness [0.0]
Deep learning models are known to be vulnerable to adversarial attacks.
We propose a new vision on neural network robustness using Riemannian geometry and foliation theory.
The proposal is illustrated by creating a new adversarial attack that takes the curvature of the data space into account.
arXiv Detail & Related papers (2022-03-02T08:09:57Z) - Defensive Tensorization [113.96183766922393]
We propose defensive tensorization, an adversarial defence technique that leverages a latent high-order factorization of the network.
We empirically demonstrate the effectiveness of our approach on standard image classification benchmarks.
We validate the versatility of our approach across domains and low-precision architectures by considering an audio task and binary networks.
arXiv Detail & Related papers (2021-10-26T17:00:16Z) - Discriminator-Free Generative Adversarial Attack [87.71852388383242]
Generative-based adversarial attacks can get rid of this limitation.
A Symmetric Saliency-based Auto-Encoder (SSAE) generates the perturbations.
The adversarial examples generated by SSAE not only make the widely used models collapse, but also achieve good visual quality.
arXiv Detail & Related papers (2021-07-20T01:55:21Z) - Hyperbolic Neural Networks++ [66.16106727715061]
We generalize the fundamental components of neural networks in a single hyperbolic geometry model, namely, the Poincaré ball model.
Experiments show the superior parameter efficiency of our methods compared to conventional hyperbolic components, as well as greater stability and better performance than their Euclidean counterparts.
arXiv Detail & Related papers (2020-06-15T08:23:20Z) - Hyperbolic Manifold Regression [33.40757136529844]
We consider the problem of performing manifold-valued regression onto a hyperbolic space as an intermediate component for a number of relevant machine learning applications.
We propose a novel perspective on two challenging tasks: 1) hierarchical classification via label embeddings and 2) taxonomy extension of hyperbolic representations.
Our experiments show that the strategy of leveraging the hyperbolic geometry is promising.
arXiv Detail & Related papers (2020-05-28T10:16:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.