HyperTopo-Adapters: Geometry- and Topology-Aware Segmentation of Leaf Lesions on Frozen Encoders
- URL: http://arxiv.org/abs/2601.06067v1
- Date: Mon, 29 Dec 2025 04:27:26 GMT
- Title: HyperTopo-Adapters: Geometry- and Topology-Aware Segmentation of Leaf Lesions on Frozen Encoders
- Authors: Chimdi Walter Ndubuisi, Toni Kazic
- Abstract summary: Leaf-lesion segmentation is topology-sensitive; small merges, splits, or false holes can be meaningful descriptors of biochemical pathways. I explore HyperTopo-Adapters, a lightweight, parameter-efficient head trained on top of a frozen vision encoder. Early results show consistent gains in boundary and topology metrics on a Kaggle leaf-lesion dataset.
- Score: 0.14323566945483493
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Leaf-lesion segmentation is topology-sensitive: small merges, splits, or false holes can be biologically meaningful descriptors of biochemical pathways, yet they are weakly penalized by standard pixel-wise losses in Euclidean latents. I explore HyperTopo-Adapters, a lightweight, parameter-efficient head trained on top of a frozen vision encoder, which embeds features on a product manifold -- hyperbolic + Euclidean + spherical (H + E + S) -- to encourage hierarchical separation (H), local linear detail (E), and global closure (S). A topology prior complements Dice/BCE in two forms: (i) persistent-homology (PH) distance for evaluation and selection, and (ii) a differentiable surrogate that combines a soft Euler-characteristic match with total variation regularization for stable training. I introduce warm-ups for both the hyperbolic contrastive term and the topology prior, per-sample evaluation of structure-aware metrics (Boundary-F1, Betti errors, PD distance), and a min-PD within top-K Dice rule for checkpoint selection. On a Kaggle leaf-lesion dataset (N=2,940), early results show consistent gains in boundary and topology metrics (reducing Delta beta_1 hole error by 9%) while Dice/IoU remain competitive. The study is diagnostic by design: I report controlled ablations (curvature learning, latent dimensions, contrastive temperature, surrogate settings), and ongoing tests varying encoder strength (ResNet-50, DeepLabV3, DINOv2/v3), input resolution, PH weight, and partial unfreezing of late blocks. The contribution is an open, reproducible train/eval suite (available at https://github.com/ChimdiWalter/HyperTopo-Adapters) that isolates geometric/topological priors and surfaces failure modes to guide stronger, topology-preserving architectures.
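The quantity the differentiable surrogate softens is the Euler characteristic of the predicted mask, which for a hard binary mask can be computed with Gray's 2x2 quad-count formula. The sketch below is illustrative only (pure Python, 4-connectivity); the paper's soft version would replace the hard window counts with products of predicted probabilities to make the term differentiable.

```python
def euler_characteristic(mask):
    """Euler characteristic of a binary 2D mask (4-connectivity)
    via Gray's 2x2 quad-count formula:
        chi = (Q1 - Q3 + 2*Qd) / 4
    Q1/Q3: 2x2 windows with exactly 1/3 foreground pixels;
    Qd:    windows whose two foreground pixels sit on a diagonal.
    chi = beta_0 - beta_1, so false holes shift this count."""
    h, w = len(mask), len(mask[0])
    # zero-pad so boundary pixels get full 2x2 window coverage
    padded = [[0] * (w + 2)] + [[0] + list(r) + [0] for r in mask] + [[0] * (w + 2)]
    q1 = q3 = qd = 0
    for i in range(h + 1):
        for j in range(w + 1):
            a, b = padded[i][j], padded[i][j + 1]
            c, d = padded[i + 1][j], padded[i + 1][j + 1]
            s = a + b + c + d
            if s == 1:
                q1 += 1
            elif s == 3:
                q3 += 1
            elif s == 2 and a == d:  # the two foreground pixels are diagonal
                qd += 1
    return (q1 - q3 + 2 * qd) // 4

solid = [[1, 1, 1], [1, 1, 1], [1, 1, 1]]  # one component, no holes
ring  = [[1, 1, 1], [1, 0, 1], [1, 1, 1]]  # one component, one hole
print(euler_characteristic(solid))  # 1
print(euler_characteristic(ring))   # 0
```

A spurious hole in a predicted lesion lowers chi by one relative to the ground truth, which is exactly the mismatch the soft Euler-characteristic term penalizes.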
Related papers
- Spectral Embedding via Chebyshev Bases for Robust DeepONet Approximation [0.6752538702870791]
Spectral-Embedded DeepONet (SEDNet) is a new variant in which the trunk is driven by a fixed Chebyshev spectral dictionary rather than coordinate inputs. SEDNet consistently achieves the lowest relative L2 errors among DeepONet, FEDONet, and SEDONet, with average improvements of about 30-40% over the baseline DeepONet.
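For intuition, a Chebyshev dictionary of the kind SEDNet feeds its trunk can be built with the standard three-term recurrence T_{k+1}(x) = 2x T_k(x) - T_{k-1}(x); the sketch below (function and variable names are mine, not from the paper) evaluates the basis at one coordinate:

```python
def chebyshev_features(x, degree):
    """Evaluate the Chebyshev basis T_0..T_degree at x in [-1, 1]
    using the three-term recurrence T_{k+1} = 2x*T_k - T_{k-1}."""
    feats = [1.0, x][: degree + 1]  # T_0 = 1, T_1 = x
    for _ in range(degree - 1):
        feats.append(2 * x * feats[-1] - feats[-2])
    return feats

# T_2(x) = 2x^2 - 1 and T_3(x) = 4x^3 - 3x, so at x = 0.5:
print(chebyshev_features(0.5, 3))  # [1.0, 0.5, -0.5, -1.0]
```

Replacing raw coordinate inputs with such a fixed dictionary gives the trunk a spectral inductive bias without adding trainable parameters.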
arXiv Detail & Related papers (2025-12-09T22:26:29Z) - Provable FDR Control for Deep Feature Selection: Deep MLPs and Beyond [0.0]
We develop a flexible feature selection framework based on deep neural networks that approximately controls the false discovery rate (FDR), a measure of Type-I error. We show that each coordinate of the gradient-based feature vector admits a marginal normal approximation, thereby supporting the validity of FDR control.
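The marginal normal approximation turns per-feature statistics into p-values, which an FDR procedure can then threshold. As a standard reference point (the classic Benjamini-Hochberg step-up rule, not the paper's deep-feature method):

```python
import math

def z_to_p(z):
    """Two-sided p-value for a z-score under a normal approximation."""
    return math.erfc(abs(z) / math.sqrt(2))

def benjamini_hochberg(p_values, alpha=0.1):
    """Classic BH step-up rule: find the largest rank k with
    p_(k) <= alpha*k/m, then select all hypotheses up to that rank."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    k_max = 0
    for rank, idx in enumerate(order, start=1):
        if p_values[idx] <= alpha * rank / m:
            k_max = rank
    return sorted(order[:k_max])  # indices of selected features

z_scores = [4.2, 0.3, 3.8, -0.1, 5.0]  # strong signals at 0, 2, 4
print(benjamini_hochberg([z_to_p(z) for z in z_scores]))  # [0, 2, 4]
```

The paper's contribution is showing that gradient-based feature statistics from a deep MLP actually satisfy the distributional assumptions such a thresholding step needs.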
arXiv Detail & Related papers (2025-12-04T11:46:06Z) - Neural PDE Solvers with Physics Constraints: A Comparative Study of PINNs, DRM, and WANs [1.131316248570352]
Partial differential equations (PDEs) underpin models across science and engineering, yet analytical solutions are atypical and classical mesh-based solvers can be costly in high dimensions. This dissertation presents a unified comparison of three mesh-free neural PDE solvers, physics-informed neural networks (PINNs), the deep Ritz method (DRM), and weak adversarial networks (WANs), on Poisson problems (up to 5D) and the time-independent Schrödinger equation in 1D/2D.
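What all three solvers share is an objective that drives a PDE residual toward zero at collocation points. For the 1D Poisson problem -u'' = f, the residual a PINN would penalize can be checked on a candidate solution; the sketch below uses central differences in place of the automatic differentiation a real PINN would use, so it is illustrative only:

```python
import math

def poisson_residual(u, f, x, h=1e-4):
    """Residual of -u''(x) = f(x), with u'' approximated by central
    differences. PINNs minimize the mean square of this quantity
    over collocation points (via autodiff, not finite differences)."""
    u_xx = (u(x + h) - 2 * u(x) + u(x - h)) / h ** 2
    return -u_xx - f(x)

# u(x) = sin(pi x) solves -u'' = pi^2 sin(pi x) on (0, 1) with u(0)=u(1)=0
u = lambda x: math.sin(math.pi * x)
f = lambda x: math.pi ** 2 * math.sin(math.pi * x)
mse = sum(poisson_residual(u, f, 0.1 * k) ** 2 for k in range(1, 10)) / 9
print(mse)  # ~0, up to finite-difference and rounding error
```

DRM replaces this strong-form residual with a variational energy, and WANs with a weak-form pairing against adversarially chosen test functions; the comparison in the paper is over exactly these three formulations.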
arXiv Detail & Related papers (2025-10-09T13:41:51Z) - Learning with Norm Constrained, Over-parameterized, Two-layer Neural Networks [54.177130905659155]
Recent studies show that a reproducing kernel Hilbert space (RKHS) is not a suitable space to model functions by neural networks.
In this paper, we study a suitable function space for over-parameterized two-layer neural networks with bounded norms.
arXiv Detail & Related papers (2024-04-29T15:04:07Z) - A Unified Algebraic Perspective on Lipschitz Neural Networks [88.14073994459586]
This paper introduces a novel perspective unifying various types of 1-Lipschitz neural networks.
We show that many existing techniques can be derived and generalized via finding analytical solutions of a common semidefinite programming (SDP) condition.
Our approach, called SDP-based Lipschitz Layers (SLL), allows us to design non-trivial yet efficient generalizations of convex potential layers.
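A linear layer x -> Wx is 1-Lipschitz exactly when the spectral norm of W is at most 1, and that norm is typically estimated by power iteration; the generic sketch below illustrates the quantity being constrained (it is not the SLL parameterization itself):

```python
import math
import random

def spectral_norm(W, iters=200, seed=0):
    """Largest singular value of W (a list of rows) via power iteration
    on W^T W; this equals the Lipschitz constant of x -> W x."""
    rng = random.Random(seed)
    n = len(W[0])
    v = [rng.random() + 0.1 for _ in range(n)]  # generic start vector
    for _ in range(iters):
        Wv = [sum(row[j] * v[j] for j in range(n)) for row in W]
        WtWv = [sum(W[i][j] * Wv[i] for i in range(len(W))) for j in range(n)]
        norm = math.sqrt(sum(c * c for c in WtWv))
        v = [c / norm for c in WtWv]  # converges to top right-singular vector
    Wv = [sum(row[j] * v[j] for j in range(n)) for row in W]
    return math.sqrt(sum(c * c for c in Wv))

# For a diagonal matrix the singular values are the |diagonal entries|
print(spectral_norm([[3.0, 0.0], [0.0, 1.0]]))  # ~3.0
```

The SDP condition in the paper generalizes this single-matrix bound to whole nonlinear layers, which is what lets it recover existing 1-Lipschitz constructions as special cases.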
arXiv Detail & Related papers (2023-03-06T14:31:09Z) - Combating Mode Collapse in GANs via Manifold Entropy Estimation [70.06639443446545]
Generative Adversarial Networks (GANs) have shown compelling results in various tasks and applications.
We propose a novel training pipeline to address the mode collapse issue of GANs.
arXiv Detail & Related papers (2022-08-25T12:33:31Z) - Polarized Self-Attention: Towards High-quality Pixel-wise Regression [19.2303932008785]
This paper presents the Polarized Self-Attention(PSA) block that incorporates two critical designs towards high-quality pixel-wise regression.
Experimental results show that PSA boosts standard baselines by $2-4$ points, and boosts state-of-the-arts by $1-2$ points on 2D pose estimation and semantic segmentation benchmarks.
arXiv Detail & Related papers (2021-07-02T01:03:11Z) - Robust Implicit Networks via Non-Euclidean Contractions [63.91638306025768]
Implicit neural networks show improved accuracy and significant reduction in memory consumption.
They can suffer from ill-posedness and convergence instability.
This paper provides a new framework to design well-posed and robust implicit neural networks.
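Well-posedness of an implicit layer x = phi(Wx + b) follows when the map is a contraction, in which case Picard iteration converges to the unique equilibrium. A minimal illustrative sketch (the paper's non-Euclidean contraction conditions are more general than the naive norm bound used here):

```python
import math

def implicit_layer(W, b, tol=1e-10, max_iter=1000):
    """Solve the fixed point x = tanh(W x + b) by Picard iteration.
    Since tanh is 1-Lipschitz, the map contracts when ||W|| < 1,
    guaranteeing a unique, stably computable equilibrium."""
    n = len(b)
    x = [0.0] * n
    for _ in range(max_iter):
        x_new = [math.tanh(sum(W[i][j] * x[j] for j in range(n)) + b[i])
                 for i in range(n)]
        if max(abs(a - c) for a, c in zip(x_new, x)) < tol:
            return x_new
        x = x_new
    return x

W = [[0.3, -0.2], [0.1, 0.4]]  # small enough norm to contract
b = [0.5, -0.5]
x = implicit_layer(W, b)
print(x)  # equilibrium: x - tanh(Wx + b) is ~0
```

Without such a contraction condition the iteration can diverge or settle on different equilibria from different starts, which is the ill-posedness the paper's framework rules out.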
arXiv Detail & Related papers (2021-06-06T18:05:02Z) - Bayesian Active Learning by Disagreements: A Geometric Perspective [64.39292542263286]
We present geometric active learning by disagreements (GBALD), a framework that performs BALD on its core-set construction interacting with model uncertainty estimation.
Experiments show that GBALD has slight perturbations to noisy and repeated samples, and outperforms BALD, BatchBALD and other existing deep active learning approaches.
arXiv Detail & Related papers (2021-05-06T09:37:59Z) - Simple and Effective Prevention of Mode Collapse in Deep One-Class Classification [93.2334223970488]
We propose two regularizers to prevent hypersphere collapse in deep SVDD.
The first regularizer is based on injecting random noise via the standard cross-entropy loss.
The second regularizer penalizes the minibatch variance when it becomes too small.
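The variance-based regularizer can be written directly: compute the per-dimension variance of the minibatch embeddings and add a penalty when it drops below a floor. The hinge form and names below are my illustration of the idea, not the paper's exact loss:

```python
def variance_penalty(embeddings, floor=1e-2):
    """Penalize near-collapse of a minibatch of embeddings: hinge on the
    per-dimension variance, so the loss is zero once every dimension's
    variance exceeds `floor` (the hypersphere has not collapsed)."""
    n, d = len(embeddings), len(embeddings[0])
    penalty = 0.0
    for j in range(d):
        mean = sum(e[j] for e in embeddings) / n
        var = sum((e[j] - mean) ** 2 for e in embeddings) / n
        penalty += max(0.0, floor - var)
    return penalty

collapsed = [[0.5, 0.5]] * 8  # all embeddings identical: worst case
spread = [[0.0, 1.0], [1.0, 0.0], [0.0, 0.0], [1.0, 1.0]]
print(variance_penalty(collapsed))  # 0.02 (= 2 dims * floor, maximal)
print(variance_penalty(spread))     # 0.0  (variance 0.25 per dim)
```

The penalty is maximal exactly when all embeddings coincide, the degenerate solution deep SVDD is prone to, and vanishes as soon as the batch spreads out.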
arXiv Detail & Related papers (2020-01-24T03:44:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.