Set Transformer Architectures and Synthetic Data Generation for Flow-Guided Nanoscale Localization
- URL: http://arxiv.org/abs/2508.16200v1
- Date: Fri, 22 Aug 2025 08:22:25 GMT
- Title: Set Transformer Architectures and Synthetic Data Generation for Flow-Guided Nanoscale Localization
- Authors: Mika Leo Hube, Filip Lemic, Ethungshan Shitiri, Gerard Calvo Bartra, Sergi Abadal, Xavier Costa Pérez
- Abstract summary: Flow-guided localization (FGL) enables the identification of spatial regions within the human body that contain an event of diagnostic interest. Existing FGL solutions rely on graph models with fixed topologies or handcrafted features, which limit their adaptability to anatomical variability and hinder scalability. Our formulation treats nanodevices' circulation time reports as unordered sets, enabling permutation-invariant, variable-length input processing without relying on spatial priors.
- Score: 13.521075124606973
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Flow-guided Localization (FGL) enables the identification of spatial regions within the human body that contain an event of diagnostic interest. FGL does so by leveraging the passive movement of energy-constrained nanodevices circulating through the bloodstream. Existing FGL solutions rely on graph models with fixed topologies or handcrafted features, which limit their adaptability to anatomical variability and hinder scalability. In this work, we explore the use of Set Transformer architectures to address these limitations. Our formulation treats nanodevices' circulation time reports as unordered sets, enabling permutation-invariant, variable-length input processing without relying on spatial priors. To improve robustness under data scarcity and class imbalance, we integrate synthetic data generation via deep generative models, including CGAN, WGAN, WGAN-GP, and CVAE. These models are trained to replicate realistic circulation time distributions conditioned on vascular region labels, and are used to augment the training data. Our results show that the Set Transformer achieves classification accuracy comparable to that of Graph Neural Network (GNN) baselines, while providing, by design, improved generalization to anatomical variability. The findings highlight the potential of permutation-invariant models and synthetic augmentation for robust and scalable nanoscale localization.
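To make the abstract's two ingredients concrete, here are hedged sketches in PyTorch rather than the authors' implementation: the scalar circulation-time feature, the 25-region label space, and all layer sizes are illustrative assumptions. The first sketch shows the permutation-invariant structure the abstract describes: self-attention over set elements followed by attention-based pooling, so the output is unchanged under any reordering of the nanodevice reports.

```python
# Hedged sketch of a Set Transformer-style classifier (not the authors' code).
# Each sample is an unordered, variable-length set of circulation-time reports.
import torch
import torch.nn as nn

class SAB(nn.Module):
    """Set Attention Block: self-attention over set elements."""
    def __init__(self, dim, heads):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.ff = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.norm1, self.norm2 = nn.LayerNorm(dim), nn.LayerNorm(dim)

    def forward(self, x, pad_mask=None):
        h, _ = self.attn(x, x, x, key_padding_mask=pad_mask)
        x = self.norm1(x + h)
        return self.norm2(x + self.ff(x))

class PMA(nn.Module):
    """Pooling by Multihead Attention: a learned seed attends over the set."""
    def __init__(self, dim, heads):
        super().__init__()
        self.seed = nn.Parameter(torch.randn(1, 1, dim))
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x, pad_mask=None):
        s = self.seed.expand(x.size(0), -1, -1)
        out, _ = self.attn(s, x, x, key_padding_mask=pad_mask)
        return out.squeeze(1)

class SetTransformerClassifier(nn.Module):
    def __init__(self, in_dim=1, dim=64, heads=4, n_regions=25):  # sizes assumed
        super().__init__()
        self.embed = nn.Linear(in_dim, dim)
        self.blocks = nn.ModuleList([SAB(dim, heads), SAB(dim, heads)])
        self.pool = PMA(dim, heads)
        self.head = nn.Linear(dim, n_regions)

    def forward(self, x, pad_mask=None):
        # x: (batch, set_size, in_dim); the order of reports is irrelevant
        h = self.embed(x)
        for blk in self.blocks:
            h = blk(h, pad_mask)
        return self.head(self.pool(h, pad_mask))

model = SetTransformerClassifier()
logits = model(torch.randn(8, 32, 1))  # (8, 25) region scores
```

On the augmentation side, the abstract conditions deep generative models on vascular region labels. A CVAE, one of the four families listed, can be sketched just as compactly; shapes are again assumptions.

```python
# Hedged CVAE sketch for label-conditioned circulation-time generation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CVAE(nn.Module):
    def __init__(self, x_dim=1, n_regions=25, z_dim=8, hid=64):  # sizes assumed
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim + n_regions, hid), nn.ReLU())
        self.mu, self.logvar = nn.Linear(hid, z_dim), nn.Linear(hid, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim + n_regions, hid), nn.ReLU(),
                                 nn.Linear(hid, x_dim))

    def forward(self, x, y):                     # y: one-hot region label
        h = self.enc(torch.cat([x, y], dim=-1))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        return self.dec(torch.cat([z, y], dim=-1)), mu, logvar

def cvae_loss(x_hat, x, mu, logvar):
    rec = F.mse_loss(x_hat, x, reduction="sum")              # reconstruction
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kld
```

Synthetic reports for an underrepresented region are then drawn by sampling z from a standard normal and decoding it together with that region's one-hot label.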
Related papers
- Renormalization Group Guided Tensor Network Structure Search [58.0378300612202]
Tensor network structure search (TN-SS) aims to automatically discover optimal network topologies and ranks for efficient tensor decomposition in high-dimensional data representation. We propose RGTN (Renormalization Group guided Tensor Network search), a physics-inspired framework transforming TN-SS via multi-scale renormalization group flows.
arXiv Detail & Related papers (2025-12-31T06:31:43Z)
- Graph Laplacian Transformer with Progressive Sampling for Prostate Cancer Grading [2.9485900021889146]
We propose a Graph Laplacian Attention-Based Transformer (GLAT) integrated with an Iterative Refinement Module (IRM) to enhance both feature learning and spatial consistency. IRM iteratively refines patch selection by leveraging a pretrained ResNet50 for local feature extraction and a foundation model in no-gradient mode for importance scoring. The GLAT models tissue-level connectivity by constructing a graph where patches serve as nodes, ensuring spatial consistency through graph Laplacian constraints; a minimal sketch of such a penalty follows this entry.
arXiv Detail & Related papers (2025-12-11T16:55:57Z)
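For intuition only: the graph Laplacian constraint this entry mentions is typically a smoothness penalty over the patch graph. The sketch below uses an assumed random adjacency and feature sizes, not the paper's construction.

```python
# Hedged sketch: Laplacian smoothness penalty over a patch graph.
import torch

def laplacian_penalty(X, A):
    # X: (n_patches, d) patch features; A: (n_patches, n_patches) adjacency
    L = torch.diag(A.sum(dim=1)) - A        # combinatorial graph Laplacian
    return torch.trace(X.T @ L @ X)         # = 1/2 * sum_ij A_ij ||x_i - x_j||^2

A = (torch.rand(6, 6) > 0.5).float()
A = ((A + A.T) > 0).float().fill_diagonal_(0)  # symmetric, no self-loops
reg = laplacian_penalty(torch.randn(6, 4), A)  # added to the task loss
```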
- GFocal: A Global-Focal Neural Operator for Solving PDEs on Arbitrary Geometries [5.323843026995587]
Transformer-based neural operators have emerged as promising surrogate solvers for partial differential equations. We propose GFocal, a method that enforces simultaneous global and local feature learning and fusion. Experiments show that GFocal achieves state-of-the-art performance with an average 15.2% relative gain in five out of six benchmarks.
arXiv Detail & Related papers (2025-08-06T14:02:39Z)
- Enhancing material behavior discovery using embedding-oriented Physically-Guided Neural Networks with Internal Variables [0.0]
Physically Guided Neural Networks with Internal Variables (PGNNIV) are SciML tools that use only observable data for training and unravel internal state relations. Despite their potential, these models face challenges in scalability when applied to high-dimensional data such as fine-grid spatial fields or time-evolving systems. We propose enhancements to the PGNNIV framework that address these scalability limitations through reduced-order modeling techniques.
arXiv Detail & Related papers (2025-08-01T12:33:21Z)
- Langevin Flows for Modeling Neural Latent Dynamics [81.81271685018284]
We introduce LangevinFlow, a sequential Variational Auto-Encoder where the time evolution of latent variables is governed by the underdamped Langevin equation (a numerical sketch follows this entry). Our approach incorporates physical priors, such as inertia, damping, a learned potential function, and forces, to represent both autonomous and non-autonomous processes in neural systems. Our method outperforms state-of-the-art baselines on synthetic neural populations generated by a Lorenz attractor.
arXiv Detail & Related papers (2025-07-15T17:57:48Z)
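As a worked illustration of the equation this entry names (not the paper's model), an Euler-Maruyama step of the underdamped Langevin dynamics dz = v dt, dv = (-gamma * v - grad U(z)) dt + sqrt(2 * gamma) dW, with an assumed quadratic potential:

```python
# Hedged sketch: simulating underdamped Langevin dynamics in a latent space.
import torch

def langevin_step(z, v, grad_U, gamma=0.5, dt=0.01):  # constants illustrative
    noise = torch.randn_like(v) * (2 * gamma * dt) ** 0.5
    v = v + (-gamma * v - grad_U(z)) * dt + noise     # damping + force + noise
    z = z + v * dt
    return z, v

grad_U = lambda z: z                 # gradient of U(z) = ||z||^2 / 2 (assumed)
z, v = torch.zeros(16), torch.zeros(16)
for _ in range(1000):                # trajectory of a 16-dim latent state
    z, v = langevin_step(z, v, grad_U)
```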
- GALDS: A Graph-Autoencoder-based Latent Dynamics Surrogate model to predict neurite material transport [1.104960878651584]
We propose a Graph-Autoencoder-based Latent Dynamics Surrogate model to streamline the simulation of material transport in neural trees. Our approach achieves a mean relative error of 3% (maximum 8%) and demonstrates a 10-fold speed improvement over previous surrogate model approaches.
arXiv Detail & Related papers (2025-07-15T00:22:00Z)
- GITO: Graph-Informed Transformer Operator for Learning Complex Partial Differential Equations [0.0]
We present a novel graph-informed transformer operator (GITO) architecture for learning complex partial differential equation systems. GITO consists of two main modules: a hybrid graph transformer (HGT) and a transformer neural operator (TNO). Empirical results on benchmark PDE tasks demonstrate that GITO outperforms existing transformer-based neural operators.
arXiv Detail & Related papers (2025-06-16T18:35:45Z)
- High-Fidelity Scientific Simulation Surrogates via Adaptive Implicit Neural Representations [51.90920900332569]
Implicit neural representations (INRs) offer a compact and continuous framework for modeling spatially structured data. Recent approaches address this by introducing additional features along rigid geometric structures. We propose a simple yet effective alternative: Feature-Adaptive INR (FA-INR). A bare-bones INR sketch follows this entry.
arXiv Detail & Related papers (2025-06-07T16:45:17Z)
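For background, a bare-bones INR is just an MLP from coordinates to field values, often with random Fourier features. The sketch below shows that baseline with assumed sizes; it does not reproduce FA-INR's adaptive feature mechanism.

```python
# Hedged sketch: minimal implicit neural representation (coordinate MLP).
import math
import torch
import torch.nn as nn

class INR(nn.Module):
    def __init__(self, coord_dim=3, out_dim=1, n_fourier=64, hid=128, scale=10.0):
        super().__init__()
        self.register_buffer("B", torch.randn(coord_dim, n_fourier) * scale)
        self.mlp = nn.Sequential(nn.Linear(2 * n_fourier, hid), nn.ReLU(),
                                 nn.Linear(hid, hid), nn.ReLU(),
                                 nn.Linear(hid, out_dim))

    def forward(self, coords):                # coords: (n_points, coord_dim)
        proj = 2 * math.pi * coords @ self.B  # random Fourier projection
        return self.mlp(torch.cat([proj.sin(), proj.cos()], dim=-1))

values = INR()(torch.rand(1024, 3))           # field queried at any coordinate
```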
- Graph Adapter of EEG Foundation Models for Parameter Efficient Fine Tuning [1.8946099300030472]
We propose EEG-GraphAdapter (EGA), a parameter-efficient fine-tuning (PEFT) approach designed to address these challenges. EGA is integrated into a pre-trained temporal backbone model as a GNN-based module, freezing the backbone and allowing only the adapter to be fine-tuned. Experimental evaluations on two healthcare-related downstream tasks, Major Depressive Disorder (MDD) and Abnormality Detection (TUAB), show that EGA improves performance by up to 16.1% in F1-score compared with the backbone BENDR model; the freeze-and-adapt pattern is sketched after this entry.
arXiv Detail & Related papers (2024-11-25T07:30:52Z)
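The freeze-and-adapt pattern behind this entry is generic PEFT. The sketch below uses a GRU stand-in for the temporal backbone and an MLP stand-in for the GNN adapter; neither is BENDR nor the paper's module.

```python
# Hedged sketch: freeze a pre-trained backbone, fine-tune only a small adapter.
import torch
import torch.nn as nn

backbone = nn.GRU(input_size=32, hidden_size=64, batch_first=True)  # stand-in
adapter = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 2))

for p in backbone.parameters():
    p.requires_grad = False                  # backbone stays frozen (PEFT)

opt = torch.optim.Adam(adapter.parameters(), lr=1e-3)  # adapter-only updates
x = torch.randn(4, 100, 32)                  # (batch, time, channels) signal
h, _ = backbone(x)
logits = adapter(h[:, -1])                   # classify from last hidden state
```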
- Synthetic location trajectory generation using categorical diffusion models [50.809683239937584]
Diffusion probabilistic models (DPMs) have rapidly evolved to be one of the predominant generative models for the simulation of synthetic data.
We propose using DPMs for the generation of synthetic individual location trajectories (ILTs) which are sequences of variables representing physical locations visited by individuals.
arXiv Detail & Related papers (2024-02-19T15:57:39Z)
- Interpretable A-posteriori Error Indication for Graph Neural Network Surrogate Models [0.0]
This work introduces an interpretability enhancement procedure for graph neural networks (GNNs).
The end result is an interpretable GNN model that isolates regions in physical space, corresponding to sub-graphs, that are intrinsically linked to the forecasting task.
The interpretable GNNs can also be used to identify, during inference, graph nodes that correspond to a majority of the anticipated forecasting error.
arXiv Detail & Related papers (2023-11-13T18:37:07Z)
- Accelerating Scalable Graph Neural Network Inference with Node-Adaptive Propagation [80.227864832092]
Graph neural networks (GNNs) have exhibited exceptional efficacy in a diverse array of applications.
The sheer size of large-scale graphs presents a significant challenge to real-time inference with GNNs.
We propose an online propagation framework and two novel node-adaptive propagation methods.
arXiv Detail & Related papers (2023-10-17T05:03:00Z)
- VTAE: Variational Transformer Autoencoder with Manifolds Learning [144.0546653941249]
Deep generative models have demonstrated successful applications in learning non-linear data distributions through a number of latent variables.
The nonlinearity of the generator implies that the latent space provides an unsatisfactory projection of the data space, which results in poor representation learning.
We show that geodesics and their accurate computation can substantially improve the performance of deep generative models.
arXiv Detail & Related papers (2023-04-03T13:13:19Z)
- Orthogonal Graph Neural Networks [53.466187667936026]
Graph neural networks (GNNs) have received tremendous attention due to their superiority in learning node representations.
However, stacking more convolutional layers significantly decreases the performance of GNNs.
We propose a novel Ortho-GConv, which can generally augment existing GNN backbones to stabilize model training and improve the model's generalization performance; a toy orthogonality regularizer is sketched after this entry.
arXiv Detail & Related papers (2021-09-23T12:39:01Z)
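As flagged in the summary above, a toy version of the idea is to softly penalize each GNN weight matrix for drifting from orthogonality; the paper's actual construction may differ, and the shapes here are illustrative.

```python
# Hedged sketch: soft orthogonality regularizer for stacked GNN weights.
import torch

def orthogonality_penalty(weights):
    loss = torch.tensor(0.0)
    for W in weights:
        G = W.T @ W if W.shape[0] >= W.shape[1] else W @ W.T  # Gram matrix
        loss = loss + ((G - torch.eye(G.shape[0])) ** 2).sum()
    return loss

reg = orthogonality_penalty([torch.randn(64, 32), torch.randn(32, 16)])
# added to the task loss with a small weight during training
```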
This list is automatically generated from the titles and abstracts of the papers on this site.