Topological Social Choice: Designing a Noise-Robust Polar Distance for Persistence Diagrams
- URL: http://arxiv.org/abs/2507.14340v1
- Date: Fri, 18 Jul 2025 19:41:19 GMT
- Title: Topological Social Choice: Designing a Noise-Robust Polar Distance for Persistence Diagrams
- Authors: Athanasios Andrikopoulos, Nikolaos Sampanis
- Abstract summary: Topological Data Analysis (TDA) has emerged as a powerful framework for extracting robust and interpretable features from noisy data. This work introduces a novel conceptual bridge between these domains by proposing a new metric for persistence diagrams tailored to noisy preference data. We define a polar coordinate-based distance that captures both the magnitude and orientation of topological features in a smooth and differentiable manner.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Topological Data Analysis (TDA) has emerged as a powerful framework for extracting robust and interpretable features from noisy high-dimensional data. In the context of Social Choice Theory, where preference profiles and collective decisions are geometrically rich yet sensitive to perturbations, TDA remains largely unexplored. This work introduces a novel conceptual bridge between these domains by proposing a new metric framework for persistence diagrams tailored to noisy preference data. We define a polar coordinate-based distance that captures both the magnitude and orientation of topological features in a smooth and differentiable manner. Our metric addresses key limitations of classical distances, such as bottleneck and Wasserstein, including instability under perturbation, lack of continuity, and incompatibility with gradient-based learning. The resulting formulation offers improved behavior in both theoretical and applied settings. To the best of our knowledge, this is the first study to systematically apply persistent homology to social choice systems, providing a mathematically grounded method for comparing topological summaries of voting structures and preference dynamics. We demonstrate the superiority of our approach through extensive experiments, including robustness tests and supervised learning tasks, and we propose a modular pipeline for building predictive models from online preference data. This work contributes a conceptually novel and computationally effective tool to the emerging interface of topology and decision theory, opening new directions in interpretable machine learning for political and economic systems.
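The abstract does not give the closed form of the proposed metric, so the following is only a minimal sketch of the general idea: persistence diagram points are re-expressed in polar-like coordinates (a radius tied to persistence magnitude and an angle tied to orientation in the birth-death plane) and two diagrams are compared smoothly. The functions `to_polar` and `polar_distance`, the padding rule, and the sorted L2 comparison are illustrative assumptions, not the authors' definition.

```python
import numpy as np

# Illustrative sketch only: the paper's exact formula is not reproduced in the
# abstract, so the polar mapping and the matching rule below are assumptions
# chosen to show a magnitude-and-orientation-aware comparison of diagrams.

def to_polar(diagram):
    """Map (birth, death) points of a persistence diagram to polar coordinates.

    radius ~ persistence magnitude (distance from the diagonal),
    angle  ~ orientation of the feature in the birth-death plane.
    """
    diagram = np.asarray(diagram, dtype=float)
    births, deaths = diagram[:, 0], diagram[:, 1]
    radius = (deaths - births) / np.sqrt(2.0)   # distance to the diagonal
    angle = np.arctan2(deaths, births)          # orientation in the plane
    return np.stack([radius, angle], axis=1)

def polar_distance(diag_a, diag_b):
    """Smooth, order-insensitive comparison of two diagrams in polar form.

    Shorter diagrams are padded with zero-persistence points and points are
    compared after sorting by radius; a real implementation would use an
    optimal matching between points instead of this simple pairing.
    """
    pa, pb = to_polar(diag_a), to_polar(diag_b)
    n = max(len(pa), len(pb))

    def pad(p):
        return np.vstack([p, np.zeros((n - len(p), 2))]) if len(p) < n else p

    pa, pb = pad(pa), pad(pb)
    pa = pa[np.argsort(-pa[:, 0])]   # sort by decreasing persistence
    pb = pb[np.argsort(-pb[:, 0])]
    return float(np.sqrt(np.sum((pa - pb) ** 2)))

# Example: two slightly perturbed versions of the same diagram stay close.
d1 = [(0.1, 0.9), (0.2, 0.5)]
d2 = [(0.12, 0.88), (0.22, 0.48)]
print(polar_distance(d1, d2))
```

Because every step above is a smooth function of the diagram coordinates (up to the sorting), a distance of this kind can be used inside gradient-based learning pipelines, which is the property the abstract emphasizes over bottleneck and Wasserstein distances.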
Related papers
- A statistical physics framework for optimal learning [1.243080988483032]
We combine statistical physics with control theory in a unified theoretical framework to identify optimal protocols in neural network models. We formulate the design of learning protocols as an optimal control problem directly on the dynamics of order parameters. This framework encompasses a variety of learning scenarios, optimization constraints, and control budgets.
arXiv Detail & Related papers (2025-07-10T16:39:46Z) - Reinforcement Learning-Based Dynamic Grouping for Tubular Structure Tracking [14.048453741483092]
We propose a novel framework that casts segment-wise tracking as a Markov Decision Process (MDP). Our method leverages Q-Learning to dynamically explore a graph of segments, computing edge weights on-demand and adaptively expanding the search space. Experimental results on typical tubular structure datasets demonstrate that our method significantly outperforms state-of-the-art point-wise and segment-wise approaches.
arXiv Detail & Related papers (2025-06-21T11:00:17Z) - Automated Manifold Learning for Reduced Order Modeling [1.1126342180866644]
We investigate the use of Geometric Representation Learning for the data-driven discovery of system dynamics from spatial-temporal data. We propose to encode similarity structure in such data in a spatial-temporal proximity graph. We apply a range of classical and deep learning-based manifold learning approaches to learn reduced order dynamics.
arXiv Detail & Related papers (2025-06-02T14:49:55Z) - Edge Classification on Graphs: New Directions in Topological Imbalance [53.42066415249078]
We identify a novel 'Topological Imbalance Issue', which arises from the skewed distribution of edges across different classes.
We introduce Topological Entropy (TE), a novel topological-based metric that measures the topological imbalance for each edge.
We develop two strategies - Topological Reweighting and TE Wedge-based Mixup - to focus training on (synthetic) edges based on their TEs.
arXiv Detail & Related papers (2024-06-17T16:02:36Z) - Nonlinear classification of neural manifolds with contextual information [6.292933471495322]
We introduce a theoretical framework that leverages latent directions in input space, which can be related to contextual information. We derive an exact formula for the context-dependent manifold capacity that depends on manifold geometry and context correlations. Our framework's increased expressivity captures representation reformatting in deep networks at early stages of the layer hierarchy, previously inaccessible to analysis.
arXiv Detail & Related papers (2024-05-10T23:37:31Z) - Structured Optimal Variational Inference for Dynamic Latent Space Models [16.531262817315696]
We consider a latent space model for dynamic networks, where our objective is to estimate the pairwise inner products plus the intercept of the latent positions.
To balance posterior inference and computational scalability, we consider a structured mean-field variational inference framework.
arXiv Detail & Related papers (2022-09-29T22:10:42Z) - Deep Equilibrium Assisted Block Sparse Coding of Inter-dependent
Signals: Application to Hyperspectral Imaging [71.57324258813675]
A dataset of inter-dependent signals is defined as a matrix whose columns demonstrate strong dependencies.
A neural network is employed to act as structure prior and reveal the underlying signal interdependencies.
Deep unrolling and Deep equilibrium based algorithms are developed, forming highly interpretable and concise deep-learning-based architectures.
arXiv Detail & Related papers (2022-03-29T21:00:39Z) - GANTL: Towards Practical and Real-Time Topology Optimization with
Conditional GANs and Transfer Learning [0.0]
We present a deep learning method based on generative adversarial networks for generative design exploration.
The proposed method combines the generative power of conditional GANs with the knowledge transfer capabilities of transfer learning methods to predict optimal topologies for unseen boundary conditions.
arXiv Detail & Related papers (2021-05-07T03:13:32Z) - Fusing the Old with the New: Learning Relative Camera Pose with
Geometry-Guided Uncertainty [91.0564497403256]
We present a novel framework that involves probabilistic fusion between the two families of predictions during network training.
Our network features a self-attention graph neural network, which drives the learning by enforcing strong interactions between different correspondences.
We propose motion parameterizations suitable for learning and show that our method achieves state-of-the-art performance on the challenging DeMoN and ScanNet datasets.
arXiv Detail & Related papers (2021-04-16T17:59:06Z) - GELATO: Geometrically Enriched Latent Model for Offline Reinforcement
Learning [54.291331971813364]
Offline reinforcement learning approaches can be divided into proximal and uncertainty-aware methods.
In this work, we demonstrate the benefit of combining the two in a latent variational model.
Our proposed metrics measure both the quality of out-of-distribution samples as well as the discrepancy of examples in the data.
arXiv Detail & Related papers (2021-02-22T19:42:40Z) - Developing Constrained Neural Units Over Time [81.19349325749037]
This paper focuses on an alternative way of defining Neural Networks, that is different from the majority of existing approaches.
The structure of the neural architecture is defined by means of a special class of constraints that are extended also to the interaction with data.
The proposed theory is cast into the time domain, in which data are presented to the network in an ordered manner.
arXiv Detail & Related papers (2020-09-01T09:07:25Z) - Learning Connectivity of Neural Networks from a Topological Perspective [80.35103711638548]
We propose a topological perspective to represent a network into a complete graph for analysis.
By assigning learnable parameters to the edges which reflect the magnitude of connections, the learning process can be performed in a differentiable manner.
This learning process is compatible with existing networks and owns adaptability to larger search spaces and different tasks.
arXiv Detail & Related papers (2020-08-19T04:53:31Z)