Causal Manifold Fairness: Enforcing Geometric Invariance in Representation Learning
- URL: http://arxiv.org/abs/2601.03032v1
- Date: Tue, 06 Jan 2026 14:05:22 GMT
- Title: Causal Manifold Fairness: Enforcing Geometric Invariance in Representation Learning
- Authors: Vidhi Rathore
- Abstract summary: We introduce Causal Manifold Fairness (CMF), a novel framework that bridges causal inference and geometric deep learning. By enforcing constraints on the Jacobian and Hessian of the decoder, CMF ensures that the rules of the latent space are preserved across demographic groups. We validate CMF on synthetic Structural Causal Models (SCMs), demonstrating that it effectively disentangles sensitive geometric warping while preserving task utility.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Fairness in machine learning is increasingly critical, yet standard approaches often treat data as static points in a high-dimensional space, ignoring the underlying generative structure. We posit that sensitive attributes (e.g., race, gender) do not merely shift data distributions but causally warp the geometry of the data manifold itself. To address this, we introduce Causal Manifold Fairness (CMF), a novel framework that bridges causal inference and geometric deep learning. CMF learns a latent representation where the local Riemannian geometry, defined by the metric tensor and curvature, remains invariant under counterfactual interventions on sensitive attributes. By enforcing constraints on the Jacobian and Hessian of the decoder, CMF ensures that the rules of the latent space (distances and shapes) are preserved across demographic groups. We validate CMF on synthetic Structural Causal Models (SCMs), demonstrating that it effectively disentangles sensitive geometric warping while preserving task utility, offering a rigorous quantification of the fairness-utility trade-off via geometric metrics.
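The abstract's core mechanism, pulling back a Riemannian metric through the decoder Jacobian and penalizing its change under a counterfactual intervention on the sensitive attribute, can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the decoder, the attribute's warping effect, and the penalty form are hypothetical stand-ins, and the Jacobian is taken by finite differences rather than autodiff.

```python
import numpy as np

def decoder(z, a):
    # Hypothetical decoder g(z, a): latent z and sensitive attribute a -> observation.
    # The attribute multiplicatively warps the output, mimicking "geometric warping".
    W = np.array([[1.0, 0.5], [0.2, 1.0], [0.3, 0.7]])
    return np.tanh(W @ z) * (1.0 + 0.3 * a * z[0])

def jacobian(f, z, eps=1e-5):
    # Central-difference Jacobian of f at z (rows: output dims, cols: latent dims).
    z = np.asarray(z, dtype=float)
    J = np.zeros((f(z).size, z.size))
    for j in range(z.size):
        dz = np.zeros_like(z)
        dz[j] = eps
        J[:, j] = (f(z + dz) - f(z - dz)) / (2 * eps)
    return J

def metric_tensor(f, z):
    # Pullback metric G = J^T J: how the decoder distorts latent distances locally.
    J = jacobian(f, z)
    return J.T @ J

def cmf_penalty(z, a0=0.0, a1=1.0):
    # Frobenius gap between the local metrics under the intervention a0 -> a1;
    # zero means the latent geometry at z is invariant to the sensitive attribute.
    G0 = metric_tensor(lambda v: decoder(v, a0), z)
    G1 = metric_tensor(lambda v: decoder(v, a1), z)
    return np.linalg.norm(G0 - G1, ord="fro")
```

In a training loop, a term like `cmf_penalty` averaged over latent samples would be added to the task loss; the Hessian-based curvature constraint mentioned in the abstract would extend this with second derivatives of the decoder.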
Related papers
- The Shape of Beliefs: Geometry, Dynamics, and Interventions along Representation Manifolds of Language Models' Posteriors [24.477029700560113]
Large language models (LLMs) represent prompt-conditioned beliefs (posteriors over answers and claims). We study a controlled setting in which Llama-3.2 generates samples from a normal distribution by implicitly inferring its parameters. We find that representations of a curved "belief manifold" for these parameters form with sufficient in-context learning.
arXiv Detail & Related papers (2026-02-02T16:45:05Z) - Riemannian Flow Matching for Disentangled Graph Domain Adaptation [51.98961391065951]
Graph Domain Adaptation (GDA) typically uses adversarial learning to align graph embeddings in Euclidean space. DisRFM is a geometry-aware GDA framework that unifies embedding and flow-based transport.
arXiv Detail & Related papers (2026-01-31T11:05:35Z) - An approach to Fisher-Rao metric for infinite dimensional non-parametric information geometry [0.6138671548064355]
Being infinite dimensional, non-parametric information geometry has long faced an "intractability barrier". This paper introduces a novel framework to resolve the intractability with an Orthogonal Decomposition of the Tangent Space. By defining the Information Capture Ratio, we provide a rigorous method for estimating intrinsic dimensionality in high-dimensional data.
arXiv Detail & Related papers (2025-12-25T00:18:41Z) - Manifold Percolation: from generative model to Reinforce learning [0.26905021039717986]
Generative modeling is typically framed as learning mapping rules, but from an observer's perspective without access to these rules, the task becomes disentangling the geometric support from the probability distribution. We propose that continuum percolation is uniquely suited to this support analysis, as the sampling process effectively projects high-dimensional density estimation onto a geometric counting problem on the support.
arXiv Detail & Related papers (2025-11-25T17:12:42Z) - GeoGNN: Quantifying and Mitigating Semantic Drift in Text-Attributed Graphs [59.61242815508687]
Graph neural networks (GNNs) on text-attributed graphs (TAGs) encode node texts using pretrained language models (PLMs) and propagate these embeddings through linear neighborhood aggregation. This work introduces a local PCA-based metric that measures the degree of semantic drift and provides the first quantitative framework to analyze how different aggregation mechanisms affect manifold structure.
arXiv Detail & Related papers (2025-11-12T06:48:43Z) - Variational Geometric Information Bottleneck: Learning the Shape of Understanding [0.0]
Variational Geometric Information Bottleneck (V-GIB) is a variational estimator that unifies mutual-information compression and curvature regularization. V-GIB provides a principled and measurable route to representations that are geometrically coherent, data-efficient, and aligned with human-understandable structure.
arXiv Detail & Related papers (2025-11-04T11:33:54Z) - Latent Iterative Refinement Flow: A Geometric-Constrained Approach for Few-Shot Generation [5.062604189239418]
We introduce Latent Iterative Refinement Flow (LIRF), a novel approach to few-shot generation. LIRF establishes a stable latent space using an autoencoder trained with our novel manifold-preservation loss. Within this cycle, candidate samples are refined by a geometric correction operator, a provably contractive mapping.
arXiv Detail & Related papers (2025-09-24T08:57:21Z) - Calibrating Biased Distribution in VFM-derived Latent Space via Cross-Domain Geometric Consistency [52.52950138164424]
We show that when leveraging the off-the-shelf (vision) foundation models for feature extraction, the geometric shapes of the resulting feature distributions exhibit remarkable transferability across domains and datasets. We embody our geometric knowledge-guided distribution calibration framework in two popular and challenging settings: federated learning and long-tailed recognition. In long-tailed learning, it utilizes the geometric knowledge transferred from sample-rich categories to recover the true distribution for sample-scarce tail classes.
arXiv Detail & Related papers (2025-08-19T05:22:59Z) - CP$^2$: Leveraging Geometry for Conformal Prediction via Canonicalization [51.716834831684004]
We study the problem of conformal prediction (CP) under geometric data shifts. We propose integrating geometric information, such as geometric pose, into the conformal procedure to reinstate its guarantees.
arXiv Detail & Related papers (2025-06-19T10:12:02Z) - Towards Understanding and Mitigating Dimensional Collapse in Heterogeneous Federated Learning [112.69497636932955]
Federated learning aims to train models across different clients without the sharing of data for privacy considerations.
We study how data heterogeneity affects the representations of the globally aggregated models.
We propose FedDecorr, a novel method that can effectively mitigate dimensional collapse in federated learning.
arXiv Detail & Related papers (2022-10-01T09:04:17Z) - Shape And Structure Preserving Differential Privacy [70.08490462870144]
We show how the gradient of the squared distance function offers better control over sensitivity than the Laplace mechanism.
arXiv Detail & Related papers (2022-09-21T18:14:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the accuracy of this list (including all information) and is not responsible for any consequences of its use.