DFNN: A Deep Fréchet Neural Network Framework for Learning Metric-Space-Valued Responses
- URL: http://arxiv.org/abs/2510.17072v1
- Date: Mon, 20 Oct 2025 00:57:30 GMT
- Title: DFNN: A Deep Fréchet Neural Network Framework for Learning Metric-Space-Valued Responses
- Authors: Kyum Kim, Yaqing Chen, Paromita Dubey
- Abstract summary: Deep Fréchet neural networks (DFNNs) are an end-to-end deep learning framework for predicting non-Euclidean responses from Euclidean predictors. We establish a universal approximation theorem for DFNNs, advancing the state of the art in neural network approximation theory. Empirical studies on synthetic distributional and network-valued responses, as well as a real-world application to predicting employment occupational compositions, demonstrate that DFNNs consistently outperform existing methods.
- Score: 0.25778694761493826
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Regression with non-Euclidean responses -- e.g., probability distributions, networks, symmetric positive-definite matrices, and compositions -- has become increasingly important in modern applications. In this paper, we propose deep Fréchet neural networks (DFNNs), an end-to-end deep learning framework for predicting non-Euclidean responses -- treated as random objects in a metric space -- from Euclidean predictors. Our method brings the representation-learning power of deep neural networks (DNNs) to the task of approximating conditional Fréchet means of the response given the predictors, the metric-space analogue of conditional expectations, by minimizing a Fréchet risk. The framework is highly flexible, accommodating diverse metrics and high-dimensional predictors. We establish a universal approximation theorem for DFNNs, advancing the state of the art of neural network approximation theory to general metric-space-valued responses without making model assumptions or relying on local smoothing. Empirical studies on synthetic distributional and network-valued responses, as well as a real-world application to predicting employment occupational compositions, demonstrate that DFNNs consistently outperform existing methods.
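The conditional Fréchet mean targeted here is $m_\oplus(x) = \mathrm{argmin}_{\omega \in \Omega} \, \mathbb{E}[d^2(Y, \omega) \mid X = x]$, so training reduces to minimizing an empirical Fréchet risk. The abstract ships no code, so the following is only a minimal sketch of that idea, assuming distributional responses encoded as quantile functions on a fixed grid, where the 2-Wasserstein distance between distributions equals the $L^2$ distance between quantile functions; the architecture and the name `FrechetNet` are our assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class FrechetNet(nn.Module):
    """Maps Euclidean predictors to a distributional response encoded
    as a quantile function evaluated on a fixed grid."""
    def __init__(self, in_dim: int, grid_size: int = 100):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, grid_size),
        )

    def forward(self, x):
        raw = self.body(x)
        # Cumulative softplus increments keep the quantile function
        # non-decreasing, so every output is a valid distribution.
        start, steps = raw[:, :1], nn.functional.softplus(raw[:, 1:])
        return torch.cat([start, start + steps.cumsum(dim=1)], dim=1)

def frechet_risk(pred_q, true_q):
    """Empirical Fréchet risk: mean squared 2-Wasserstein distance,
    i.e. a mean squared L2 distance between quantile functions."""
    return ((pred_q - true_q) ** 2).mean()

# Toy usage on synthetic data.
x = torch.randn(256, 10)
true_q = torch.sort(torch.randn(256, 100), dim=1).values  # valid quantile curves
model = FrechetNet(in_dim=10)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    loss = frechet_risk(model(x), true_q)
    loss.backward()
    opt.step()
```

For other metric spaces (networks, SPD matrices, compositions), only the output parameterization and the squared-distance function would change; the Fréchet-risk training loop stays the same.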
Related papers
- End-to-End Deep Learning for Predicting Metric Space-Valued Outputs [4.855663359344747]
We introduce E2M, a deep learning framework for predicting metric space-valued outputs. E2M performs prediction via a weighted Fréchet mean over training outputs, where the weights are learned by a neural network conditioned on the input. We show that E2M consistently achieves state-of-the-art performance, with its advantages becoming more pronounced at larger sample sizes.
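One plausible reading of that prediction rule, as a sketch under our own simplifications: a network emits softmax weights over the $n$ training responses, and the weighted Fréchet mean is approximated by restricting the minimizer to the training outputs themselves. `WeightNet`, that restriction, and the precomputed squared-distance matrix are illustrative assumptions, not the E2M implementation.

```python
import torch
import torch.nn as nn

class WeightNet(nn.Module):
    """Scores each training output given the input; softmax turns the
    scores into weights for a weighted Fréchet mean."""
    def __init__(self, in_dim: int, n_train: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, n_train))

    def forward(self, x):
        return torch.softmax(self.net(x), dim=-1)

def weighted_frechet_prediction(weights, sq_dists):
    """sq_dists[i, j] = d^2(y_i, y_j) over training outputs. Returns,
    per input, the index of the training output minimizing the
    weighted sum of squared distances (a restricted Fréchet mean)."""
    return (weights @ sq_dists).argmin(dim=-1)
```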
arXiv Detail & Related papers (2025-09-28T00:46:12Z)
- Fréchet Cumulative Covariance Net for Deep Nonlinear Sufficient Dimension Reduction with Random Objects [22.156257535146004]
We introduce a new statistical dependence measure termed Fréchet Cumulative Covariance (FCCov) and develop a novel nonlinear SDR framework based on FCCov. Our approach is not only applicable to complex non-Euclidean data, but also exhibits robustness against outliers. We prove that our method with squared Frobenius norm regularization achieves unbiasedness at the $\sigma$-field level.
arXiv Detail & Related papers (2025-02-21T10:55:50Z) - Deep Fréchet Regression [4.915744683251151]
We propose a flexible regression model capable of handling high-dimensional predictors without imposing parametric assumptions. The proposed model outperforms existing methods for non-Euclidean responses.
arXiv Detail & Related papers (2024-07-31T07:54:14Z)
- Universal Approximation and the Topological Neural Network [0.0]
A topological neural network (TNN) takes data from a Tychonoff topological space instead of the usual finite-dimensional space.
A distributional neural network (DNN) that takes Borel measures as data is also introduced.
arXiv Detail & Related papers (2023-05-26T05:28:10Z)
- Information Bottleneck Analysis of Deep Neural Networks via Lossy Compression [37.69303106863453]
The Information Bottleneck (IB) principle offers an information-theoretic framework for analyzing the training process of deep neural networks (DNNs).
In this paper, we introduce a framework for conducting IB analysis of general NNs.
We also perform IB analysis at a close-to-real scale, which reveals new features of the MI dynamics.
arXiv Detail & Related papers (2023-05-13T21:44:32Z)
- Learning Low Dimensional State Spaces with Overparameterized Recurrent Neural Nets [57.06026574261203]
We provide theoretical evidence for learning low-dimensional state spaces, which can also model long-term memory.
Experiments corroborate our theory, demonstrating extrapolation via learning low-dimensional state spaces with both linear and non-linear RNNs.
arXiv Detail & Related papers (2022-10-25T14:45:15Z)
- Regression modelling of spatiotemporal extreme U.S. wildfires via partially-interpretable neural networks [0.0]
We propose a new methodological framework for performing extreme quantile regression using artificial neural networks.
We unify linear and additive regression methodology with deep learning to create partially-interpretable neural networks.
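A minimal sketch of that additive split, assuming predictors partitioned into an interpretable block with a linear effect and a flexible block handled by a small network; the extreme-quantile loss and the spatiotemporal structure of the paper are omitted, and the class name is ours.

```python
import torch
import torch.nn as nn

class PartiallyInterpretableNet(nn.Module):
    """Additive model y_hat = beta' x_lin + g(x_deep): the linear
    coefficients stay readable while the deep part absorbs the rest."""
    def __init__(self, lin_dim: int, deep_dim: int):
        super().__init__()
        self.linear = nn.Linear(lin_dim, 1)
        self.deep = nn.Sequential(
            nn.Linear(deep_dim, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, x_lin, x_deep):
        return self.linear(x_lin) + self.deep(x_deep)
```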
arXiv Detail & Related papers (2022-08-16T07:42:53Z)
- Comparative Analysis of Interval Reachability for Robust Implicit and Feedforward Neural Networks [64.23331120621118]
We use interval reachability analysis to obtain robustness guarantees for implicit neural networks (INNs).
INNs are a class of implicit learning models that use implicit equations as layers.
We show that our approach performs at least as well as, and generally better than, applying state-of-the-art interval bound propagation methods to INNs.
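For context on the baseline mentioned there, interval bound propagation pushes an axis-aligned box through a network layer by layer in center-radius form; the following one-layer sketch is our own toy illustration, not the paper's reachability analysis for INNs.

```python
import torch

def ibp_affine_relu(center, radius, W, b):
    """Propagate the box [center - radius, center + radius] through
    x -> relu(W x + b) using center-radius interval arithmetic."""
    c = center @ W.T + b      # affine map of the box center
    r = radius @ W.abs().T    # |W| gives the worst-case radius growth
    lo, hi = torch.relu(c - r), torch.relu(c + r)  # ReLU is monotone
    return (lo + hi) / 2, (hi - lo) / 2

# Toy usage: a 0.1-radius box around the origin.
W, b = torch.randn(4, 3), torch.randn(4)
center_out, radius_out = ibp_affine_relu(
    torch.zeros(3), 0.1 * torch.ones(3), W, b)
```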
arXiv Detail & Related papers (2022-04-01T03:31:27Z)
- Quasi-orthogonality and intrinsic dimensions as measures of learning and generalisation [55.80128181112308]
We show that the dimensionality and quasi-orthogonality of a neural network's feature space may jointly serve as discriminants of its performance.
Our findings suggest important relationships between the networks' final performance and properties of their randomly initialised feature spaces.
arXiv Detail & Related papers (2022-03-30T21:47:32Z)
- FF-NSL: Feed-Forward Neural-Symbolic Learner [70.978007919101]
This paper introduces a neural-symbolic learning framework called the Feed-Forward Neural-Symbolic Learner (FF-NSL).
FF-NSL integrates state-of-the-art ILP systems based on Answer Set semantics with neural networks in order to learn interpretable hypotheses from labelled unstructured data.
arXiv Detail & Related papers (2021-06-24T15:38:34Z)
- Modeling from Features: a Mean-field Framework for Over-parameterized Deep Neural Networks [54.27962244835622]
This paper proposes a new mean-field framework for over-parameterized deep neural networks (DNNs).
In this framework, a DNN is represented by probability measures and functions over its features in the continuous limit.
We illustrate the framework via the standard DNN and the Residual Network (Res-Net) architectures.
arXiv Detail & Related papers (2020-07-03T01:37:16Z)
- Provably Efficient Neural Estimation of Structural Equation Model: An Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these networks using gradient descent.
For the first time, we provide a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
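As a hedged illustration of such a min-max game (not the paper's exact objective), the standard adversarial form of a conditional-moment restriction $\mathbb{E}[y - f(x) \mid z] = 0$ alternates gradient ascent on a critic $g$ with descent on the estimator $f$; the data and architectures below are placeholders.

```python
import torch
import torch.nn as nn

f = nn.Sequential(nn.Linear(5, 32), nn.ReLU(), nn.Linear(32, 1))  # estimator
g = nn.Sequential(nn.Linear(3, 32), nn.ReLU(), nn.Linear(32, 1))  # critic
opt_f = torch.optim.Adam(f.parameters(), lr=1e-3)
opt_g = torch.optim.Adam(g.parameters(), lr=1e-3)

# Placeholder data; in an SEM, z would hold instruments tied to x.
x, z = torch.randn(512, 5), torch.randn(512, 3)
y = x[:, :1] + 0.1 * torch.randn(512, 1)

def objective():
    # max_g E[(y - f(x)) g(z)] - 0.5 E[g(z)^2] penalizes violations
    # of the moment condition E[y - f(x) | z] = 0.
    return ((y - f(x)) * g(z)).mean() - 0.5 * (g(z) ** 2).mean()

for _ in range(200):
    opt_g.zero_grad()
    (-objective()).backward()  # ascent step on the critic g
    opt_g.step()
    opt_f.zero_grad()
    objective().backward()     # descent step on the estimator f
    opt_f.step()
```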
arXiv Detail & Related papers (2020-07-02T17:55:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.