Structural Compositional Function Networks: Interpretable Functional Compositions for Tabular Discovery
- URL: http://arxiv.org/abs/2601.20037v1
- Date: Tue, 27 Jan 2026 20:20:07 GMT
- Title: Structural Compositional Function Networks: Interpretable Functional Compositions for Tabular Discovery
- Authors: Fang Li
- Abstract summary: We propose Structural Compositional Function Networks (StructuralCFN), a novel architecture that imposes a Relation-Aware Inductive Bias via a differentiable structural prior.
Our framework enables Structured Knowledge Integration, allowing domain-specific relational priors to be injected directly into the architecture to guide discovery.
We evaluate StructuralCFN across a rigorous 10-fold cross-validation suite on 18 benchmarks, demonstrating statistically significant improvements.
- Score: 4.8369208007394215
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite the ubiquity of tabular data in high-stakes domains, traditional deep learning architectures often struggle to match the performance of gradient-boosted decision trees while maintaining scientific interpretability. Standard neural networks typically treat features as independent entities, failing to exploit the inherent manifold structural dependencies that define tabular distributions. We propose Structural Compositional Function Networks (StructuralCFN), a novel architecture that imposes a Relation-Aware Inductive Bias via a differentiable structural prior. StructuralCFN explicitly models each feature as a mathematical composition of its counterparts through Differentiable Adaptive Gating, which automatically discovers the optimal activation physics (e.g., attention-style filtering vs. inhibitory polarity) for each relationship. Our framework enables Structured Knowledge Integration, allowing domain-specific relational priors to be injected directly into the architecture to guide discovery. We evaluate StructuralCFN across a rigorous 10-fold cross-validation suite on 18 benchmarks, demonstrating statistically significant improvements (p < 0.05) on scientific and clinical datasets (e.g., Blood Transfusion, Ozone, WDBC). Furthermore, StructuralCFN provides Intrinsic Symbolic Interpretability: it recovers the governing "laws" of the data manifold as human-readable mathematical expressions while maintaining a compact parameter footprint (300--2,500 parameters) that is over an order of magnitude (10x--20x) smaller than standard deep baselines.
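The abstract describes the architecture but the listing carries no code; as a reading aid, here is a minimal sketch of what a relation-aware compositional layer with differentiable adaptive gating could look like. Every name and design choice below (the class, the tanh/sigmoid gating, the excitatory/inhibitory split) is an illustrative assumption, not the authors' implementation.

```python
import torch
import torch.nn as nn

class AdaptiveGatedComposition(nn.Module):
    """Hypothetical layer: each feature is updated as a gated composition
    of its counterparts. A learnable adjacency acts as the differentiable
    structural prior; a per-edge gate interpolates between attention-style
    filtering and inhibitory polarity."""

    def __init__(self, n_features: int, prior: torch.Tensor = None):
        super().__init__()
        # Optionally seed the structural prior with domain knowledge
        # (the abstract's "Structured Knowledge Integration").
        init = prior.clone() if prior is not None else torch.zeros(n_features, n_features)
        self.adjacency = nn.Parameter(init)
        self.gate_logits = nn.Parameter(torch.zeros(n_features, n_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_features)
        a = torch.tanh(self.adjacency)       # signed relation strengths
        g = torch.sigmoid(self.gate_logits)  # 1 = attention-like, 0 = inhibitory
        excitatory = torch.sigmoid(x) @ (g * a).T        # attention-style filtering
        inhibitory = -torch.relu(x) @ ((1.0 - g) * a).T  # inhibitory polarity
        return x + excitatory + inhibitory
```

Under this reading, thresholding the learned adjacency after training and printing each feature's retained terms is one plausible route to the human-readable expressions the abstract calls Intrinsic Symbolic Interpretability.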
Related papers
- Behavior Learning (BL): Learning Hierarchical Optimization Structures from Data [17.826786061390962]
Behavior Learning (BL) learns interpretable and identifiable optimization structures from data.
BL unifies predictive performance, interpretability, and identifiability, with broad applicability to scientific domains involving optimization.
arXiv Detail & Related papers (2026-02-23T18:59:04Z)
- Structural Disentanglement in Bilinear MLPs via Architectural Inductive Bias [0.0]
We argue that failures arise from how models structure their internal representations during training.
We show analytically that bilinear parameterizations possess a 'non-mixing' property under gradient flow conditions.
Unlike pointwise nonlinear networks, multiplicative architectures are able to recover true operators aligned with the underlying algebraic structure.
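As context for the claim above, a bilinear (multiplicative) layer replaces the pointwise nonlinearity of a standard MLP with an elementwise product of two linear maps; a minimal sketch, with names and dimensions chosen for illustration:

```python
import torch
import torch.nn as nn

class BilinearLayer(nn.Module):
    """For output unit k, y_k = (l_k . x) * (r_k . x) = x^T (l_k r_k^T) x:
    a purely multiplicative unit with no pointwise nonlinearity."""

    def __init__(self, d_in: int, d_out: int):
        super().__init__()
        self.left = nn.Linear(d_in, d_out, bias=False)
        self.right = nn.Linear(d_in, d_out, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Elementwise product of two linear projections of the same input.
        return self.left(x) * self.right(x)
```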
arXiv Detail & Related papers (2026-02-05T13:14:01Z)
- Probability Signature: Bridging Data Semantics and Embedding Structure in Language Models [8.87728727154868]
We propose a set of probability signatures that reflect the semantic relationships among tokens.
We generalize our work to large language models (LLMs) by training the Qwen2.5 architecture on subsets of the Pile corpus.
arXiv Detail & Related papers (2025-09-24T13:49:44Z)
- Structural Equation-VAE: Disentangled Latent Representations for Tabular Data [4.101599614979332]
We introduce SE-VAE (Structural Equation-Variational Autoencoder), a novel architecture that embeds measurement structure directly into the design of a variational autoencoder.
Inspired by structural equation modeling, SE-VAE aligns latent subspaces with known indicator groupings and introduces a global nuisance latent to isolate construct-specific confounding variation.
SE-VAE consistently outperforms alternatives in factor recovery, interpretability, and robustness to nuisance variation.
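A hedged sketch of the measurement structure this summary describes: each latent construct decodes only its own indicator group, while a shared nuisance latent feeds every group. The class name, linear decoders, and dimensions are assumptions for illustration, not the paper's architecture:

```python
import torch
import torch.nn as nn

class GroupedDecoderSketch(nn.Module):
    def __init__(self, groups: list, z_dim: int, nuisance_dim: int, x_dim: int):
        super().__init__()
        self.groups = groups  # indicator indices per latent construct
        self.heads = nn.ModuleList(
            nn.Linear(z_dim + nuisance_dim, len(g)) for g in groups
        )
        self.x_dim = x_dim

    def forward(self, z: torch.Tensor, nuisance: torch.Tensor) -> torch.Tensor:
        # z: (batch, n_constructs, z_dim); nuisance: (batch, nuisance_dim)
        x_hat = z.new_zeros(z.shape[0], self.x_dim)
        for k, (g, head) in enumerate(zip(self.groups, self.heads)):
            inp = torch.cat([z[:, k], nuisance], dim=-1)
            x_hat[:, g] = head(inp)  # construct k reconstructs only its own indicators
        return x_hat
```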
arXiv Detail & Related papers (2025-08-08T14:21:20Z)
- AlphaFold Database Debiasing for Robust Inverse Folding [58.792020809180336]
We introduce a Debiasing Structure AutoEncoder (DeSAE) that learns to reconstruct native-like conformations from intentionally corrupted backbone geometries.
At inference, applying DeSAE to AFDB structures produces debiased structures that significantly improve inverse folding performance.
arXiv Detail & Related papers (2025-06-10T02:25:31Z)
- Deep Copula Classifier: Theory, Consistency, and Empirical Evaluation [0.0]
The Deep Copula Classifier (DCC) is a class-conditional generative model that separates marginal estimation from dependence modeling.
DCC is interpretable, Bayes-consistent, and achieves excess risk $O(n^{-r/(2r+d)})$ for $r$-smooth copulas.
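A simplified stand-in for the idea (not the paper's deep parameterization): fit per-class marginals separately from a copula that captures dependence, then classify by class-conditional log density. The KDE marginals and Gaussian copula below are illustrative assumptions:

```python
import numpy as np
from scipy import stats

class GaussianCopulaDensity:
    """One per class: log density = sum of KDE marginal log densities
    plus a Gaussian-copula correction for the dependence structure."""

    def fit(self, X: np.ndarray):
        n, d = X.shape
        self.kdes = [stats.gaussian_kde(X[:, j]) for j in range(d)]
        # Probability-integral transform to uniforms, then normal scores.
        U = np.column_stack([stats.rankdata(X[:, j]) / (n + 1) for j in range(d)])
        self.corr = np.corrcoef(stats.norm.ppf(U), rowvar=False)
        return self

    def log_density(self, x: np.ndarray) -> float:
        u = np.clip([kde.integrate_box_1d(-np.inf, xi)
                     for kde, xi in zip(self.kdes, x)], 1e-6, 1 - 1e-6)
        z = stats.norm.ppf(u)
        log_marginals = sum(np.log(kde(xi)[0]) for kde, xi in zip(self.kdes, x))
        log_copula = (stats.multivariate_normal.logpdf(z, cov=self.corr)
                      - stats.norm.logpdf(z).sum())
        return log_marginals + log_copula
```

Classification then picks the class maximizing log prior plus log_density, i.e. the Bayes rule under the fitted class-conditional models.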
arXiv Detail & Related papers (2025-05-29T02:07:26Z)
- Structural Entropy Guided Probabilistic Coding [52.01765333755793]
We propose a novel structural entropy-guided probabilistic coding model, named SEPC.
We incorporate the relationship between latent variables into the optimization by proposing a structural entropy regularization loss.
Experimental results across 12 natural language understanding tasks, including both classification and regression tasks, demonstrate the superior performance of SEPC.
arXiv Detail & Related papers (2024-12-12T00:37:53Z)
- TANGOS: Regularizing Tabular Neural Networks through Gradient Orthogonalization and Specialization [69.80141512683254]
We introduce Tabular Neural Gradient Orthogonalization and Specialization (TANGOS).
TANGOS is a novel framework for regularization in the tabular setting built on latent unit attributions.
We demonstrate that our approach can lead to improved out-of-sample generalization performance, outperforming other popular regularization methods.
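A hedged sketch of what a regularizer built on latent unit attributions could look like: compute each latent unit's gradient with respect to the inputs, then penalize pairwise attribution overlap (orthogonalization) and overall attribution density (specialization). The exact weighting and formulation below are assumptions, not the paper's loss:

```python
import torch
import torch.nn.functional as F

def attribution_penalty(encoder, x, lam_orth=1.0, lam_spec=0.1):
    x = x.requires_grad_(True)
    h = encoder(x)  # (batch, n_units) latent activations
    attribs = []
    for k in range(h.shape[1]):
        # Gradient of unit k w.r.t. the inputs; keep the graph so the
        # penalty itself is differentiable.
        g, = torch.autograd.grad(h[:, k].sum(), x,
                                 create_graph=True, retain_graph=True)
        attribs.append(g)
    A = torch.stack(attribs, dim=1)        # (batch, n_units, n_features)
    A_unit = F.normalize(A, dim=-1)
    cos = A_unit @ A_unit.transpose(1, 2)  # pairwise attribution similarity
    off_diag = cos - torch.diag_embed(torch.diagonal(cos, dim1=1, dim2=2))
    orthogonalization = off_diag.abs().mean()
    specialization = A.abs().mean()
    return lam_orth * orthogonalization + lam_spec * specialization
```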
arXiv Detail & Related papers (2023-03-09T18:57:13Z)
- Discrete Latent Structure in Neural Networks [32.41642110537956]
This text explores three broad strategies for learning with discrete latent structure.
We show how most approaches consist of the same small set of fundamental building blocks but use them differently, leading to substantially different applicability and properties.
arXiv Detail & Related papers (2023-01-18T12:30:44Z)
- Differentiable and Transportable Structure Learning [73.84540901950616]
We introduce D-Struct, which recovers transportability in the discovered structures through a novel architecture and loss function.
Because D-Struct remains differentiable, our method can be easily adopted in existing differentiable architectures.
arXiv Detail & Related papers (2022-06-13T17:50:53Z)
- On Neural Architecture Inductive Biases for Relational Tasks [76.18938462270503]
We introduce a simple architecture based on similarity-distribution scores, which we name CoRelNet.
We find that simple architectural choices can outperform existing models in out-of-distribution generalization.
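A minimal sketch of a similarity-distribution architecture in this spirit: the classifier sees only the softmaxed matrix of pairwise object similarities, never the object embeddings themselves. The encoder, head, and sizes are illustrative assumptions:

```python
import torch
import torch.nn as nn

class SimilarityDistributionNet(nn.Module):
    def __init__(self, d_obj: int, n_objects: int, n_classes: int):
        super().__init__()
        self.encode = nn.Linear(d_obj, d_obj)
        self.head = nn.Linear(n_objects * n_objects, n_classes)

    def forward(self, objs: torch.Tensor) -> torch.Tensor:
        # objs: (batch, n_objects, d_obj)
        z = self.encode(objs)
        sims = z @ z.transpose(1, 2)                # pairwise inner products
        rel = torch.softmax(sims, dim=-1)           # similarity distribution
        return self.head(rel.flatten(start_dim=1))  # decide from relations only
```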
arXiv Detail & Related papers (2022-06-09T16:24:01Z)
- Amortized Inference for Causal Structure Learning [72.84105256353801]
Learning causal structure poses a search problem that typically involves evaluating structures using a score or independence test.
We train a variational inference model to predict the causal structure from observational/interventional data.
Our models exhibit robust generalization capabilities under substantial distribution shift.
arXiv Detail & Related papers (2022-05-25T17:37:08Z)