Mapping Cardinality-based Feature Models to Weighted Automata over Featured Multiset Semirings (Extended Version)
- URL: http://arxiv.org/abs/2407.04499v1
- Date: Fri, 5 Jul 2024 13:40:25 GMT
- Title: Mapping Cardinality-based Feature Models to Weighted Automata over Featured Multiset Semirings (Extended Version)
- Authors: Robert Müller, Mathis Weiß, Malte Lochau
- Abstract summary: Cardinality-based feature models permit selecting multiple copies of the same feature.
We propose a behavioral variability modeling formalism for cardinality-based feature models.
- Score: 2.2708009467351844
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Cardinality-based feature models permit the selection of multiple copies of the same feature, thus generalizing the notion of product configurations from subsets of Boolean features to multisets of feature instances. This increased expressiveness shapes a priori infinite and non-convex configuration spaces, which renders established solution-space mappings based on Boolean presence conditions insufficient for cardinality-based feature models. To address this issue, we propose weighted automata over featured multiset semirings as a novel behavioral variability modeling formalism for cardinality-based feature models. The formalism uses multisets over features as a predefined semantic domain for transition weights. It permits using any algebraic structure that forms a proper semiring on multisets to aggregate the weights traversed along paths, thereby mapping accepted words to multiset configurations. In particular, tropical semirings constitute a promising subclass with a reasonable trade-off between expressiveness and computational tractability of canonical analysis problems. The formalism is strictly more expressive than featured transition systems, as it enables upper-bound multiplicity constraints that depend on the length of words. We provide a tool implementation of the behavioral variability model and present preliminary experimental results showing the applicability and computational feasibility of the proposed approach.
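To make the formalism tangible, here is a minimal sketch (a hypothetical rendering, not the authors' tool): transition weights are multisets of feature instances, combined with one admissible semiring choice that lifts the tropical semiring pointwise. Along a path, the semiring "times" adds multisets; across alternative runs, the semiring "plus" takes the pointwise minimum, so each accepted word maps to a cheapest multiset configuration. The example automaton and feature names (Server, Worker) are invented for illustration.

```python
from collections import Counter

def mtimes(a, b):
    """Semiring 'times': accumulate feature instances along a path."""
    return a + b  # Counter addition = pointwise sum of multiplicities

def mplus(a, b):
    """Semiring 'plus': pointwise minimum over alternative runs (tropical-style)."""
    if a is None:
        return b
    if b is None:
        return a
    return Counter({k: min(a[k], b[k]) for k in set(a) | set(b)})

# Hypothetical example automaton: each 'req' taken in state 1 demands one
# more Worker instance, so longer words map to larger multiset configurations.
transitions = {
    0: {"req": [(1, Counter({"Server": 1}))]},
    1: {"req": [(1, Counter({"Worker": 1}))],
        "done": [(2, Counter())]},
}
initial, accepting = 0, {2}

def configuration_of(word):
    current = {initial: Counter()}  # best multiset reaching each state so far
    for sym in word:
        nxt = {}
        for state, acc in current.items():
            for target, weight in transitions.get(state, {}).get(sym, []):
                nxt[target] = mplus(nxt.get(target), mtimes(acc, weight))
        current = nxt
    result = None
    for state, m in current.items():
        if state in accepting:
            result = mplus(result, m)
    return result  # None if the word is rejected

print(configuration_of(["req", "req", "req", "done"]))
# Counter({'Worker': 2, 'Server': 1})
```

Because every further req demands another Worker instance, longer words induce larger multisets, illustrating the length-dependent multiplicity constraints that featured transition systems cannot express.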
Related papers
- GrootVL: Tree Topology is All You Need in State Space Model [66.36757400689281]
GrootVL is a versatile multimodal framework that can be applied to both visual and textual tasks.
Our method significantly outperforms existing structured state space models on image classification, object detection and segmentation.
By fine-tuning large language models, our approach achieves consistent improvements in multiple textual tasks at minor training cost.
arXiv Detail & Related papers (2024-06-04T15:09:29Z)
- Continuous Language Model Interpolation for Dynamic and Controllable Text Generation [7.535219325248997]
We focus on the challenging case where the model must dynamically adapt to diverse -- and often changing -- user preferences.
We leverage adaptation methods based on linear weight interpolation, casting them as continuous multi-domain interpolators (see the sketch after this entry).
We show that varying the weights yields predictable and consistent change in the model outputs.
arXiv Detail & Related papers (2024-04-10T15:55:07Z)
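The linear weight interpolation idea above fits in a few lines of PyTorch. This is a minimal sketch; the function and checkpoint names are assumptions, not the paper's implementation:

```python
import torch

def interpolate_state_dicts(sd_a, sd_b, alpha):
    """Convex combination of two fine-tuned checkpoints of one architecture.

    alpha = 0.0 gives model A, alpha = 1.0 gives model B; intermediate
    values trace a continuous path between the two behaviors.
    Assumes floating-point parameters with identical names and shapes.
    """
    assert sd_a.keys() == sd_b.keys(), "checkpoints must share an architecture"
    return {name: torch.lerp(sd_a[name], sd_b[name], alpha) for name in sd_a}

# Hypothetical usage: formal_lm and casual_lm are fine-tunes of one base model.
# merged = interpolate_state_dicts(formal_lm.state_dict(), casual_lm.state_dict(), 0.3)
# base_lm.load_state_dict(merged)
```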
- A Variational Inference Approach to Inverse Problems with Gamma Hyperpriors [60.489902135153415]
This paper introduces a variational iterative alternating scheme for hierarchical inverse problems with gamma hyperpriors.
The proposed variational inference approach yields accurate reconstruction, provides meaningful uncertainty quantification, and is easy to implement.
arXiv Detail & Related papers (2021-11-26T06:33:29Z)
- Multiway Non-rigid Point Cloud Registration via Learned Functional Map Synchronization [105.14877281665011]
We present SyNoRiM, a novel way to register multiple non-rigid shapes by synchronizing the maps relating learned functions defined on the point clouds.
We demonstrate via extensive experiments that our method achieves a state-of-the-art performance in registration accuracy.
arXiv Detail & Related papers (2021-11-25T02:37:59Z)
- Joint Continuous and Discrete Model Selection via Submodularity [1.332560004325655]
In model selection problems for machine learning, the desire for a well-performing model with meaningful structure is typically expressed through a regularized optimization problem.
In many scenarios, however, meaningful structure is specified in some discrete space, leading to difficult non-convex optimization problems.
We show how simple continuous or discrete constraints can also be handled for certain problem classes, motivated by robust optimization.
arXiv Detail & Related papers (2021-02-17T21:14:47Z)
- Deep Conditional Transformation Models [0.0]
Learning the cumulative distribution function (CDF) of an outcome variable conditional on a set of features remains challenging.
Conditional transformation models provide a semi-parametric approach that allows modeling a large class of conditional CDFs (see the sketch after this entry).
We propose a novel network architecture, provide details on different model definitions and derive suitable constraints.
arXiv Detail & Related papers (2020-10-15T16:25:45Z)
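As a toy illustration of the transformation-model idea, a conditional CDF can be written as F(y | x) = sigmoid(h(y, x)) with h monotone in y. The sketch below enforces monotonicity with softplus-positive slopes; the way x enters and all names are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def conditional_cdf(y, x_shift, raw_coefs, knots):
    """Toy conditional transformation model: F(y | x) = sigmoid(h(y, x)).

    Monotonicity of h in y comes from softplus-positive slopes on a
    piecewise-linear basis; x enters here only as an additive shift
    (a real model would predict the coefficients from x with a network).
    """
    slopes = np.log1p(np.exp(raw_coefs))             # softplus => positive slopes
    h = np.sum(slopes * np.maximum(y - knots, 0.0))  # monotone in y by construction
    return sigmoid(h - 4.0 + x_shift)                # fixed offset, purely illustrative

knots = np.linspace(0.0, 10.0, 8)
raw_coefs = np.zeros(8)                              # softplus(0) = log 2 per segment
for y in (0.0, 3.0, 6.0, 9.0):
    print(y, round(float(conditional_cdf(y, 0.5, raw_coefs, knots)), 3))
```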
- From Sets to Multisets: Provable Variational Inference for Probabilistic Integer Submodular Models [82.95892656532696]
Submodular functions have been studied extensively in machine learning and data mining.
In this work, we propose a continuous DR-submodular extension for integer submodular functions.
We formulate a new probabilistic model which is defined through integer submodular functions.
arXiv Detail & Related papers (2020-06-01T22:20:45Z)
- Feature Transformation Ensemble Model with Batch Spectral Regularization for Cross-Domain Few-Shot Classification [66.91839845347604]
We propose an ensemble prediction model by performing diverse feature transformations after a feature extraction network.
We use a batch spectral regularization term that suppresses the singular values of the feature matrix during pre-training, improving the generalization ability of the model (see the sketch after this entry).
The proposed model can then be fine-tuned in the target domain to address few-shot classification.
arXiv Detail & Related papers (2020-05-18T05:31:04Z)
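A minimal sketch of a batch spectral penalty, assuming a squared penalty on the top-k singular values (the exact loss form and weighting are assumptions, not necessarily the paper's):

```python
import torch

def batch_spectral_penalty(features, k=2):
    """Sum of squared top-k singular values of a (batch, dim) feature matrix.

    torch.linalg.svdvals is differentiable, so the penalty can be added
    straight into the training loss to suppress the leading singular values.
    """
    sv = torch.linalg.svdvals(features)  # singular values, descending order
    return (sv[:k] ** 2).sum()

features = torch.randn(32, 128, requires_grad=True)  # stand-in extractor output
task_loss = features.pow(2).mean()                   # placeholder task loss
total = task_loss + 1e-3 * batch_spectral_penalty(features)
total.backward()
```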
- Improve Variational Autoencoder for Text Generation with Discrete Latent Bottleneck [52.08901549360262]
Variational autoencoders (VAEs) are essential tools in end-to-end representation learning.
When paired with a strong auto-regressive decoder, VAEs tend to ignore the latent variables.
We propose a principled approach to enforce an implicit latent feature matching in a more compact latent space.
arXiv Detail & Related papers (2020-04-22T14:41:37Z)
- Deep Multi-Modal Sets [29.983311598563542]
Deep Multi-Modal Sets is a technique that represents a collection of features as an unordered set rather than one long ever-growing fixed-size vector.
We demonstrate a scalable, multi-modal framework that reasons over different modalities to learn various types of tasks (a pooling sketch follows).
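A minimal sketch of the set-based fusion idea above, assuming max pooling within and across modalities (the pooling choice, shapes, and names are illustrative assumptions):

```python
import torch

def fuse_modalities(feature_sets):
    """Fuse an unordered, variable-size collection of per-modality features.

    feature_sets maps modality name -> (n_i, dim) tensor with a varying
    number n_i of feature vectors. Symmetric max pooling makes the fused
    vector invariant to ordering and set size, instead of concatenating
    everything into one ever-growing vector.
    """
    pooled = [feats.max(dim=0).values for feats in feature_sets.values()]
    return torch.stack(pooled).max(dim=0).values  # (dim,)

example = {
    "image": torch.randn(5, 128),   # e.g. 5 region features
    "text": torch.randn(12, 128),   # e.g. 12 token features
}
print(fuse_modalities(example).shape)  # torch.Size([128])
```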
arXiv Detail & Related papers (2020-03-03T15:48:44Z)
- Representing Unordered Data Using Complex-Weighted Multiset Automata [23.68657135308002]
We show how the multiset representations of certain existing neural architectures can be viewed as special cases of ours.
Namely, we provide a new theoretical and intuitive justification for the Transformer model's representation of positions using sinusoidal functions (see the sketch after this entry).
We extend the DeepSets model to use complex numbers, enabling it to outperform the existing model on an extension of one of their tasks.
arXiv Detail & Related papers (2020-01-02T20:04:45Z)
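The sinusoidal-position connection in the last entry can be sketched in a few lines: unit-modulus complex diagonal weights raised to the p-th power reproduce the familiar cos/sin position encodings (an illustrative rendering, not the paper's construction):

```python
import numpy as np

d, max_pos = 8, 4
freqs = 1.0 / (10000.0 ** (np.arange(d // 2) / (d // 2)))  # Transformer frequencies
lam = np.exp(1j * freqs)  # unit-modulus diagonal "transition weights"

# Reading position p as p-fold application of the diagonal weights, the
# encoding is lam ** p; real and imaginary parts are the cos/sin pairs.
for p in range(max_pos):
    z = lam ** p
    print(p, np.round(np.concatenate([z.real, z.imag]), 3))
```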
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.